Third Edition
A Guide for Training Professionals
Ruth Colvin Clark
© 2020 ASTD DBA the Association for Talent Development (ATD). All rights reserved. Printed in the United States of America.
No part of this publication may be reproduced, distributed, or transmitted in any form or by any means, including photocopying, recording, information storage and retrieval systems, or other electronic or mechanical methods, without the prior written permission of the publisher, except in the case of brief quotations embodied in critical reviews and certain other noncommercial uses permitted by copyright law. For permission requests, please go to www.copyright.com, or contact Copyright Clearance Center (CCC), 222 Rosewood Drive, Danvers, MA 01923 (telephone: 978.750.8400; fax: 978.646.8600).

ATD Press is an internationally renowned source of insightful and practical information on talent development, training, and professional development.

ATD Press
1640 King Street
Alexandria, VA 22314 USA

Ordering information: Books published by ATD Press can be purchased by visiting ATD's website at www.td.org/books or by calling 800.628.2783 or 703.683.8100.

Library of Congress Control Number: 2019956104
ISBN-10: 1-949036-57-X
ISBN-13: 978-1-949036-57-2
e-ISBN: 978-1-949036-58-9

ATD Press Editorial Staff
Director: Sarah Halgas
Manager: Melissa Jones
Community of Practice Manager, Science of Learning: Justin Brusino
Developmental Editor: Kathryn Stafford
Production Editor: Hannah Sternberg
Text Design: Michelle Jose
Cover Design: Rose Richey

Printed by P. A. Hutchinson Company, Mayfield, PA
Contents

Preface: Why Evidence-Based Training Methods, Third Edition?

Part 1: Evidence-Based Practice and Learning
Chapter 1: Training Fads and Fables
Chapter 2: What Is Evidence-Based Practice?
Chapter 3: How People Learn
Chapter 4: Active Learning

Part 2: Evidence-Based Use of Graphics, Text, and Audio
Chapter 5: Visualize Your Content
Chapter 6: Learning From Animations
Chapter 7: Explaining Visuals
Chapter 8: Make Learning Personable
Chapter 9: When Less Is More

Part 3: Evidence-Based Use of Examples and Practice
Chapter 10: Accelerate Expertise With Examples
Chapter 11: Does Practice Make Perfect?
Chapter 12: Feedback to Optimize Learning and Motivation

Part 4: Evidence-Based Lessons and Games
Chapter 13: Give Effective Explanations
Chapter 14: Teaching Procedures
Chapter 15: Teaching Critical Thinking With Problem Scenarios
Chapter 16: Digital Games for Workforce Learning

Part 5: Evidence-Based Principles of Instruction
Chapter 17: Evidence-Based Methods Aligned to Training Development
Appendix: A Synopsis of Evidence-Based Instructional Methods
References
About the Author
Index
Preface: Why Evidence-Based Training Methods, Third Edition?

I wrote the first edition of Evidence-Based Training Methods because there was a wealth of reports from research scientists in the academic literature. I believed then, and still believe today, that much of this evidence remains unknown to practitioners. Academic research professionals and workforce learning practitioners constitute two quite separate communities of practice; there is little overlap in their publications and conferences. Most practitioners lack the time to search, read, and synthesize the many research reports available. Plus, although many research papers do give some guidance for practitioners, guidance is not their main goal. I believe practitioners need not only guidelines but also examples that illustrate how to implement those guidelines.

Naturally, research continues to evolve. Fortunately, the science of instruction and learning does not move as quickly as, for example, medical research. However, many of the guidelines in the second edition needed updating. In the past few years, the research community has broadened its inquiry to evaluate not only immediate learning but also delayed learning and motivation. Sometimes an instructional method that may not offer big learning advantages but is highly motivational is worth implementing. Two new chapters in this edition focus on feedback and animations. Some topics, such as games, have inspired a great deal of recent research, leading to updated chapters. I am also encouraged by a continued interest in evidence-based guidelines among practitioners—especially
those in the allied health professions, stimulated by the focus on evidence-based medicine. Finally, what author does not look back on their previous writing and want to improve it? A third edition has offered me the opportunity to pursue all these goals.
What's in This Book?

This book is organized from smaller to larger instructional elements. Following the introductory chapters, I focus in part 2 on evidence on the use of the basic modes of communication, including graphics (still and animated), text, and audio. Part 3 looks at evidence on a less granular level by reviewing three important instructional methods: examples, practice, and feedback. Finally, in part 4 I take a more macro view of lesson design with guidelines for explanations, teaching procedures, and building critical thinking skills. The book ends with an updated chapter on games and a recap of many evidence-based guidelines as they apply to your instructional design and development processes.

Each chapter includes introductory questions about the instructional method, some evidence on those questions, guidelines based on the evidence, and a short application checklist at the end. For a quick overview, go to the appendix to see a high-level checklist of guidelines, then go back to the specific chapters that review the evidence regarding any guidelines of particular interest to you.
Limits of the Book

There are many topics of interest in our field and you might wonder why certain topics are not addressed. My selection of topics is guided by the evidence available and by my ability to create a coherent set of guidelines around that evidence.
No one person can claim to be cognizant of all relevant evidence. My apologies for omissions. Nor can I claim a flawless interpretation of the evidence I do review. In this edition I have cited the evidence sources. These citations provide you the opportunity to review the evidence firsthand and draw your own conclusions.

—Ruth Colvin Clark, 2020
[email protected]
Part 1
Evidence-Based Practice and Learning
Chapter 1
Training Fads and Fables

Blood, Phlegm, Black Bile, and Yellow Bile
Training Mythology and Investments in Learning
Training Myth #1: Learning Styles
Training Myth #2: Media Panaceas
Training Myth #3: The More They Like It, the More They Learn
Training Myth #4: Learners Are Good Judges of Their Training Needs
Training Myth #5: Active Engagement Is Essential to Learning
Training Myth #6: Games, Stories, and Simulations Promote Learning
Applying Evidence-Based Practice to Your Training
Do you talk on your cell phone (handheld or hands free) while driving? If yes, you are not alone. At any given moment about 7 percent of all drivers are using their phones (Zebra 2019). According to the National Safety Council, about a quarter of all crashes involve cell phone conversations. Evidence shows that even hands-free cell phones are potentially lethal distractions, putting you at four times greater risk of a crash.

As of early 2009, when the first edition of this book was written, five states had banned handheld phones while driving. As I write this updated edition 10 years later, 20 states have similar bans (National Conference of State Legislatures). In the first half of 2019, Arizona enacted a cell phone ban. The legislation was not prompted primarily by evidence but rather by the well-publicized death of a local patrol officer killed by a driver who was texting at the time. The journey from evidence to application of evidence is often slow, and workforce learning is no exception. This chapter will show how applying evidence to your instructional programs and products can save your organization the time and money wasted on training fads that don't work.
Blood, Phlegm, Black Bile, and Yellow Bile

Our story starts in the early 1600s—the birth years of evidence-based practice. Prior to 1628, people believed that blood was produced by the heart and the liver and was continuously used up by the body. In other words, there was no accurate conception of blood circulation. William Harvey introduced the revolutionary idea that blood was not consumed by the body. Based on measures of blood volume and anatomical observations, he proposed that blood was pumped from the heart and circulated throughout the body, returning again to the heart. Harvey, along with Galileo, Descartes, and others, turned
the 17th century world upside down by advocating evidence and reason—rather than traditional wisdom and faith—as the basis for knowledge and decisions.

We've come a long way from the days when medical diagnosis and treatments were based on a balance of the four body humors of blood, phlegm, black bile, and yellow bile. If you were lucky, your treatment prescribed an amulet, which at least did no harm. If you were not so lucky, you were subjected to bloodletting. Although great strides were made in medical science, more than 400 years passed before health science professionals formally adopted evidence-based practice.

Old habits die hard. Even though we've had evidence about the dangers of cell phones while driving for more than 20 years, that data is still being translated into policy changes. To see the latest updates on use of technology while driving, search the websites of the National Safety Council and the Insurance Institute for Highway Safety.
What Do You Think?

See how your current knowledge matches up with evidence. Mark each statement you think is true.
A. To accommodate different learning styles, it's best to explain a visual with words presented in text and in audio.
B. Instructor-led classroom training results in better learning than computer-delivered instruction.
C. Courses that get higher student ratings generally produce better learning outcomes.
D. Learners make accurate decisions about their instructional needs.
E. Active engagement is essential to learning.
F. Games are effective instructional methods.
Training Mythology and Investments in Learning

How much do you think is invested in workforce learning? In 2009, when the first edition of this book was published, average spending was a little more than $1,000 per employee. In 2016, spending rose to $1,273 (ATD 2017). This is a low figure because it does not take into account one of the biggest expenses of training—worker production time lost during training events. No doubt, the organizations you work with make large investments in training.

What kind of return does your organization get on its training investment? Think of the last class that you developed or facilitated. To what extent did the content sequencing, training methods, and facilitation techniques of that class promote learning and consequent improvements in quality, efficiency, safety, and other bottom-line metrics? I'm not surprised if you do not know the return your organization receives; few organizations capture this data. Many common training practices are based more on fads and fables than on evidence of what works. This chapter will review several popular training myths and the facts that debunk them.
Training Myth #1: Learning Styles

Are you a visual or auditory learner? Has your organization invested resources in learning styles? Like the four body humors of blood, phlegm, yellow bile, and black bile, I think learning styles represent one of the most wasteful, misleading, and pervasive learning myths of the past 50 years. From auditory learners to visual learners or from "sensors" to "intuitives," learning styles come in many flavors. And learning styles have been a profitable movement: a great deal of resources have been devoted to learning-styles books, assessments, and classes. For some reason, the idea of a "learning style" has a
charismatic intuitive appeal that is very compelling. Ask almost anyone whether they are a visual learner or a verbal learner and you will get an immediate commitment to a specific learning style!

The learning style myth leads to some very unproductive training approaches that are counter to modern evidence of what works. For example, many trainers believe that visuals should be described with on-screen text for visual learners and with audio narration for auditory learners. To accommodate both, a visual on a slide is explained with text and with audio narration of that text. As you will see in chapter 7, evidence has shown this practice to depress learning. The time and energy spent perpetuating the various learning style myths can be more wisely invested in supporting an individual difference that is proven to affect learning—namely, the prior knowledge of the learner. If you make one change as a result of reading this book, give up the learning style myth!
Evidence About Learning Styles

Do we have any evidence about learning styles? Kratzig and Arbuthnott (2006) calculated the relationship among three learning style indicators. They asked a group of university students to do three things. First, each participant rated their own learning style as visual, auditory, or kinesthetic. Second, each individual took a learning style test that classified them as a visual, auditory, or kinesthetic learner. Finally, each person was given three tests to measure visual memory, auditory memory, and kinesthetic memory.

If the learning style concept had substance, we would expect to find some positive relationships among these measures. For example, someone who considered themselves a visual learner would score higher on the visual index of a learning styles test and have better
memory for visual information. However, when all of the measures were compared, there were absolutely no relationships! A person who rated themselves an auditory learner was just as likely to score higher on the kinesthetic scale of the learning style test and show best memory for visual data. The research team concluded that "in contrast to learning style theory, it appears that people are able to learn effectively using all three sensory modalities."

A comprehensive review by Pashler and others (2008) concluded that while people do differ regarding aptitudes, "at present there is no adequate evidence base to justify incorporating learning-styles assessments into general educational practice. Thus, limited education resources would better be devoted to adopting other educational practices that have a strong evidence base, of which there are an increasing number." In a recent review, Kirschner (2017) concurred: "The premise that there are learners with different learning styles and that they should receive instruction using different instructional methods that match those styles is not a 'proven' fact, but rather a belief which is backed up by precious little, if any, scientific evidence."

In spite of the lack of evidence for learning styles, this myth is still prevalent in many educational and training environments—another example of how slowly evidence transfers to practice. The lack of evidence about learning styles is the basis for my first recommendation.
Fads and Fables Guideline 1
Do not waste your training resources on any form of learning style products, including instructor training, measurement of learning styles, or books.
Training Myth #2: Media Panaceas

Only a few years ago, computer-delivered instruction incited a revolution in training. Of course, computers were not the first technology to cause a stir. Decades prior to computers, radio, film, and television were hailed as having high potential to revolutionize education. The first widespread dissemination of computer-based training (CBT) was primarily delivered on mainframe computers. Soon, however, advances in digital memory, display hardware, programming software, and Internet distribution catalyzed a rapid evolution of CBT to recent technological panaceas, including social media, digital games, simulations, and immersive virtual reality, to name a few. With each new technology wave, enthusiasts ride the crest with claims that finally there are tools to really revolutionize education and training. And yet, if you have been around for a few of these waves, those claims begin to sound a bit hollow. In just a few years, the latest media hype of today will fade, yielding to the inexorable evolution of technology and a fresh spate of technological hyperbole.

What's wrong with a technology-centric view of instruction? Instructional scientists have learned a lot about how humans learn. Like Harvey, who gave birth to the modern mental model of blood circulation, instructional psychology has revealed the strengths and limits of the human brain, which is the product of thousands of years of evolution. When we plan instruction solely to leverage the latest technology gizmo, we ignore the psychology of human memory, which, as we have learned again with cell phones and driving, has severe limits. In fact, technology today can deliver far more information faster than the human brain can absorb it.
Evidence Against the Technology Panacea

For more than 70 years, instructional scientists have attempted to demonstrate the superiority of each new technology over traditional classroom instruction. One of the first published media comparison studies appeared in the 1940s. The U.S. Army believed it could improve instructional quality and reliability by replacing many instructors with films. To their credit, before setting policy based on this idea, the army tested it. They compared learning a simple procedure from a lesson delivered by film, by instructor, and by print. Each version used similar words and visuals. What do you think they found?
• Instructor-led training led to the best learning.
• Paper-based, the least expensive, led to the best learning.
• Films could replace instructors since they led to the best learning.
• Learning was the same with instructor, print, and film.
The army discovered that participants from all three lesson versions learned the procedure equally well. In technical terms, there were "no significant differences in learning" among the three groups. Since that early experiment, hundreds of studies have compared learning from classroom instruction with learning from the latest technology—the most recent being various forms of digital distance learning. In fact, so many media comparisons have been published that a synthesis of all of the results, called a meta-analysis, found the same basic conclusion that the army reported so many years ago: no major differences in learning from classroom lessons compared to electronic distance learning lessons (Bernard et al. 2004; U.S. Department of Education 2010).

But wait! There is an important caveat to this conclusion. The basic instructional methods must be the same in all versions. In other words, if the classroom version includes graphics and practice exercises, the
computer version must include similar graphics and practice opportunities. That's because what causes learning is the set of psychologically active ingredients in your lessons, regardless of the media you use. Rather than asking which technology is best for learning, you will find more fertile ground by using a blend of media that allows you to space out learning events, provide post-training performance support, and foster synchronous and asynchronous forms of collaboration. In fact, the U.S. Department of Education (2010) found a significant learning advantage for courses using media blends compared to pure classroom-based or pure online learning. The more than 70 years of media comparison research is the basis for my second recommendation.
Fads and Fables Guideline 2
Ignore panaceas disguised as technology solutions; instead, apply proven practices on best use of instructional methods to all media you use to deliver training. Select a mix of media that supports core human psychological learning processes.
As a postscript to this media discussion, what were once considered distinct and separate delivery technologies are increasingly converging. For example, there is now online access to multiple instructional resources. Hand-held mobile devices merge functionalities of computers, newspapers, telephones, cameras, radios, clocks, and context-sensitive performance support, to name a few. Perhaps the media selection discussion will evolve into a discussion of instructional methods, most of which can be delivered by a mix of digital media and in-person instructional environments.
Training Myth #3: The More They Like It, the More They Learn

Do you collect student ratings at the end of your courses? More than 90 percent of all organizations use end-of-training surveys to gather participant evaluations of the quality of the course, the effectiveness of the instructor, how much was learned, and so on. These rating sheets are commonly called "smile sheets" or Level 1 evaluations. If you are an instructor or a course designer, chances are you have reviewed rating sheets from your classes. You might also have a sense of how much learning occurred in that class. Based on your own experience, what do you think is the relationship between participant ratings of a class and the actual learning that occurred?
• Classes that are higher rated also yield greater learning.
• Classes that are higher rated actually yield poorer learning.
• There is no relationship between class ratings and learning from that class.

To answer this question, researchers have collected student satisfaction ratings as well as lesson test scores that measure actual learning. They then evaluated the relationships between the two. For example, they considered whether higher ratings correlated with more learning or less learning.
Evidence on Liking and Learning

A meta-analysis synthesized more than 1,400 student course ratings with correlated student test data. Sitzmann and others (2008) found a positive relationship between ratings and learning. But the correlation was very small! In fact, it was too small to have any practical value. Specifically, the research team concluded that "reactions have a predictive
relationship with cognitive learning outcomes, but the relationship is not strong enough to suggest reactions should be used as an indicator of learning."

Do you think that learners rate lessons with graphics higher than lessons without graphics? Do you think that lessons with graphics support better learning than lessons without graphics? Sung and Mayer (2012a) compared student ratings and learning from lessons that included relevant graphics, distracting graphics, decorative graphics, and no graphics. They found that all of the lessons with graphics got better ratings than lessons lacking visuals even though only the relevant graphics led to better learning. In other words, there was no relationship between liking and learning. The next chapter will look at evidence on graphics and learning in more detail.

Besides graphics, what other factors are associated with higher ratings? The two most important influencers of ratings are instructor style and human interaction. Instructors who are psychologically open and available—in other words, personable instructors—are associated with higher course ratings. In addition, the opportunity to socially interact during the learning event with the instructor as well as with other participants leads to higher ratings (Sitzmann and others 2008).

A 2018 experiment compared a science lesson delivered via immersive virtual reality (IVR) with the same lesson content delivered via a PowerPoint presentation. Which version got better ratings? Which led to more learning? Parong and Mayer (2018) report that the IVR lesson got better ratings, but the slide presentation led to better test outcomes. Evidence from many studies that review the correlation between student ratings and student learning is the basis for my third recommendation.
Fads and Fables Guideline 3
Don't rely on student ratings as indicators of learning effectiveness. Instead, use valid tests to assess the pedagogical effectiveness of any learning environment. Focus on instructional methods that lead both to liking and learning.
Training Myth #4: Learners Are Good Judges of Their Training Needs

One of the potential benefits of e-learning is the opportunity to offer environments that move beyond the one-size-fits-all instruction typical of instructor-led training. Most e-learning courses offer choices, such as which lessons to take in a course, whether to study an example or complete a practice exercise, or how much time to spend on a given topic. E-courses with high levels of such options are considered high in learner control. How effective are courses with high learner control? Do your learners make good decisions regarding how much to study, what to study, and what instructional methods to select?
Evidence on Learner Decisions

More than 20 years of research comparing learning from courses that are learner controlled with courses that offer fewer choices concludes that quite often learners do not make good instructional decisions. Some learners are overly confident in their knowledge and therefore skip elements that in fact they need. A case in point: Hegarty and her associates (2012) asked subjects to compare wind, pressure, or temperature on either a simple or a more complex weather map. The more complex map included geographical detail as well as multiple weather variables not needed to complete the assignment. Task accuracy and efficiency were
better on the simpler maps. However, about a third of the time the subjects chose to use the more complex maps to complete the task.

Dunlosky and Rawson (2012) provided technical term definitions and asked 158 students to judge their level of confidence in recalling the definition correctly. When students judged their response as correct, it was actually correct only 57 percent of the time. In other words, they were overconfident in their knowledge. Participants who were most overconfident retained fewer than 30 percent of the definitions, whereas those who showed little overconfidence during study retained nearly all of the definitions they had practiced. The authors concluded that judgment accuracy matters a great deal for effective learning and durable retention; overconfidence leads to the premature termination of study and to lower levels of retention. When left to their own devices, many students use ineffective methods to monitor their learning, which can produce overconfidence and underachievement.

The overall picture is that many learners do not make accurate assessments of their learning and thus do not make accurate or efficient choices regarding what and how to study. With two exceptions, learners are often poor judges of their skill needs and will need support in courses that offer higher learner control. One exception is learners with higher prior knowledge of the content. As a result of greater background knowledge, these learners usually make better judgments about their learning needs. A second exception is control over pacing. All learners should have the opportunity to manage their rate of progress in e-learning using back and forward progress buttons.

One way to improve outcomes in e-learning is to make important topics and instructional methods, such as examples and practice, a default rather than an option to be selected (Schnackenberg and
Sullivan 2000). In a default lesson, the continue button automatically leads to important instructional methods and the learner will have to consciously choose to bypass them. There are a number of ways to provide guidance to learners to help them more accurately assess their own needs. The bottom line is that many learners new to the content will not make accurate self-assessments of their own knowledge and skills, and overconfidence will lead to underachievement.
Fads and Fables Guideline 4
Don't count on your learners to always make good decisions about their instructional needs. If your course builds in options, accompany those options with guidance.
Training Myth #5: Active Engagement Is Essential to Learning

"Active learning" is one of the most cherished laws of workforce development. As a response to the pervasive use of noninteractive lectures, the training community has pushed active learning as an essential ingredient of effective instruction. By active learning they refer to overt behavioral activities on the part of learners. These include activities such as making content outlines, collaborating on problems, or labeling graphics. However, evidence points to a more nuanced definition of active learning. Engagement is essential, but it is psychological engagement rather than physical engagement that counts. And physical engagement can sometimes interfere with psychological engagement.
The Evidence on Active Engagement

Imagine two groups of learners studying a biology chapter. Group A is provided with a concept map developed by the chapter author as a support guide. Group B is provided with a blank concept map, which the learners are asked to fill in as they read. Clearly, Group B is more actively engaged. However, Group A learned more than Group B (Stull and Mayer 2007). Perhaps individuals in Group B did not complete the map correctly. Alternatively, perhaps the mental activity needed to complete the concept map absorbed cognitive resources needed for learning.

A similar experiment by Leopold and others (2013) compared learning of a science text between learners who developed their own summaries and learners who studied pre-prepared summaries. Best learning occurred among those who studied the predefined summaries. The authors suggest that learners who engaged in behavioral processing may not have engaged in the psychological processing needed for learning. In contrast, those studying a predefined summary had more resources to invest in deeper psychological processing.

Chapter 4 will look more closely at evidence on engagement in learning. For now, I offer the following guideline.
Fads and Fables Guideline 5
Behavioral activity during instruction does not necessarily lead to learning. It is psychological engagement that is most important.
Training Myth #6: Games, Stories, and Simulations Promote Learning

Attend any training conference, look at the latest training books, or check out your favorite social media site. Chances are you will find real
estate devoted to mobile learning, games, immersive virtual reality, simulations, social media, or whatever is the technology or instructional method du jour. Training lore is full of claims and recommendations about the latest training methods like these. What's wrong with these kinds of recommendations?

First, such broad terms make most statements about these techniques close to meaningless. Take games, for instance. Do you mean puzzle games, quiz show games, strategy games, or simulation games? Do you mean individual paper-and-pencil games, video games, or group participation games? As a category, games include so much diversity that it is just about impossible to make any generalizations about their instructional effectiveness. I'll have more to say about games in chapter 16. If you are especially interested in games, feel free to jump there now.
No Yellow Brick Road

Second, even if we narrow down to a fairly specific set of criteria for any given instructional method, its effectiveness will depend upon the intended learning outcome and the learners. Is your goal to build awareness, to help learners memorize content, to teach procedural skills, to motivate, or to promote critical thinking? And what about your learners? Regarding learner differences, prior knowledge (not learning styles!) is the most important factor that moderates the effects of instructional methods. Techniques that help novice learners are not necessarily going to apply to a learner with more expertise.

The lack of universal effectiveness of most instructional techniques is the basis for what I call the No Yellow Brick Road principle. By that I mean that there are few best practices that will work for all learners and for all learning goals. The chapters to follow will
show that much contemporary research focuses on the conditions under which a given technique is most successful. For example, what type of graphic works best for specific learning goals and audience? The evidence that has accumulated over years of research on general categories like graphics and games is the basis for my sixth recommendation.
Fads and Fables Guideline 6
Be skeptical about claims for the universal effectiveness of any instructional technique. Always ask, "How is the technique defined? For whom is it useful? For what kinds of learning outcomes does it work?"
In the 10 years since the first edition of this book was released, research has produced more nuanced recommendations. For example, rather than asking, "Are games effective?" studies focus on how games can best be designed to improve learning. As a result, even more is known today about how to implement instructional methods in ways that best improve learning. The research efforts of the last 50 years provide a foundation for a science of instruction—a science that can offer practitioners a basis for minimizing resources wasted on the myths in favor of practices proven to enhance learning.
The Bottom Line

Let's conclude by revisiting the responses you gave at the start of the chapter:

A. To accommodate different learning styles, it's best to explain a visual with words presented in text and in audio.
False. The benefit of using text and audio together to describe visuals is a common misconception. Chapter 7 examines the evidence and psychology of how to best use words to describe visuals.

B. Instructor-led classroom training results in better learning than computer-delivered instruction.
False. Evidence from hundreds of media comparison studies shows that learning effectiveness does not depend on the delivery medium but rather reflects the best use of basic instructional methods. Because not all media deliver all methods, evidence suggests that blended learning environments are more effective than pure classroom or pure digital learning. Evidence-based best practices for instructional modes and methods will be reviewed in the chapters to follow.

C. Courses that get higher student ratings generally produce better learning outcomes.
True—but only marginally. There is a very small positive relationship between ratings and learning. However, it is too small to draw any conclusions about the learning value of a class from student ratings of that class. For example, students rate a lesson with any kind of graphic higher than a lesson without graphics, regardless of the learning value of the graphics.

D. Learners make accurate decisions about their instructional needs.
False. Many learners, especially those new to a content area, are poor calibrators of their knowledge and skills and, in
instructional environments designed with learner control, may not make optimal learning decisions.

E. Active engagement is essential to learning.
True and false. What's important is psychological engagement that builds job-relevant knowledge and skills. Behavioral engagement can sometimes defeat appropriate psychological engagement, and psychological engagement can occur in the absence of behavioral engagement, such as learning while reading or studying an example. See chapter 4 for more details.

F. Games are effective instructional methods.
False. The effectiveness of any instructional strategy such as a game will depend upon features of the game, the intended learning outcome, and the prior knowledge of the learners. See chapter 16 for more on games.
Applying Evidence-Based Practice to Your Training

The evidence I review in this book can guide your decisions regarding the best instructional methods to use in your training. But more importantly, I will consider the book a success if you become a more critical consumer of the various training recommendations appearing in practitioner articles, social media sites, and conferences. My hope is that the next time you hear or read some generalizations about the latest technology or hot training method you will ask:
• What exactly are the features of the method under discussion?
• What is the evidence for this method?
• How valid is the evidence to support the method?
• For whom is the method most appropriate?
• How does the method fit with our understanding of the limits and strengths of human memory?
Coming Next

To move beyond training myths, I recommend taking an evidence-based approach. What is an evidence-based approach? What kind of evidence should you factor into your training decisions? What are the limits of research data? These questions are the subject of the next chapter.

FOR MORE INFORMATION
Clark, R.E., and D.F. Feldon. 2014. "Six Common but Mistaken Principles of Multimedia Learning." In Cambridge Handbook of Multimedia Learning, 2nd ed., edited by R.E. Mayer, 97-115. Boston, MA: Cambridge Press. The handbook includes many chapters written by researchers that are relevant to workforce learning professionals.
Kirschner, P.A. 2017. "Stop Propagating the Learning Styles Myth." Computers & Education 106:166-171. A concise and clear editorial directed to practitioners.
Pashler, H., M. McDaniel, D. Rohrer, and R. Bjork. 2008. "Learning Styles: Concepts and Evidence." Psychological Science in the Public Interest 9:105-119. A comprehensive and readable review of research on learning styles.
U.S. Department of Education Office of Planning, Evaluation, and Policy Development. 2010. Evaluation of Evidence-Based Practices in Online Learning: A Meta-Analysis and Review of Online Learning Studies. Washington, D.C.: U.S. Department of Education. A very readable update on media comparison research. Available free of charge online.
Chapter 2
What Is Evidence-Based Practice?

Evidence-Based Practice for Instructional Professionals
Academic Versus Practitioner Evidence
Do Graphics Improve Learning? Comparison Experiments
How Is Learning Measured?
Are Graphics Effective for Everyone? Factorial Experiments
Does the Type of Graphic Make a Difference?
Do Learners Like Lessons With Graphics More Than Lessons Without Graphics? A Correlational Study
How Confident Are We About the Positive Effects of Visuals? Synthetic Evidence
The Psychology of Graphics: Eye-Tracking Data
The Psychology of Graphics: Brain Activity Measures
Read the Fine Print: Limits of Evidence-Based Guidelines
How Has Evidence Improved in the Last 10 Years?
Applying Evidence-Based Practice to Your Training
“An evidence-based approach offers the most helpful way to answer questions . . . because it is self-correcting. As research evidence begins to accumulate, we can reject unhelpful accounts of learning . . . and construct more useful ones.” (Mayer 2014)
This book was written to replace myths with evidence in which to ground your instructional decisions. Chapter 1 presented six prevalent training myths; this chapter defines evidence-based practice and looks at some examples of how evidence can guide your instructional decisions. I’ll introduce two categories of evidence—academic research and practitioner research—then focus on the types of evidence I include based on academic research. I’ll use research on graphics to illustrate five basic types of academic research. Graphics are a universal and common instructional method. From photographs to line drawings, from stills to animations, graphics populate PowerPoint slides, training manuals, and e-learning screens. In addition, there are sufficient research studies and research approaches to graphics to illustrate the core tools of academic research that are the foundation for this book.
What Do You Think?

Let's start by evaluating your knowledge of evidence-based use of graphics by marking each statement you think is true.
A. Adding visuals to text improves learning.
B. Some learners benefit from visuals more than others.
C. Some types of visuals are more effective than others.
D. Learners like training materials with graphics.
Evidence-Based Practice for Instructional Professionals

In the last part of the 20th century, the medical profession was the first applied field to formally adopt the incorporation of evidence into clinical decisions. Sackett and colleagues (1996) define evidence-based medicine as the "conscientious, explicit, and judicious use of current best evidence in making decisions about the care of individual patients." How can performance improvement and training specialists adapt this definition to our professional practice? What kinds of evidence are most helpful and what limitations should you consider? This chapter will answer these questions.

Evidence-based practice is the application of data-based guidelines as one factor when making decisions regarding the requirements, design, and development of work and instructional environments designed to optimize individual or organizational goals. Let's review this definition in more detail.
Application of Data-Based Guidelines As One Factor

When making decisions about performance support and training, you must consider many factors including budgets, timelines, and technology. Evidence-based practice recommends that data and guidelines based on research findings are weighed along with these other factors. Taken broadly, relevant data can be quantitative or qualitative and derive from many sources, such as experiments, research reviews, interviews, observations, and surveys.
Decisions Regarding Requirements, Design, and Development of Performance Environments

Workforce learning practitioners, in partnership with their clients, decide what combination of resources will optimize individual and team performance in ways that help organizations achieve operational objectives. Requirements definition involves gathering data that will guide the selection and specification of solutions, including training that will promote organizational goals. For example, to optimize sales results, training and sales professionals may evaluate current sales metrics, interview and observe top sales performers, review social media discussions on best practices, and collect benchmarking data. The analysis of this data may suggest a solution system to include refinements to hiring criteria, training, new or revised reference resources, and alignment of staff goals and incentives to organizational priorities.

Once solutions are identified, evidence is again applied to design and develop those solutions. For example, top sales performers may have developed their own client management and assessment techniques, which can be converted into on-demand training and mobile working aids. In the design and development of training, research-based principles that guide content organization, communication, and learner engagement are applied.
Academic Versus Practitioner Evidence

I recommend two different sources of evidence: academic and practitioner (Figures 2-1 and 2-2). Academic research refers to evidence gathered and published by research professionals using scientific methodologies to ensure validity and reliability. In contrast, practitioner research refers to evidence gathered and disseminated by workforce learning professionals and their clients, typically to support a specific organizational goal or resolve a problem. Practitioner research can be conducted prior to defining solutions, as in a performance assessment; during the design and development of solutions, as in prototyping tests; and after solutions are deployed, as in return-on-investment studies.

Figure 2-1. Five Sources for Academic Research Evidence: experiments, factorial experiments, correlational studies, syntheses, and studies of psychological mediators.

Figure 2-2. Four Sources for Practitioner Research Evidence: performance analysis (interviews, work observations, operational data), design and development (prototype testing, design experiments), evaluation (learning metrics, surveys, work observations), and return on investment (financial benefits, intervention costs).

Most large organizations have processes in place to
assess performance needs and to design and develop solutions. There are a number of resources readily accessible to practitioners that focus on the tools and techniques to gather and analyze this type of evidence.

In contrast, although there is plentiful evidence from academic sources, it is relatively inaccessible to practitioners. Research professionals have their own communities of practice, including conferences and technical research publications. Most practitioners lack the time to gather, review, and interpret academic evidence. Therefore, my goal is to help fill that void by providing guidelines derived from academic research. I will draw primarily on five main research genres: experimental comparisons, factorial experiments, correlational studies, qualitative research such as eye-tracking data, and synthetic research such as a meta-analysis. Note that these are not mutually exclusive, and any given research study may include a combination of two or more of these approaches.

You may not be that interested in the type of experiment that lies behind any given evidence-based claim. However, each method has strengths and weaknesses; as a consumer of evidence-based guidelines, be aware of the tradeoffs among these approaches. The remainder of this chapter will illustrate these approaches with examples of research on graphics in instructional materials.
Do Graphics Improve Learning? Comparison Experiments

Is there any evidence to support the benefits of adding graphics to your instructional materials? In a comparison experiment involving graphics, a researcher creates two lessons with the same words and adds visuals to one of them. Then she randomly assigns learners to each lesson. For example, 25 may take the lesson with graphics and a different 25 take the same lesson minus the graphics. After completing their
lesson version, each learner takes the same test on the content. The test score averages and standard deviations from the two groups are compared using statistical tests of significance to determine whether the differences are likely due to chance or are likely real.

Experimental research is the foundational evidence-based method that addresses whether specific instructional techniques are effective. Two critical features of experimental research are random assignment of learners to different lesson versions, called treatments by researchers, and comparison of learning from an experimental lesson to learning from an identical control lesson that does not include the instructional method being evaluated.

Take a look at Figure 2-3, which shows a segment from two lessons on how a bicycle pump works. The version at the top explains the content with words alone. The version below uses the same words but adds a simple diagram. Which version resulted in better learning? You are probably not surprised to see in Figure 2-4 that the version with graphics was more effective.

Figure 2-3. Text vs. Text Plus Graphic
Text only: "As the rod is pulled out, air passes through the piston and fills the area between the piston and the outlet valve. As the rod is pushed in, the inlet valve closes and the piston forces air through the outlet valve."
Text plus graphic: the same sentences paired with a labeled line drawing of the pump showing the handle, piston, inlet valve, outlet valve, and hose. From Mayer (2009).
Figure 2-4. Learning Is Better From Words Plus Graphics Than From Words Alone. The bar chart plots percent correct on a transfer test for the text-plus-graphics group versus the text-alone group. Adapted from Mayer (2001) and Clark and Mayer (2016).
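To make the logic of a comparison experiment concrete, here is a minimal sketch in Python. The transfer-test scores are invented for illustration and the use of the scipy library is an assumption of this sketch; it shows the kind of significance test researchers apply, not an analysis from any study cited here.

```python
# Illustrative sketch only: invented transfer-test scores (percent correct)
# for two randomly assigned groups of learners.
from scipy import stats

text_plus_graphics = [88, 75, 92, 81, 79, 85, 90, 77, 83, 86]
text_only = [62, 70, 58, 66, 73, 60, 68, 64, 71, 59]

# An independent-samples t-test estimates whether the difference between
# the two group means is likely due to chance or likely real.
t_stat, p_value = stats.ttest_ind(text_plus_graphics, text_only)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```

If the resulting p-value is small (conventionally below .05), the difference between the group averages is unlikely to be due to chance alone.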
How Is Learning Measured?

An important element to consider in both practitioner and experimental research is how learning is measured. For example, in the bicycle pump experiment, the test could ask learners to label the parts of a bicycle pump, describe from memory how a bicycle pump works, or solve problems related to bicycle pumps. Tests typically measure either memory—requiring the learner to recall presented content—or application—requiring learners to apply the content to a problem or situation. Because application is the goal of organizational training, the most relevant type of test for our purposes is one that emphasizes application.

I primarily rely on transfer test results as the basis for evidence-based guidelines in this book. For example, the data shown in Figure 2-4 are based on an application test on bicycle pumps with questions focusing on how a pump could be designed to be more efficient or the effects of a defective inlet valve on the operations of the pump. Application questions require the learner
to demonstrate a deep understanding of the content. For workforce learning practitioners, application tests ask learners to apply knowledge and skills in the context of the job. For example, a test to evaluate learning in a customer-service course would include performance scenarios that would be recorded and scored by instructors using a best practices checklist.

In summary, when you hear or review claims about the learning benefits of an instructional product or method, it's always a good idea to take a look at the type of test used to measure results. The experimental evidence on graphics I've reviewed so far suggests the following guideline.
Evidence-Based Graphics Guideline 1
Add visuals to a textual description to improve learning.
Are Graphics Effective for Everyone? Factorial Experiments

Chapter 1 reviewed the myth of audio and visual learning styles. Although visual learning styles don't have credibility, might there be other individual differences that influence the effectiveness of graphics? To gather evidence on this type of question, researchers conduct a factorial experiment. In a factorial experiment, two or more versions of a lesson, such as those shown in Figure 2-3, are assigned to two or more different types of learners, such as learners with and without prior knowledge of the content. The results of this type of experiment are shown in Figure 2-5.
Figure 2-5. The Effects of Graphics on Novice and Experienced Learners. The bar chart compares learning outcomes for novice and experienced learners under text-only and text-plus-graphics conditions. Based on data from Mayer and Gallini (1990).
What does the bar chart tell you about the benefits of graphics for high and low prior knowledge learners? Which of the following guidelines does this data suggest?
• Graphics are equally beneficial for high and low prior knowledge learners.
• Graphics are most beneficial for low prior knowledge learners.
• Graphics can depress learning of high prior knowledge learners.

From the data, you can see that the performance of low prior knowledge learners was boosted to that of high prior knowledge learners by the addition of a relevant graphic. High prior knowledge learners neither benefited nor were hurt by the addition of graphics. Most likely, high prior knowledge learners can form their own images when they read or hear the words. Therefore, adding a graphic does not contribute to their learning. For reasons that will be explored more in chapter 3, novice learners will often benefit
from different instructional methods than learners with background in the content. Based on data from this type of factorial experiment, the previous guideline is modified as follows.
Evidence-Based Graphics Guideline 2
Add visuals to a textual description to improve learning of individuals lacking prior knowledge about the content of the lesson.
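As a companion to the factorial experiment just described, here is a minimal sketch of how a 2 x 2 design (lesson version crossed with learner prior knowledge) might be analyzed with a two-way ANOVA. The scores and the use of Python's pandas and statsmodels libraries are illustrative assumptions, not an analysis from Mayer and Gallini (1990).

```python
# Illustrative sketch only: a 2 x 2 factorial design with invented scores.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

data = pd.DataFrame({
    "version": ["text", "text", "text", "text",
                "graphics", "graphics", "graphics", "graphics"] * 2,
    "learner": ["novice"] * 8 + ["experienced"] * 8,
    "score": [20, 22, 18, 21, 35, 38, 33, 36,      # novices
              37, 40, 36, 39, 38, 36, 41, 37],     # experienced learners
})

# A two-way ANOVA tests the main effects of lesson version and learner type
# as well as their interaction (do graphics help novices more than experts?).
model = smf.ols("score ~ C(version) * C(learner)", data=data).fit()
print(sm.stats.anova_lm(model, typ=2))
```

In this kind of analysis, the interaction term is the one to watch: it tests whether the benefit of graphics depends on prior knowledge, which is the pattern shown in Figure 2-5.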
Does the Type of Graphic Make a Difference?

We've seen that a simple relevant visual of a bicycle pump improved understanding of novice learners. However, there are many different types of graphics or ways to render a visual. For example, in the bicycle pump lesson, the graphic could be more realistic, such as a photograph or a more detailed sketch of a bicycle pump. Alternatively, the graphic could be an animation of how the pump works rather than a series of still visuals. Do the features of a particular graphic make a difference?

To gather evidence on this question, several different versions of the graphic could be produced and given to novice learners randomly assigned to the different versions. Testing the learning results from different types of graphics gives some clues. For example, in a lesson on mitosis (cell division in biology), learners were assigned either a photograph or a line drawing of the different stages, as shown in Figure 2-6. Which version do you think led to better identification of the stages and better understanding of the process of mitosis?
Figure 2-6. Schematic Drawings vs. Video Animation of Mitosis
From Scheiter et al. (2009).
The research team found the line drawing to be more effective— even for identification tests using actual photographs of the stages that learners in the line drawing group never saw. A number of experiments have shown that a simpler visual is often more effective than a more realistic or complex rendering. Naturally, the learning goal must be considered. For example, to learn about the functions of the heart, a schematic line drawing led to better understanding than a more realistic 2-D version (Butcher 2006). However, if the goal was to teach anatomy, a more realistic rendering might be more effective. Based on this data, the evidence-based guidelines on visuals can be further refined.
Evidence-Based Graphics Guideline 3
Use visuals in the simplest format congruent with the intended learning outcome.
Do Learners Like Lessons With Graphics More Than Lessons Without Graphics? A Correlational Study

Do you own a dog or a cat? Do you think dogs are better pets than cats? A survey by the National Opinion Research Center found that owners of dogs were twice as likely as cat owners to say they are very happy (Ingraham 2019). Does this mean that dogs cause more happiness? Not necessarily. There could be other factors that explain this relationship. For example, dog owners are more likely to be married and own their own homes than are cat owners.

Correlational data show the degree of relationship between two sets of data such as owning a dog and rating happiness. Correlations can be positive, negative, or zero. For example, in an idealized graphic representation such as Figure 2-7, there is a positive correlation between test scores and hours spent studying, a negative correlation between test scores and hours spent playing video games, and zero correlation between test scores and hours spent walking.

Figure 2-7. Graphic Illustration of Correlations: A Positive, B Negative, and C Zero
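As a numerical companion to Figure 2-7, here is a minimal sketch that computes Pearson correlation coefficients for three invented data patterns; the numbers are assumptions chosen only to produce a clearly positive, a clearly negative, and a near-zero correlation, and the use of Python's numpy library is likewise an assumption of this sketch.

```python
# Illustrative sketch only: invented data showing positive, negative,
# and near-zero correlations with test scores.
import numpy as np

test_scores    = np.array([55, 60, 62, 68, 72, 78, 83, 88])
hours_studying = np.array([1, 2, 3, 4, 5, 6, 7, 8])
hours_gaming   = np.array([8, 7, 6, 5, 4, 3, 2, 1])
hours_walking  = np.array([3, 7, 2, 8, 5, 1, 6, 4])

print(np.corrcoef(hours_studying, test_scores)[0, 1])  # strongly positive
print(np.corrcoef(hours_gaming, test_scores)[0, 1])    # strongly negative
print(np.corrcoef(hours_walking, test_scores)[0, 1])   # near zero
```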
How learners perceive their instruction is one indicator of motivation to initiate and complete learning events. Chapter 1 introduced research by Sung and Mayer (2012a) in which they assigned learners versions of the same lesson using different types of graphics or no graphics. Some graphics were relevant and others were distracting or decorative. Learning was best among those studying a version with relevant graphics. However, Sung and Mayer report a positive correlation between student satisfaction ratings and any form of visual. In other words, students enjoyed lessons with graphics of any type, including graphics that did not promote learning. Correlational studies point to relationships but do not provide cause-and-effect conclusions. So why do them? For practical, ethical, or safety reasons, sometimes experimental studies cannot be conducted. A classic example is the relationship between smoking and disease in humans. The negative effects of smoking are based primarily on correlational data in which the association between the amount of smoking and the incidence of disease has been found to be high. In other situations, a correlational study may suggest a relationship that can be later tested in an experiment. For example, Sitzmann and others (2008) showed a high positive relationship between student satisfaction ratings and social presence during the training. A follow-up study could randomly assign learners to two versions of the same class with high and low social presence and then compare student satisfaction ratings.
How Confident Are We About the Positive Effects of Visuals? Synthetic Evidence So far we have reviewed just a few isolated experiments showing that novice learners profit from lessons with added graphics. These few studies are encouraging. However, we cannot put too much confidence in guidelines derived from just a few studies. When there are many experiments that use different topics and different visuals, we can use synthetic methods to draw conclusions derived from multiple experiments. Synthetic methods involve research that synthesizes multiple sets of data. Synthetic research can take the form of a systematic review, such as when a research team aggregates a number of experiments on a specific question and summarizes overall guidelines based on their analysis. Alternatively, synthetic research can take the form of a meta-analysis in which statistical techniques are applied to data from many research studies that focus on a similar issue, such as the effects of graphics on learning. Since the first edition of this book was released in 2009, the number of meta-analysis reports has increased dramatically, giving us more confidence in the effects of various instructional methods deployed in diverse settings. The main output of a meta-analysis is an average effect size. An effect size is a multiplier for the amount of variation (standard deviation) found around a set of test scores. An effect size of 1.0 means that on average, a learner who uses a tested instructional method can expect one standard deviation of improvement in their results compared to an individual who does not use that method.
Table 2-1 presents a general guideline for interpreting effect sizes.
Table 2-1. Effect Sizes and Practical Significance
(The example column assumes an average score of 70% with a standard deviation of 10 points.)
• Below .3: LOW practical significance. The instructional method results in less than three-tenths of a standard deviation improvement in learning; the average would change to less than 73%.
• .3 to .8: MODERATE. The method results in a three-tenths to four-fifths of a standard deviation improvement in learning; the average would change to 73% to 78%.
• .8 to 1.0: HIGH. The method results in a four-fifths to one standard deviation improvement in learning; the average would change to 78% to 83%.
• Above 1.0: VERY HIGH. The method results in a greater than one standard deviation improvement in learning; the average would change to more than 83%.
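To show the arithmetic behind the table, here is a minimal sketch, not taken from the book, of how an effect size (Cohen's d, one common effect size statistic) could be computed from two groups of test scores; the scores and the cohens_d helper are hypothetical, invented for illustration.

```python
# Illustrative sketch only: computing an effect size (Cohen's d).
# All scores are made-up numbers, not data from any study cited in this book.
from statistics import mean, stdev

control_scores   = [62, 70, 55, 74, 68, 61, 73, 66]   # lesson without the method
treatment_scores = [68, 75, 60, 78, 72, 66, 77, 70]   # lesson with the method

def cohens_d(treatment, control):
    """Difference between group means divided by the pooled standard deviation."""
    n1, n2 = len(treatment), len(control)
    s1, s2 = stdev(treatment), stdev(control)
    pooled_sd = (((n1 - 1) * s1 ** 2 + (n2 - 1) * s2 ** 2) / (n1 + n2 - 2)) ** 0.5
    return (mean(treatment) - mean(control)) / pooled_sd

d = cohens_d(treatment_scores, control_scores)
print(f"Effect size d = {d:.2f}")

# Reading the result against Table 2-1: if a class averages 70% with a standard
# deviation of 10 points, an effect size of d predicts an average near 70 + 10*d.
print(f"Predicted average with the method: {70 + 10 * d:.0f}%")
```

With these invented scores the effect size lands in the moderate band of Table 2-1; a meta-analysis averages such values across many separate experiments.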
Because many experiments have been conducted that compared learning with and without graphics, we have the benefit of effect sizes from meta-analyses of these experiments. For example, based on 11 different experiments that focused on the effects of graphics on scientific or mechanical processes, Mayer (2011) reported a median effect size of 1.5, which, as you can see from the table, is very high. A second benefit of a meta-analysis is the opportunity to define the conditions under which an instructional method is most effective. The meta-analysis team identifies a number of factors that might influence the effect size. For example, they might ask whether graphics are as effective for children as for adults. Alternatively, they may consider whether some types of content, such as mechanical processes, might benefit more from graphics than other types of content, such as procedures. The research team codes each study they review for the variables
they have identified. For example, they could aggregate all of the studies that involved children separately from those that involved adults, and so on. They then determine the effect sizes of the subsets. In this way, a meta-analysis not only gives us general information on the overall benefits of an instructional method but can also give indications of the boundary conditions or situations that qualify those differences. Remember the No Yellow Brick Road theme of chapter 1. A meta-analysis can help us narrow the conditions under which a given method is most effective. Meta-analyses are becoming so prevalent that Hattie (2009) wrote a book called Visible Learning, which synthesized more than 800 meta-analyses on 138 factors related to students, curricula, teaching practices, home influence, and school programs. You might want to look at a more recent version of this book, Visible Learning and the Science of How We Learn, by Hattie and Yates (2014), which takes a less technical approach to present the meta-analytic data of the first book. While experimental research can tell us whether and under what conditions a given instructional method is effective, it typically has little to say about how that method works. Eye tracking and brain activity measures are two relatively recent research techniques used to get insight on the psychology of instructional methods.
The Psychology of Graphics: Eye-Tracking Data Eye tracking has been used for many years in advertising research. Eye tracking involves tracing the learner’s duration and location of eye fixations as they view an object. By viewing the eye fixation patterns in two different displays, the research team can make inferences
about how subjects allocate their attention. One study compared eye-tracking patterns between two versions of a newspaper layout. As you can see in Figure 2-8, Version B, in which text and visuals were more integrated on the page, produced a more efficient pattern of attention. Eye-fixation data offer clues about why a particular display is more efficient or effective than an alternative display. There will be more to say about the layout of text and visuals in chapter 7. Figure 2-8. Eye Tracking Showing Different Patterns of Attention Allocation Between Two Layouts
With permission from Holsanova et al. (2008).
The Psychology of Graphics: Brain Activity Measures Electroencephalography (EEG) is a common neuroimaging technique used to measure gross electrical activity of the brain. EEG measures during learning have been used to evaluate the mental load imposed by a lesson. Makransky and others (2019) found differences in EEG alpha band activity based on the complexity of lessons for novice and experienced learners. Measures of brain activity during learning are
relatively new and should lead to additional insights regarding neurological processes associated with learning. In summary, this book will draw primarily on academic evidence in the form of experiments, factorial experiments, correlational data, reviews of evidence, and qualitative methods, including eye tracking, brain activity, and learner ratings. Together these methods can tell us: • whether a given instructional method leads to more learning than a lesson without that method • whether a given instructional method is more effective for some types of learners or some types of content • whether there is a relationship between two or more sets of data • how a given instructional method may influence psychological learning processes, such as focus of attention or reduction of extraneous mental load.
Read the Fine Print: Limits of Evidence-Based Training Guidelines Most research reports include sections on experimental procedure and testing methods—often in smaller font size than the rest of the report. Many journals now require a section entitled “Limitations” at the end of the report. Although I’m a passionate advocate of evidence-based practice, there are some constraints in the fine print or in the limitations section that we practitioners should consider as we review academic research. These are some of the more salient issues: Length of lessons. In many experiments, lessons last one hour or less. In some cases, they are only a few minutes long. To what extent can we generalize the results from short lessons to longer instructional environments typical of workforce training?
Immediate learning. With a few exceptions, most experiments measure learning immediately following completion of the lesson. In workforce learning, our goal is longer-term learning—ideally transferring to the job over time. Learner characteristics. Most experiments conducted in academic settings use college-aged students as subjects. The United States, Europe, and Australia are the sources of much of this type of research. Therefore, most of the data reflects results from younger, college-educated, Western populations. You will need to consider the extent to which these results will apply to your learners. Instructional context. Since experimental research attempts to control all variables other than those being tested, laboratory settings are common. There are a few studies conducted in a more naturalistic environment such as a classroom, but these are the exception. Learning measures. I mentioned the role of testing measures earlier in the chapter. For quite a long time, most research tested recall of content. Happily, this has changed over the past 20 years. Today many experiments use tests that tap different knowledge levels, such as a recall test and an application test. The majority of experimental outcomes I include in this book are based on application tests, as these are most relevant to workforce learning goals. Ceiling effects. As you review the research I've presented throughout the book, you may notice that the outcome test scores of both the experimental group and the control group are relatively low. For example, an experiment may conclude that a method resulting in an average score of 65 percent is more effective than a lesson lacking that method resulting in an average score of 42 percent. However, you might wonder whether 65 percent is really a very successful outcome. In order to detect differences in experimental treatments, researchers
design tests that are challenging. If a test is quite easy, both the experimental and the control group may score high—an outcome known as a ceiling effect. Ceiling effects may obscure the benefits of an experimental method. Learning domain. Particular instructional methods may be more effective in some domains than in others. For example, a recent meta-analysis of educational games found them to be most effective for second language learning and science but not as effective in math or social studies (Wouters et al. 2013). As you review a research study, consider the extent to which the mental demands of the domain in the research are equivalent to those in the types of tasks in your training. The solution to address some of these constraints is prototyping or design iterations with your training solutions. In other words, test out the principles in this book with your learners, learning domains, and contexts. This brings us to the interaction between academic and practitioner research. One is not superior to the other, and data from academic and practitioner research should guide your design and development decisions.
How Has Evidence Improved in the Last 10 Years? Since the first edition of this book was published in 2009, many research studies have broadened their methods and scope in ways that make them even more helpful for practitioners. I will summarize progress made regarding: • time of testing • types of learning measures • motivation and learning
• individual differences • revealing the psychological processes behind instructional methods.
Time of Testing As mentioned in the previous section, historically most research measured outcomes immediately after learners completed an instructional episode. In contrast, many studies now report both immediate and delayed measures of learning. In general, the delayed measures are administered approximately one week after the study session. For example, in a report by Schmidgall and others (2019) that focused on why drawing activities can improve learning, testing was administered both immediately and after a delay of one week. Their lesson covered the biomechanics of swimming, and their study examined the psychological processes involved in improved learning after drawing lesson content.
Type of Test Historically, many experiments reported lower level memory tests such as recall or recognition of lesson content. More recent studies include both memory and higher-level cognitive tests that evaluate understanding—generally more applicable to workforce performance. For example, the drawing experiment mentioned in the previous paragraph included three tests: recognition questions, such as “Muscles are connected to the bones through the myofibrils—right or wrong?”; transfer questions with open-ended queries to assess the ability to apply lesson knowledge to a different context, such as “How can you explain the lift of an aircraft?”; and a drawing test involving filling in missing items in drawings seen in the lesson, such as “Please draw in the missing fibers
of a muscle with series connection.” The results I summarize in this book are based on application rather than memory tests.
Motivation and Learning Most research that has focused on instructional methods to improve learning measures learning with test items similar to those mentioned above. Increasingly, however, the research community recognizes the important role of learner motivation as a driver of learning. Therefore, more recent studies include and report indicators of motivation, typically via some form of rating scale. For example, in the biomechanics lesson, learners gave a one- to five-point rating on questions such as "I enjoyed how I was working with the lesson content." To date, a deep understanding of the role of motivation in learning is just beginning to emerge. There's more about motivation in chapter 4. For now, it's a promising direction to see motivational indicators included in research studies.
Individual Differences As mentioned previously in this chapter, graphics improve learning of those new to the content compared to those with background knowledge. Many research studies now incorporate a variety of individual difference measures in order to define how specific instructional methods such as graphics might exert different effects for learners with different characteristics. In the drawing experiment, the research team included measures of verbal ability, spatial ability, and prior knowledge as control variables. Their goal was to ensure that the experimental groups did NOT differ regarding any of these features, thereby ensuring the applicability of their findings to a diverse learner population.
Psychological Processes Now that we have a number of guidelines that you will read throughout this book on the use of instructional methods such as graphics to improve learning, we need to know how these different methods exert their effects. For example, previous studies have shown that creating a drawing when learning a spatial process such as biomechanics of swimming leads to better learning. Schmidgall and others (2019) compared learning among four groups with assignments as follows: • create a sketch of the process (drawing group) • write a verbal summary of the process (active learning group) • review the text with drawings already provided • re-read the text. The goal was to determine whether the benefits of a drawing activity reflect an active-learning assignment which the summary assignment also achieved. Alternatively, perhaps the benefits reflected having a visual representation, in which case both the drawing assignment and the text with drawings provided would yield better learning. The results of this research are summarized in chapter 3 on the psychology of learning. In summary, research has evolved over the past 10 years to report outcomes more relevant to practitioners, including application learning, delayed learning, individual differences, and motivation.
The Bottom Line Let’s return to the questions from the opening of the chapter, and examine which of them are true: A. Adding visuals to text improves learning. False. While the evidence we have reviewed so far supports the use of visuals, we will see in chapters 5 and 9 that some
visuals can actually depress learning. The benefits of a visual will depend on the type and rendering of the visual, the prior knowledge of the learner, and your instructional objective. B. Some learners benefit from visuals more than others. True. Evidence suggests that although most learners prefer materials with visuals, novice learners benefit the most. C. More realistic or detailed visuals are generally better for learning and performing. False. For many purposes, a simpler graphic is more effective than a more complex version. We will see the reasons for this in the next chapter on mental load in learning. D. Learners like materials with graphics. True. Learners prefer materials with visuals even though a given visual may not be optimal for learning or performance. As chapter 1 showed, liking often does not correlate with learning.
Applying Evidence-Based Practice to Your Training The goal of this chapter is to define evidence-based practice and to describe some of the different types of evidence we will review. This book will focus on academic research in the form of comparison experiments, factorial experiments, correlational studies, quantitative or qualitative reviews of research, and adjunct measures such as eye tracking and learner ratings of mental load for example. In reviewing academic research, some relevant issues for you to consider include: qqDoes an experimental study use random assignment to a
test lesson and a control version?
48 • Chapter 2
qqAre the treatment and control groups similar except for the
methods being tested? qqWhat is the duration of the experimental lesson? qqWhat are the characteristics of the learner population? qqAre conclusions based on a single study or on a synthesis of
multiple studies? qqWhat types of research reports are used as the basis for a
synthesis review? qqWhat are the reported effect sizes in an experiment or
meta-analysis? qqWhat boundary conditions influence the effects of a
given method? qqAre the learning outcomes of an experiment based on recall
or application tests? qqDo the learning domains and contexts in an experiment
reflect yours? Overall, we can have greater confidence in conclusions based on multiple studies that use experimental methods. However, often we will find that guidelines regarding instructional methods will have boundary conditions and will need to be qualified regarding features of the method, the content, and the learner population, to name a few.
Coming Next Now that we have looked at instructional fables and evidence-based guidelines, we will take a look at the psychology of learning with a short review of how learning occurs in our working and long-term memories. Understanding learning processes will help you adapt the guidelines in this book to your own instructional decisions.
FOR MORE INFORMATION Clark, R., and R.E. Mayer. 2016. E-Learning and the Science of Instruction, 4th ed. Hoboken, NJ: Wiley. See Chapter 3. Our book directed toward practitioners includes many of the same topics in this book in the context of e-learning design and development. Clark, R., and C. Lyons. 2011. Graphics for Learning, 2nd Ed. Hoboken, NJ: Wiley. A detailed and comprehensive review of how best to use visuals to support workforce learning goals in different media. Makransky, G., T.S. Terkildsen, and R.E. Mayer. 2019. “Role of subjective and objective measures of cognitive processing during learning in explaining the spatial contiguity effect.” Learning and Instruction, 51, 23-34. An interesting technical article that combines subjective and objective learning measures to shed light on the why behind the benefits of integrating text with visuals. A good example of the use of multiple measures to explain outcomes. Schmidgall, S.P., A. Eitel, and K. Scheiter. 2019. “Why do learners who draw perform well? Investigating the role of visualization, generation and externalization in learner-generated drawing.” Learning and Instruction, 60, 138-153. This research seeks to define the mechanisms behind the benefits of drawing—whether it is the activity of drawing or the presence of a graphic. A good example of research examining the mechanisms behind instructional methods. Van Gog, T., and K. Scheiter. 2010. “Eye tracking as a tool to study and enhance multimedia learning.” Learning and Instruction, 20, 95-99. This article is an introduction to a special journal issue devoted to eye tracking research.
Chapter 3
How People Learn The Learning Process Motivation and Learning Instruction and Learning Processes About Working Memory Long-Term Memory and Learning Expertise and Instruction Bypassing Working Memory Limits via Automaticity Cognitive Load and Your Training Applying Learning Psychology to Your Training
“Forty individuals drove a car in a simulator under four conditions: no distractions, talking on a handheld cell phone, talking on a hands-free cell phone, and intoxicated to 0.08 percent blood alcohol level. On a simulated freeway, a pace car braked 32 times during the 10-mile ‘trip.’ The three participants who collided into the pace car were talking on cell phones, both handheld and hands free; none of the drunk drivers crashed.” —Strayer and others (2006)
Results like these reveal the limits of the human brain. We are often overconfident in our capabilities because we don't realize the severe processing limits that affect all mental activities, including human learning. Why should we invest our time to consider how people learn? Evidence-based practice benefits not only from experimental data, as discussed in chapter 2, but also from an understanding of how learning works. By understanding how instructional methods affect learning processes, you can adapt and extend the guidelines in this book to your own situations. Recent research studies measure learning as well as track brain activity, eye movements, and learner ratings to define not only what promotes learning but also which mental processes are involved. Our understanding of learning psychology is based on two memory systems (working memory and long-term memory) and five processes that transform content into expanded knowledge and skills. In addition, a previously neglected factor—motivation—is increasingly being measured in research studies. This chapter considers how learning happens and how to facilitate learning with instructional methods.
What Do You Think? Let's check your understanding of human learning processes by marking the items you think are true: A. Individuals with more experience have a greater memory capacity. B. Learners with higher self-confidence will be more motivated. C. Working memory has a capacity of five to seven items. D. Lesson design can positively affect mental load of learners.
The Learning Process As summarized in Figure 3-1, learning involves five key processes: attention to words and visuals, integration of words with corresponding visuals, rehearsal of content in working memory resulting in integration with prior knowledge in long-term memory, and finally retrieval of new knowledge from long-term memory back into working memory when needed. In addition, motivation acts like an engine that drives learning processes. Your instructional techniques need to support (or avoid defeating) one or more of these processes. For example, chapter 2 showed how eye-tracking measures are used in experiments to determine where learners focus attention on a page or screen. Figure 3-1. Human Learning Processes
[The figure diagrams the flow from lesson words and visuals through attention into working memory, where words and visuals are integrated and rehearsed; into long-term memory, where new content is integrated with prior knowledge; and back into working memory through retrieval.]
Most of these processes take place in working memory and long-term memory—two memory systems that define our mental processes. All of these processes require invested time and effort, which is fueled by motivation.
Motivation and Learning Learner motivation is the engine that drives learning. Motivation is responsible for initiation and selection of learning goals, investment of effort to achieve those goals, and persistence to invest effort until the learning goal is realized. If you have spent any time as an instructor, you have experienced the differences between motivated and unmotivated learners. Understandably, workers who are required to attend a class that they perceive as unrelated to their work roles are often demotivated. One popular theory of motivation known as Expectancy-Value focuses on learners' belief that they will succeed at a learning task (expectancy) combined with the degree to which they value achieving that goal. In other words, learners ask themselves "Can I do this task?" and "Do I want to do this task?" Motivation to engage may be the success factor behind instructional events such as games, which may be a less efficient learning vehicle than a traditional tutorial but at the same time may be more enjoyable. As mentioned in chapter 2, since the first edition of this book was released, more learning experiments measure motivation with some type of survey. For example, Parong and Mayer (2018) found that although learning from an immersive virtual reality biology lesson was less effective than learning the same content from a slide show, learners liked the immersive version more. Learners rated the
immersive version less boring, more engaging, and more enjoyable than the slide show. In short, it was more motivating.
Instruction and Learning Processes Effective instruction embeds methods that support attention to relevant visuals and words, integration of related words and visuals, integration of new ideas into existing knowledge, and later retrieval from long-term memory into working memory when needed on the job. For example, learners may benefit from cues such as highlighting to direct their attention to the relevant portion of a complex graphic (Figure 3-2). Figure 3-2. Highlighting to Cue Relevant Section of Heart Diagram
From DeKoning, Tabbers, Rikers, and Paas (2007).
Mental integration of visuals and words can be promoted by physical integration of visuals with related text on the screen or page. Rather than separating a legend from elements of a pie chart, it’s
better to label each segment directly on the chart. Rehearsal of information can be stimulated by assignments that require learners to process new content in a relevant manner. For example, asking learners to plan and conduct a peer teach-back on new skills has been shown to improve learning. We will review many experimental methods throughout the upcoming chapters that improve learning by fostering one or more of these five key processes as well as motivation. All of these processes take place in working memory or long-term memory and are constrained by the properties of these two memory systems.
About Working Memory Working memory is the center of conscious thought, including learning. It is here that words and visuals are combined, integrated with existing knowledge in long-term memory, and retrieved later in the form of new knowledge and skills. As the centerpiece of cognition, understanding the strengths and limits of working memory is important. This section looks at three key features of working memory: active processor, capacity limits, and dual channels (Table 3-1).
Table 3-1. Three Key Features of Working Memory
• Active Processor: Working memory is where conscious thought takes place, including problem solving, thinking, and learning.
• Limited Capacity: When active, working memory can hold about three to five chunks of information.
• Dual Channel: Working memory has separate systems for storage of visual and auditory data.
Feature 1: Active Processor First and foremost, working memory—as its name implies—is an active processor. It is the conscious part of your brain—the part that thinks, solves problems, and learns.
Feature 2: Capacity Limits If I read you a list of 24 words and ask you to write down as many as you can, chances are you would not recall all 24 words. Working memory has pretty severe restrictions regarding how much information it can hold. And the limits are even more stringent when working memory is processing. Recent estimates set a limit of around three to five items when working memory must also be actively engaged in other activities. You are likely to recall more words in the first part of any list I read because your working memory has capacity to process those initial words. However, as you add more words, that processing capacity is soon exceeded. As each new word enters memory, it replaces a previous word with minimal opportunity for processing.
Feature 3: Dual Channels In chapter 2 we saw that adding a relevant visual to text improves learning. The benefits of visuals are based on the dual-processing feature of working memory. Dual channel refers to working memory’s two centers: one for storing and processing auditory information and a second for visual information. When you read a concrete word such as flower, you are more likely to process it in two ways: as phonetic data and also as the image that your mind forms when reading the word. In contrast, a word such as moral is not as easy to visualize, leaving you with only an auditory trace. Concrete words that can be encoded in two ways have a greater probability of being stored in memory and recalled later.
These three features—active processing, limited capacity, and dual channels—are the prime determinants for what works and what does not work in your training. We need working memory capacity to process new information for learning to occur. But when we load it up with content or irrelevant work, that processing is corrupted. We call this cognitive overload. You have no doubt experienced cognitive overload in your learning history. There are a number of techniques you can use to minimize cognitive overload that we will review throughout the book.
Long-Term Memory and Learning While working memory is the star of the learning show, we can’t leave out its supporting partner, long-term memory. Take a look at the chess board in Figure 3-3. Now imagine you looked at it for about five seconds and then were asked to reconstruct it using a real chess board and all of the pieces. How many times would you need to refer to the original chess board? Figure 3-3. A Mid-Play Chess Board
You won't be surprised that those unfamiliar with chess needed about seven to nine referrals to get most of the pieces correctly placed. However, what about a chess expert? Would they need to look back more or fewer times? Again, not surprisingly, chess masters got most of the pieces in place with about four tries. Why do you think the chess masters had better memory for placement of chess pieces? Is it because they are more intelligent, have better visual memory, or are experienced with chess? To answer these questions, the research team repeated the chess experiment with a crucial difference. Instead of using a realistic mid-play chess board, they substituted a scrambled board. The same number of pieces was placed on the board but in a random order. Again, they asked expert and novice chess players to recall the pieces. What do you think happened this time? Do you think the master players still had an advantage and needed fewer glances back at the model board? Or would the masters and novices be equivalent in their memory? The results were somewhat surprising. First, as you might expect, the novices needed about the same number of referrals as they did in the first experiment—around seven to nine repetitions. But how about the experts? The experts actually needed even more opportunities to refer to the chessboard than did the novices! These results suggest that it is experience rather than intelligence that makes the difference. For most novices, a chess board holds about 25 chunks of information—each piece being a chunk. Since working memory can hold only around four to five chunks, it takes quite a while to recall all of those pieces correctly. However, for an expert, there are many fewer chunks of information on the board. Why? Because the expert has played so much chess, they have a repository of play patterns stored in long-term memory.
Psychologists have estimated that chess experts store about 50,000 chess patterns in their long-term memory. Rather than recall each piece as a single entity as a novice does, they can recall whole clusters of pieces corresponding to various play patterns. However, when the pieces are placed randomly, those patterns not only don't help—they actually hinder recall performance. The chess expert looks at a world that has become topsy-turvy, and the conflict with their stored mental models actually makes their recall worse than the novices'. From experiments like these we learn that unlike working memory, long-term memory has a huge capacity for information. During learning—either formal or informal—the processing in working memory produces new or expanded patterns stored in long-term memory. These patterns can be brought back into working memory when needed and thereby endow working memory with a larger virtual capacity.
Expertise and Instruction Recall from chapter 2 that adding relevant graphics to a text description improved learning of individuals unfamiliar with the topic. However, the graphics had little effect on learners with topic expertise. Since experienced learners can form their own images as they read the words, instructional images add little learning value. As we review the various instructional methods in the chapters to follow, you will find that a common qualifying factor is prior knowledge. Methods that work well for novice learners have no effect—or in some cases even depress the learning of those with background experience. In some situations, expertise can get in the way of an end goal. Have you ever watched a subject matter expert teach a class? Quite often they overload the learners’ working memory with too much
content, unfamiliar terms, and lengthy lectures. Experts just don't realize that their memories can hold and process information much more efficiently than novices'. That's why an instructional specialist who knows less about the content is a good partner for a subject matter expert; the specialist can help the expert break down the content into smaller pieces and add instructional methods that reduce cognitive overload.
Bypassing Working Memory Limits via Automaticity We’ve seen that the patterns accumulated in long-term memory over years of experience afford experts the luxury of greater virtual working memory capacity. There is a second explanation for expert proficiency: It’s called automaticity. Any task—physical or mental—that is repeated hundreds of times becomes hardwired into long-term memory. Any automated task can be performed with little or no working memory resource. As you read this paragraph, you decode the words and sentences automatically and can allocate working memory to processing the meaning. Watch any first or second grader read and you will see a very different picture. Because word decoding is still not automated, reading is a very effortful process for these little ones. Only through years of reading and writing practice are the many underlying skills of reading automated, allowing fast scanning of entire paragraphs in the mature reader. Automaticity is the other secret of expert performance in any complex domain. Over years of practice, many layers of skills have become automated, allowing the expert to devote working memory to the coordination and problem solving needed to perform complex tasks. Automaticity allows multitasking. While familiar tasks such as decoding words are performed on automatic, the freed-up capacity
in working memory can be devoted to higher level tasks such as abstracting meaning. However, as we have learned from the data on cell phones and accidents, automaticity is a dual-edged sword. For experienced drivers, routine driving is an automatic task, freeing working memory to perform other tasks, such as talking on a cell phone. However, when the basic driving tasks suddenly require the conscious attention of working memory, you may not be able to switch working memory resources quickly enough to make a safe response. It is the psychological load imposed by talking on a cell phone as much as it is the physical activity of holding the phone that leads to cognitive overload. That’s why a hands-free phone can be just as dangerous as a handheld one.
Cognitive Load and Your Training Understanding the psychology of learning will help you make informed decisions about your training. An important psychological theory of learning and instruction known as cognitive load theory is based on the features of working memory and its relationship with long-term memory. According to cognitive load theory, learning requires rehearsal in working memory, and when working memory gets overloaded, learning is disrupted. There are three types of cognitive load: intrinsic, extraneous, and germane.
Intrinsic Cognitive Load Intrinsic cognitive load relates to the complexity of your instructional goals and materials. For example, in a language lesson if you are asked to write the meaning of a vocabulary word such as amici, the intrinsic load is relatively low. On the other hand, if you are asked to respond to a question from a native speaker, the intrinsic load is much higher.
That is because you need to do several things quickly. First, you need to translate the question. Second, you need to formulate a response that is grammatically correct. Third, you need to speak your response using a reasonably correct accent. Lessons with higher intrinsic load will by definition impose greater cognitive load and require techniques to reduce irrelevant cognitive load in the learning environment. Simpler lessons such as the vocabulary exercise will not demand as much cognitive load management.
Extraneous Cognitive Load As the name implies, extraneous load is work imposed on working memory that does not contribute to learning. Extraneous cognitive load is imposed by instructional design decisions that make learning more demanding. For example, in Figures 3-4 and 3-5, both screens include the same visuals and words. Which version do you think imposes more extraneous cognitive load? Figure 3-4. Version A Layout of Text and Graphic
Figure 3-5. Version B Layout of Text and Graphic
As we discussed, to promote psychological integration of words and visuals, it is better to physically integrate them on the page or screen. Version B imposes less extraneous cognitive load because the learner can see the graphic and the explanation together. When graphics and text explanations are separated, as in Version A, extra mental effort must be devoted to integrating the two messages. First the learner reviews and makes sense of the words. Then while holding the words in memory, the learner looks at the diagram and attempts to interpret it. Working memory is being asked to hold information while reviewing and integrating the pie chart. This mental effort depletes working memory capacity, making learning less efficient. One of the main themes of this book is a focus on evidence-based techniques that minimize extraneous cognitive load, especially when intrinsic load is high.
Germane Cognitive Load Germane cognitive load (also called generative processing by some researchers) is the good stuff. Learning requires psychologically active processing in working memory, which draws on limited memory resources. Instructional methods such as relevant practice exercises that promote effective active processing are examples of germane cognitive load. Chapter 2 reviewed research showing that adding a simple visual to a textual explanation of how a bicycle pump works improved learning of novices. The graphic served as a form of germane cognitive load. Adding practice exercises or “clicker questions” to a lecture are common techniques that can promote germane cognitive load. In general, any productive engagement aligned to the learning goal should promote germane cognitive load.
Your Job As a training professional, your job is to support the five learning processes and optimize the three types of cognitive load. As summarized in Figure 3-6, when intrinsic load is high, you need to do everything you can to reduce extraneous cognitive load and maximize germane cognitive load. As you read the remaining chapters, I will describe evidence-based guidelines for doing just that. Figure 3-6. Optimizing Cognitive Load in Your Instruction
[The figure shows the three types of cognitive load (intrinsic, extraneous, and germane) and notes that load must be managed when intrinsic load is high.]
The Bottom Line Now that we have reviewed some basic principles of human learning, compare my answers to your own understanding. A. Individuals with more experience have a greater memory capacity. True. Relevant experience allows individuals to form larger chunks in working memory. B. Learners with higher self-confidence will be more motivated. Not necessarily. Self-confidence in achieving an instructional goal is one factor, but the value the learner puts on that goal is equally important. C. Working memory has a capacity of five to seven items. True. However, those items can be of greater or smaller size depending on prior knowledge of the learner. D. Lesson design can positively affect mental load of learners. True. Lesson design can minimize extraneous mental load, freeing up capacity for learning and maximizing germane load.
Applying Learning Psychology to Your Training Having an understanding of both what works in instruction and why it works will help you adapt the guidelines in the chapters to follow. Using techniques that direct attention, stimulate integration of new content with existing knowledge, and optimize cognitive load will all contribute to improved training outcomes. Based on the limits of working memory, its relationship with long-term memory, and cognitive load theory, you will understand the rationale of instructional
methods that manage intrinsic cognitive load, reduce extraneous load, and use working memory capacity deliberately to promote learning.
Coming Next We saw in this chapter that learning is based on attention, integration of words and visuals, and rehearsal of new content to result in formation of expanded knowledge and skills. All of these processes are based on active engagement between learner and instruction. An essential prerequisite of all learning is active engagement—that is, active processing in working memory that will lead to encoding of new knowledge and skills in long-term memory. However, as shown in chapter 1, physical engagement is not equivalent to psychological engagement and, in fact, may even depress effective psychological engagement. When it comes to active learning, many training events are unbalanced. Lectures in which too much information is provided to passive learners are too often considered successful training. In other cases, such as some games, high levels of physical activity detract from the actual learning goal. The next chapter introduces you to the foundational principles behind productive active learning. FOR MORE INFORMATION Cowan, N. 2014. “Working memory underpins cognitive development, learning, and education.” Educational Psychological Review, 26, 197-223. A comprehensive review of working memory that includes its historical roots, debates about working memory mechanisms, and the role of working memory in learning. Feldon, D. F., G. Callan, S. Juth, and S. Jeong. 2019. “Cognitive load as motivational cost.” Educational Psychology Review, 1-19. A review article that focuses on motivational effects of cognitive load.
Mayer, R.E. 2014. “Cognitive theory of multimedia learning.” In R.E. Mayer, (Ed). The Cambridge Handbook of Multimedia Learning, 2nd Edition. New York: Cambridge University Press. Very readable review chapter on psychological processes involved in learning. Sweller, J., J. J. van Merriënboer, and F. Paas. 2019. “Cognitive architecture and instructional design: 20 years later.” Educational Psychology Review, 1-32. A nice historical review of cognitive load theory after 20 years of research written by researchers who originated it.
Chapter 4 Active Learning
When Activity Leads to Less Learning The Engagement Grid Instructional Methods and the Engagement Grid Applying Engagement Principles to Your Training
Consider two quite different learning environments designed to teach the same content. The instructional goal is learning basic electromechanical principles, such as how a wet cell battery works. One lesson consists of an approximately 20-minute PowerPoint presentation that provides definitions and illustrations of the main concepts and processes. The alternative lesson is the narrative game called Cache 17 shown in Figure 4-1. In the game, you assume the role of an explorer searching a World War II bunker for valuable lost art. To navigate through the bunker, you need to open doors and overcome barriers by applying electromechanical concepts such as constructing a wet cell battery. The same information summarized on the PowerPoint slides is available to you as you navigate the bunker. Which lesson version is more engaging? Which would take longer to complete? Which would lead to better learning? Figure 4-1. Cache 17: A Thematic Game to Teach Electromechanical Principles
From Adams et al. (2012).
This chapter takes a closer look at active learning and the difference between psychological and behavioral activity. We will overview
several proven engagement techniques, with emphasis on those that are more time and cost effective. Learner transformation of lesson content is an underlying engagement technique introduced here and detailed in chapters to follow.
What Do You Think? Mark each statement you think is true: A. Active engagement is essential for learning. B. Underlining text is an effective study strategy. C. High-engagement games promote learning. D. Teach-backs promote learning.
When Activity Leads to Less Learning Games are often touted as effective for learning because they elicit high learner engagement. Are all games effective? Check out the summary results of the Cache 17 game versus slide experiment described in the chapter introduction, shown in Figure 4-2. Figure 4-2. Learning and Time to Complete Game and Slide Lessons
[The figure charts pre-test scores, post-test scores, and time to complete in minutes for the game and slide groups; N = 17, ES = .31, a significant difference. Based on data from Adams et al. (2012).]
As you can see, a highly engaging lesson in the form of a thematic game not only took longer to complete but also led to slightly less learning than a slide show that presented the same content. What are some possible reasons for this outcome? Perhaps the theme of the game was irrelevant to the learning goal and therefore imposed extraneous cognitive load. The mechanics of trying to open doors and move equipment from one bunker area to another may have distracted players from learning. In contrast, learners viewing the slides could focus all of their mental resources on the content. Another possibility is that the game was only played one time. In a meta-analysis of learning games, Wouters and others (2013) conclude that games are more effective when played multiple times. In either case, a high engagement environment led to less learning than a "passive" slide show. Active engagement actually depressed learning. Note also that the game version consumed more than twice the time required to view the slides. In this experiment, a narrative game was both less efficient and less effective for learning than a slide presentation. Similar results have been found in other comparisons of learning from high and low engagement learning environments. For example, in chapter 1, we reviewed research reported by Stull and Mayer (2007) in which learning was better from a text accompanied by an author-provided organizational graphic than when the learners filled in a blank graphic. Perhaps completing the blank graphic imposed too much extraneous load. Or perhaps learners entered incorrect information into the graphic. This research as well as the game experiment suggests that student activity alone does not necessarily lead to learning. In fact, it may even become a barrier. As we review a few instructional methods that do and do not promote learning, we find that many successful
methods involve learner transformation of content. Transformation may be from one mode to a different mode. For example, learners may create a drawing of a process described in words. Or transformation may involve learners putting the content into their own words—for example, by teaching lesson content to others.
The Engagement Grid The grid shown in Figure 4-3 is a useful tool to illustrate a continuum of psychological and behavioral engagement. Along the horizontal axis, overt behavioral activity from low to high is illustrated. Along the vertical axis, psychological activity that leads to learning from low to high is shown. Figure 4-3. The Engagement Grid
[The figure shows a 2 x 2 grid with behavioral activity (low to high) on the horizontal axis and psychological activity (low to high) on the vertical axis. Quadrant 1, mentally active (example: reading); Quadrant 2, behaviorally and mentally active (example: practicing Spanish); Quadrant 3, mindless activity (example: the Cache 17 game); Quadrant 4, mindless and passive (example: watching car racing). Adapted from Clark and Mayer (2016).]
Look at the four states of engagement—one in each quadrant. The Cache 17 game would fall into quadrant 3—high behavioral engagement but lower relevant psychological engagement. In contrast, the
slide presentation would fall into quadrant 1—low behavioral engagement but high psychological engagement that led to learning. Note that the upper grids shaded in a lighter color correspond to higher or more relevant psychological activity. Productive active learning methods fall into quadrants 1 or 2, both of which involve some form of transformation of lesson content. Quadrant 2 has an advantage because it includes contexts in which learners are both behaviorally and psychologically engaged. Although learning takes place with quadrant 1 methods, the advantage of behavioral engagement in quadrant 2 is the generation of a visible product, such as a response to a question, a case study solution, or putting a golf ball. A visible action or product provides a basis for feedback to correct errors or provide guidance. Although psychological engagement characteristic of quadrant 1 can generate learning, with no visible learner response it is difficult to assess learning progress or provide feedback. Evidence derived from a comparison of high and low engagement environments leads to our first guideline:
Active Learning Guideline 1 Incorporate frequent behavioral activities that promote content transformation.
Instructional Methods and the Engagement Grid This section summarizes a few examples of instructional activities that have been shown either ineffective or effective. The results
are interpreted using the engagement grid and the process of learner transformation of lesson content.
Do Underlining and Highlighting Improve Learning? If you look at a typical college text or lecture notes, you will see plenty of markups in the form of underlining or highlighting. How effective is underlining for learning? Underlining would definitely be considered a high behavioral activity. And it would suggest an activity that supports attention to important content. In spite of its popularity and high behavioral engagement, a review of many studies showed highlighting to be relatively ineffective (Dunlosky et al. 2013). Therefore, I would put highlighting into Quadrant 3—high behavioral but low psychological engagement. Some potential reasons for the lack of effectiveness include: • Learners may not have enough background knowledge to underline the most important or relevant content. • Learners may highlight too much information thus diminishing any cueing effect. • While underlining draws attention to the marked sentences, it may not encourage organization or integration processes critical to learning. In fact, one experiment showed that learners who underlined a history chapter scored lower on inference test questions (but not on factual test questions) on a two-month delayed test (Peterson 1992). The Dunlosky and others (2013) review concluded that “in most situations that have been examined and with most participants, highlighting does little to boost performance.” Overall, highlighting involves minimal transformation of content.
Active Learning Guideline 2 Discourage the common study technique of underlining in favor of more productive activities.
Do Questions Improve Learning From Explanations? Engagement through questions can occur in a number of contexts; for example, self-study, such as when learners create and study flashcards to learn facts and concepts, or assigned exercises, such as questions posed throughout a lesson. These instructional methods all involve behavioral activity and, when questions are effectively framed, will promote transformation of content. Evidence supports use of questions. Instructional psychologists refer to these kinds of activities as "practice testing"—exercises or learning activities completed in class or independently as low-stakes or no-stakes learning exercises. The benefits of practice exercises have been shown not only for recall learning but also for inference and problem-solving. McDaniel and others (2012) assigned psychology students a weekly online practice activity. The online activity included a mix of assignments. Some of the activities involved practice questions with feedback, while others requested "restudy" of class content. Some of the course content was not included in either of the online practice assignments. Subsequent unit exams included questions presented during practice sessions as well as new questions. The summary of results is shown in Figure 4-4. This research showed the benefits of practice tests not only for the same questions that later appeared on the exam but also for new questions related to the content practiced. A review of multiple research studies led the Dunlosky team to rate the utility of practice exercises as high. "Testing effects have been demonstrated across an impressive
range of practice-test formats, kinds of material, learner ages, outcome measures, and retention intervals." Based on the benefits of various forms of practice tests, I classify them in Quadrant 2—high behavioral and high psychological engagement. Figure 4-4. A Comparison of Learning From Online Activity Assignments
[The figure charts course exam grade in percent for three conditions (practice questions with feedback, restudy, and no practice), both for questions included in the practice or restudy assignments and for new questions. Based on data from McDaniel and others cited in Dunlosky (2013).]
Do Questions Improve Learning From Textual Explanations? Questions placed at the end of textbook chapters or throughout online explanations represent one format for practice testing. Research from the 1970s found better learning on a final test when students were required to answer questions compared to no-question control groups. The greatest benefit occurred when questions were placed after rather than before the reading because students focused not only on the questioned information but also on additional content in the chapter. Further, deeper learning was found to accrue from higher-level questions compared to questions that asked for recognition or regurgitation of verbatim text information.
Roelle and Berthold (2013) conducted an experiment that compared learning from on-screen text explanations on the topic of management theory among learners who answered questions in an on-screen box versus learners who could make their own notes in the box but were not asked questions. Some learners had completed prework that increased their prior knowledge before reading the explanations. The team found that answering questions promoted learning among low prior knowledge learners but not among those who had greater prior knowledge as a result of prework. Perhaps individuals with higher prior knowledge found the questions redundant, turning question responses into a source of extraneous cognitive load.
Do Clicker Questions Improve Learning From Classroom Lectures? Let’s apply the research on questions added to textual explanations to large lecture settings. Mayer, Stull, and others (2009) found that learners who answered questions in a lecture with a response clicker gained on average a one-third grade point improvement compared to learners who either answered questions on paper or were not asked questions. Response systems such as clickers are applicable to lecture formats common in higher education, classroom training settings, and conferences. For best results, display a conceptual or prediction-type objective question (such as multiple choice) periodically throughout a presentation. Allow learners time to respond and then project the voting results. Promote discussion of the answers and conclude with an explanation of the correct answer. Clickers have been demonstrated to have positive learning benefits by promoting psychological processing of information through a simple behavioral response and feedback mechanism.
Although you could implement a similar technique using a show of hands, the clicker technology indicates when everyone has responded and displays the aggregated responses in a bar or pie chart on the screen, making the feedback discussion more focused. Clickers are one way to move a passive Quadrant 4 environment into a Quadrant 2 context. Zhu and Urhahne (2018) evaluated the use of clickers for five weeks in several math classes taught by different teachers. They found that not only did clickers lead to better learning but also that teachers using clickers made more accurate judgments of student learning. Better judgments of learning help instructors focus additional support where it is needed, either on the whole class or on individuals. Shapiro and Gordon (2012) and Anderson and others (2013) also report data supporting the benefits of clickers. Taken together, we have consistent evidence for the value of response systems in classroom settings.
Active Learning Guideline 3 Use questions throughout your training events to promote learning not only of questioned content but adjunct content as well.
Do Self-Explanations Improve Learning? So far, we have reviewed the effectiveness of assigned questions. What if learners spontaneously generate their own questions and responses? The answer to this question has led to one of the most powerful and broad-based engagement methods that applies across many topics and media: self-explanations. Imagine two students studying a physics text—in particular, studying some example problems in the text. One student takes time
to explain the example to herself by relating the specifics of the example to the principles of the lesson. The second student looks over the example but gives it only a shallow review, focusing on the specifics of the example rather than the principles it reflects. Which student would learn more? In a classic experiment, Chi and others (1989) compared learning processes among physics students given problems to study. Those who spontaneously generated many self-explanations scored twice as high on the post-test compared with those who generated only a few explanations. Since then, many studies have validated the benefits of self-explaining. In fact, self-explanations will be a fundamental method that we will revisit throughout the book. A 2018 meta-analysis analyzed 69 experiments on self-explanations involving nearly 6,000 students (Bisra et al. 2018). It found an overall effect size of 0.55 among learners who responded to questions or were directed to self-explain content. These findings put self-explanations among a small handful of highly effective instructional methods. An analysis of the various experiments found that self-explanations were more effective than providing learners with instructional explanations—partially as a result of the content transformation sparked by self-explanations. Self-explanations can be promoted by asking learners to respond to questions as discussed in the previous paragraphs. In addition, students can be trained to self-explain on their own. Ainsworth and Burcham (2007) compared understanding of blood circulation in the heart between a group of students that received self-explanation training and a group that did not. The training included examples of self-explanations that might be generated while reading a text. Those trained to self-explain demonstrated better learning on all post-test questions.
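To put the 0.55 figure in perspective, here is a minimal sketch of the standard interpretation, assuming the meta-analysis reports a standardized mean difference (for example, Cohen’s d):

\[
d = \frac{M_{\text{self-explain}} - M_{\text{comparison}}}{SD_{\text{pooled}}}
\]

Using hypothetical numbers, if a comparison group averages 70 percent with a standard deviation of 10 points, a d of 0.55 corresponds to an average advantage of about \(0.55 \times 10 = 5.5\) points, and (assuming roughly normal score distributions) the average self-explainer scores higher than about 71 percent of the comparison group.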
Recent research compared learning from a video lesson among three groups of learners: One group wrote verbal explanations, a second group drew sketches of the content, and a third group rewatched the video. On a transfer test, learners who generated explanations significantly outperformed those who drew sketches as well as those who rewatched the video (Fiorella et al. 2019). The various experiments we have reviewed show the benefits of self-explanations of text-based readings, lectures, and video lessons. While some of your learners may be spontaneous self-explainers, many are not. Further, research shows that it is not the amount of self-explanation that is important so much as the quality of those explanations. For most situations, I recommend that the instructional designer or trainer promote self-explanations by adding questions to the lesson that will prompt productive self-explanations.
Active Learning Guideline 4 Encourage self-explanations of instructional content, including text explanations, video lessons, examples, and diagrams.
Do Teach-Backs Promote Learning?
A teach-back is an assignment that asks participants to review instructional materials and prepare a short lesson for individuals unfamiliar with the content. Five recent reports found similar benefits from teach-back assignments (Fiorella and Mayer 2014; Hoogerheide et al. 2014, 2016, and 2018; Fiorella and Kuhlmann 2019). Three groups of students were given assignments as follows:
1. study to prepare for a test
2. study to prepare to teach the main ideas to others (but did not actually teach the content)
3. study to prepare to teach the main ideas to others and then actually do a teach-back.
Note that in this experiment, group two expected to teach but in fact did not. The evidence showed the best delayed learning among those who both prepared to teach and actually taught, with an effect size of 0.56. In another report, Hoogerheide and others (2018) taught circuit troubleshooting using two worked-out demonstrations followed by a practice problem. Following the lesson, half the learners taught a new worked example in front of a camera to a fictitious peer group, and the other half spent the same amount of time reviewing the new worked example. Consistent with prior experiments, those in the teaching condition scored an average of 72 percent compared to the study group, which scored an average of 57 percent. Measures of arousal using electrodermal activity showed higher levels of arousal as well as higher levels of reported germane cognitive load in the teach-back group. These two indicators suggest greater psychological engagement. Taken together, we have consistent evidence for the learning value of teach-back assignments—even video-recorded teach-backs lacking a live audience. Teach-backs will require mental processing of newly learned knowledge and skills. Having established the value of teach-back assignments, future experiments will evaluate the best conditions for the teach-back. For example, Fiorella and Kuhlmann (2019) report better learning from teach-backs when the learners provided an explanation while also creating drawings compared to those who only provided an explanation or only provided a drawing.
Active Learning Guideline 5 Assign learners content resources to convert into lessons for others unfamiliar with the content. Ask learners to provide an explanation along with relevant drawings. Be sure that learners follow preparation with actual teaching.
Do Drawing Activities Lead to Better Learning? Most any activity that involves a transformation of ideas from the lesson content into the learner’s own words or visuals is a good candidate for productive engagement. One form of transformation involves conversion of textual information into a visual representation. For example, a textual description of a business or mechanical process could be converted into a flow chart or simple sketch. Much of the evidence showing the benefits of drawing has involved science topics in educational settings. To minimize extraneous cognitive load, research suggests that the instruction provide elements of a drawing and ask learners to assemble or complete it. This type of activity could be useful as a drag-and-drop computer exercise (a sketch of the idea appears at the end of this section). The accuracy of the learner’s representation is important; therefore, an opportunity to compare their drawing with an author-provided drawing would offer useful feedback. A recently reported experiment by Schmidgall and others (2019) indicated that it is the presence of an image, rather than the learner’s construction of that image, that leads to better learning. Learners read a text on the biomechanics of swimming and were assigned to construct a drawing, write a summary, read text with drawings provided, or read text only. They found learning was equally good among those who either created a
drawing or reviewed the text with drawings provided. Both the conditions involving visuals led to better learning than the summary assignment. This result supports the value of visuals but suggests that provided drawings are as effective as learner-constructed drawings and would be both more efficient and more consistent than individual learner drawings. In previous paragraphs we saw that a teach-back assignment that required both an explanation and a drawing led to better learning than a teach-back assignment that involved only an explanation or only a drawing. We will need more research to determine the conditions under which it makes sense to assign drawing activities. A different form of transformation involves reorganizing words into a spatial relationship with some form of graphic organizer. Tree charts or concept maps could be used to show interrelationships in the content. Nesbit and Adesope (2006) report an overall effect size of 0.87 for learner-created concept maps. As with the drawing activity described previously, to save time, the instruction could provide a pre-prepared blank map outline for completion. In their review of generative learning activities, Fiorella and Mayer (2015) find strong evidence for the utility of mapping strategies in improving student learning from expository text passages.
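For readers who build their own e-learning, here is a minimal sketch (not from the book, using hypothetical element and slot names) of the logic behind the drag-and-drop completion exercise mentioned above: learners place provided elements onto a prepared background or blank map outline, and their layout is compared with an author-provided model to generate the recommended feedback.

// A minimal sketch of the scoring logic behind a drag-and-drop completion exercise.
// Element and slot names are hypothetical; any authoring tool or framework could
// supply the actual drag-and-drop interaction.

interface Placement {
  elementId: string; // a provided drawing element, e.g., "soap-molecule"
  slotId: string;    // the background region where the learner dropped it
}

interface FeedbackItem {
  elementId: string;
  correct: boolean;
  expectedSlotId: string; // shown so learners can correct their drawing
}

// Author-provided model: which element belongs in which slot.
const modelAnswer: Placement[] = [
  { elementId: "soap-molecule", slotId: "water-surface" },
  { elementId: "dirt-particle", slotId: "fabric" },
];

// Compare the learner's placements with the model and report item-level feedback.
function scoreDrawing(learnerPlacements: Placement[]): FeedbackItem[] {
  return modelAnswer.map((expected) => {
    const learner = learnerPlacements.find(
      (p) => p.elementId === expected.elementId
    );
    return {
      elementId: expected.elementId,
      correct: learner !== undefined && learner.slotId === expected.slotId,
      expectedSlotId: expected.slotId,
    };
  });
}

// Example: the learner placed one element correctly and one incorrectly.
const feedback = scoreDrawing([
  { elementId: "soap-molecule", slotId: "water-surface" },
  { elementId: "dirt-particle", slotId: "water-surface" },
]);
console.log(feedback);

The instructional point is not the technology but the comparison against an accurate model, which gives learners the feedback they need to correct their representations.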
Active Learning Guideline 6 Consider assignments that involve transformation of textual information into visual-spatial representations such as flow charts or simple sketches.
The Bottom Line Compare your responses from the start of the chapter to my answers below. Make a note of any instructional techniques you may want to add or edit in your learning environment. A. Active engagement is essential for learning. True. However, it is psychological engagement involving transformation of content, rather than behavioral engagement, that is most important. B. Underlining text is an effective study strategy. False. Evidence indicates that although learners are behaviorally active and believe underlining to be effective, the benefits to learning are minimal. Underlining requires a minimal amount of information transformation. C. High-engagement games promote learning. It depends. Many games are high in behavioral engagement. However, it is important that the game design promote the psychological responses needed to achieve the target skills. See chapter 16 for more discussion on games. D. Teach-backs promote learning. True. Evidence shows that preparing to teach a lesson that includes both an explanation and drawings, followed by actually teaching the lesson (on video or to a live audience), leads to more effective learning than asking students to study for a test.
Applying Engagement Principles to Your Training Learning relies on psychological engagement and transformation of lesson content into unique representations. A combination of behavioral
activity that promotes appropriate psychological engagement followed by feedback will optimize learning. In chapter 11, we will review several evidence-based laws regarding practice exercises, including what kind of practice to develop, how much practice is needed, and where to place practice in your instruction. Based on the evidence described in this chapter, apply the following guidelines to your training to maximize the benefits of engagement in learning:
☐ Early in the planning process, allow time and resources to design and deliver environments that promote behavioral and psychological engagement.
☐ Plan transformation activities, such as teach-backs, that require learners to convert lesson content into unique visual and verbal representations.
☐ Insert questions into explanations that will promote understanding aligned to the instructional goal.
   » Use clickers during presentations.
   » Add questions to explanations.
☐ Encourage learners to self-explain during study in place of less productive techniques such as underlining.
☐ Assign learners to plan, create, and deliver lessons on new content, for example, teach-backs.
☐ Ensure that engagement activities do not impose extraneous load.
Coming Next This chapter completes part 1 of this book, in which we have moved from the common instructional myths reviewed in chapter 1 to the foundations for evidence-based practice. Three core themes make up the foundational ideas of this book:
• types of evidence that you will read in this book, as discussed in chapter 2
• learning processes and cognitive load principles summarized in chapter 3
• transformative basics of productive engagement discussed in this chapter.
All training professionals use a mix of text, graphics, and audio to communicate their content and promote engagement. In part 2, we will review evidence that will guide your best use of these communication modalities in classroom training, video lessons, e-learning, and student workbooks.
FOR MORE INFORMATION
Dunlosky, J., K. A. Rawson, E. J. Marsh, M. J. Nathan, and D. T. Willingham. 2013. “Improving Students’ Learning With Effective Learning Techniques: Promising Directions From Cognitive and Educational Psychology.” Psychological Science in the Public Interest, 14, 4-58. A comprehensive and readable review of evidence-based active learning.
Fiorella, L., and R. E. Mayer. 2015. “Eight Ways to Promote Generative Learning.” Educational Psychology Review, 28, 717-741. A practical and readable review of instructional methods linked to active learning.
Hoogerheide, V., L. Fiorella, A. Renkl, F. Paas, and T. van Gog. 2018. “Enhancing Example-Based Learning: Teaching on Video Increases Arousal and Improves Problem-Solving Performance.” Journal of Educational Psychology, 111, 45-56. Research using biometric and survey data to explore why teach-backs are effective active learning vehicles. An example of using multiple measures to define the why behind instructional methods.
Part 2
Evidence-Based Use of Graphics, Text, and Audio
Chapter 5
Visualize Your Content
The Lost Potential of Graphics
What Is a Graphic?
Graphics and the Brain
Do Visuals Promote Long-Term Learning?
Who Benefits From Graphics?
Beyond the Pumpkin Slide
Complex Versus Simple Visuals
Engaging Learners in Content-Specific Graphics
Engaging Learners in Content-General Graphics
Applying Visuals to Your Training
If you are like many trainers, you are very comfortable in a world of words. Since preschool, your education has focused on verbal literacy. You’ve devoted many hours to reading and writing from the primary grades through college, but you’ve likely had no training in visual literacy. You think and communicate with words rather than graphics. As instruction is increasingly delivered through slides and screens, though, visualization is more important than ever before.
The Lost Potential of Graphics
Why are so many instructional slides or screens filled with text, as in Figure 5-1, or with decorative visuals? Chances are you will hear one of the following reasons:
• Text is much faster to create.
• I’m not an artist.
• My content does not lend itself to visualization.
• Decorative clip art is easy to find and livens up the slides.
• Our learners like materials with visuals.
Figure 5-1. The Lost Potential of Graphics
I have a colleague who produces nature videos that feature animals in exotic settings. I wondered how she got those beautiful visuals that fit the documentary theme and narration so perfectly. “Oh no,” she chuckled. “We shoot lots and lots of visuals and then we write the story to match the most engaging shots.” In video production (and many newscasts), the visuals drive the story. In the world of workforce learning, job roles must drive the story. But the more you can visualize the relevant content, the better for learning. Some of the best e-learning developers have a background in video production. Why? Video relies so heavily on the eyes that one has to think visually from the start. But you don’t need to be a video production expert to leverage the power of graphics. You just need to apply some of the evidence-based guidelines in this chapter.
What Do You Think?
Put a check next to each statement about visuals that you believe is true:
☐ A. Not all visuals are equally effective.
☐ B. Decorative visuals added for interest increase motivation and learning.
☐ C. Often a simpler visual, such as a line drawing, is more effective than a more realistic depiction, such as a photograph.
☐ D. Learner-generated drawings can improve learning.
What Is a Graphic? For learning purposes, a graphic is a visual representation of lesson content. Graphics include three major categories—semantic, pictorial, and relational (Figure 5-2). Semantic visuals bear no similarity to the objects depicted but rather organize words in a spatial diagram to show
relationships among lesson topics. The tree diagram shown is a hierarchical semantic visual. A temporal visual uses words along with shapes and arrows to summarize a process that takes place over time. A relational visual, such as a table or concept map, illustrates coordinate connections among concepts. Figure 5-2. Types of Graphics
A pictorial graphic resembles the objects it illustrates and can be static or animated. Pictorial graphics include typical lesson visuals intended to directly represent elements of the work environment, such as a screen capture from a software application or a line drawing of equipment. Pictorial graphics may be simple, such as a line drawing, or complex, such as a photograph. In addition, graphics may incorporate cues, such as arrows or colors, to direct the learner’s eye to important elements. As you can see, you have a wide range of choices in the type of visuals to illustrate your content. In most cases, adding visuals to your lessons will increase production time and costs of your materials. However, as we saw in chapters 1 and 2, learners prefer lessons with
visuals and some types of visuals lead to increased learning—especially for novice learners. Furthermore, as instruction is increasingly delivered on screens, visualization of content takes full advantage of the media. Investing resources in visuals can be a win-win, resulting in lessons that get better ratings and promote learning. An important question is: What kinds of visuals help and what kinds of visuals hinder learning?
Graphics and the Brain Chapter 3 reviewed basic processes involved in learning, including attention, organization, and integration. Also mentioned was the role of motivation to prompt learners to invest effort in these processes. Effective visuals and visual elements, such as arrows, support one or more of these processes.
Use Cues to Draw Attention to Graphic Elements Novice learners are often distracted by salient visual features rather than those most relevant to the learning objective. Eye-tracking research shows that adding cueing devices can help learners attend more rapidly and longer to important elements in a visual display. Cues can be visual, such as arrows or color, or auditory, such as deeper intonation given to important words in a narration. Research reported in 2018 by Xie and others showed better learning from a science lesson that incorporated both visual and coordinated auditory cues than from lessons that used only visual or only auditory cues. Eye tracking showed that learners viewing lessons with visual and auditory cues attended faster to the cued elements than those studying versions with only visual or only auditory cues.
Animated visuals may especially benefit from cues, since a great deal of visual information is presented, often outside the learner’s control of pacing. However, many of the cues, such as arrows and circles, that are effective for static visuals do not help in animated graphics. In chapter 6, techniques for cueing animations are reviewed.
Use Semantic Visuals to Promote Organization As part of the learning process, the brain makes important connections underlying related concepts or principles. Tree charts and tables are two forms of semantic visuals that summarize lesson relationships.
Use Visuals to Aid Integration Learners must integrate words and visuals in the lesson with one another and with pre-existing knowledge in long-term memory. Successful integration will prompt learners to generate inferences that extend their understanding beyond the words and visuals shown. Butcher (2006) compared three lesson versions on blood circulation through the heart. One version included no graphics. A second version included a simple line drawing, and a more realistic two-dimensional graphic was used to illustrate the third version. The two graphics are shown in Figure 5-3. To analyze psychological processes while studying, Butcher recorded and categorized comments students made while reviewing the lessons. One category was inferential comments that connected and extended ideas stated in the lesson. For example, from a text stating: “As the blood flows through the capillaries in the body, carrying its supply of oxygen, it also collects carbon dioxide. The blood that empties into the right atrium is dark colored. It has picked up carbon dioxide from the body cells,” one student made the following inference: “The blood is dark because of
the carbon dioxide. . . . Oxygen probably enriches the red color of the blood” (Butcher 2006). Note that this relationship was not explicitly stated in the text but was inferred by the learner. A summary of the percentage of statements that included productive inferences is shown in Figure 5-4. Figure 5-3. A Simple and Detailed Diagram of Blood Circulation
From Butcher (2006).
Figure 5-4. Inferences Produced in Three Lesson Versions on Blood Circulation
Based on data from Butcher (2006).
The simple line drawing led to the best learning. Learners studying the lessons with text and diagrams made a higher percentage of productive inferences compared to learners reviewing text alone. Extensive evidence showing the learning benefits of adding visuals to text supports my first recommendation.
Graphics Guideline 1 Promote attention, organization, and integration by using both visuals and text to communicate content.
When diagrams are added to text, learners are able to develop a more complete, more accurate mental model by making inferences about the relationships presented in the lesson. Relevant visuals work because they offer the brain an additional opportunity to build new skills.
Do Visuals Promote Long-Term Learning? Chapter 2 showed that many experiments measure learning immediately following lesson study. However, in workforce development, our goal is to promote learning that will apply to the job over a long time period. Do the benefits of visuals extend beyond immediate completion of a lesson? Schweppe and others (2015) evaluated learning of a pulley system presented with only written text or with text and a picture of the pulley system. Learners were tested immediately after completing the lesson as well as one or two weeks later. Consistent with previous research, immediate learning was more efficient and effective among those studying the version with graphics. But the good news is that this benefit was also seen one or
two weeks later! We look forward to more experiments that measure longer-term learning.
Who Benefits From Graphics? Chapter 2 presented evidence comparing the learning of novice versus experienced students from a diagram of a bicycle pump. The data showed that novice learners benefited from graphics more than learners with background knowledge of the content. In other words, it is not a matter of “visual” or “auditory” learners: all learners who are new to a content domain benefit from a relevant visual. Several other experiments have shown similar results—that is, visuals that benefit novices don’t help experts. For example, Brewer and his colleagues (2004) compared understanding of judicial instructions presented with words alone (audio instructions) with instructions presented with words and visuals (audio-visual instructions). If you have ever served on a jury, you know that the judge gives a verbal explanation regarding the legal aspects of the case to consider during deliberations. Brewer’s experiment included two types of mock juries: one made up of typical citizens and a second consisting of law students. The lesson included about 10 minutes of a judge’s auditory instructions for a self-defense trial. The audio-visual version added a flow chart and visuals that corresponded to the judge’s explanations. For example, to illustrate the requirement that the accused believes that their conduct was necessary and reasonable, animations depicted a man pointing a knife at a woman’s throat and the woman kicking the man, believing she had no choice. In a second contrasting animation based on the same scenario, three other people approached and the man dropped his knife. The woman still kicked him, although it was apparent that alternative action was possible. After hearing or hearing
and viewing the instructions, all jurors were tested with a self-defense scenario. As you can see, the scores of novice jurors but not law students improved in the AV version (Figure 5-5). The law students had sufficient legal knowledge to understand the judge’s words without the additional support of visuals. Figure 5-5. Visuals Lead to Better Jury Understanding of Legal Concepts by Novices but Not Experts
Based on data from Brewer et al. (2004).
Experiments that compared the effects of visuals on learning of experts and novices are the basis for my next recommendation.
Graphics Guideline 2 Emphasize visuals for novices more than for learners with prior knowledge of the content.
Beyond the Pumpkin Slide Imagine you are responsible for updating the sales force on the new printer features. Because time is short, you decide to use an explanatory presentation to be delivered through your virtual classroom. With the help of the engineering division, you quickly pull together
slides that summarize the features of the printer. As you review your first draft, however, it seems pretty boring; most of the slides contain text and more text along with a few equipment diagrams. At this stage, you happen to glance at the calendar and realize that it’s October and October means Halloween. Aha! You quickly do a clip art search and find some great jack-o’-lanterns and fall trees to add to your slides. Yes, your slides are more colorful, but what effect, if any, will your embellishments have on learning? If you have used decorative art to enliven your slides, you are not alone. In an analysis of visuals used in school textbooks, Mayer, Sims, and Tajika (1995) found that pages were about evenly divided between text and illustrations. When they analyzed the type of illustrations used, the overwhelming majority served no useful instructional purpose. In other words, they were “pumpkin” graphics. In more formal terminology, visuals like these are called decorative graphics. Decorative graphics are designed to add visual interest or humor to the material. For example, the visual shown in Figure 5-6 was created for a customer service e-learning course. Figure 5-6. Decorative Graphic Example A photograph looks nice on this screen but is irrelevant to the instructional content.
With permission from Training & Development Magazine.
The course developer’s organization had set a standard that all e-learning illustrations be photographs of happy people. While photographic visuals resulted in a consistent look and feel among the slides, this standard limited the visual options that could be used to support the instructional message. To improve the potential learning value of the visual, the revision is designed to summarize the concept of gift certificate access (Figure 5-7). While the revised visual may not be as “pretty,” chances are it will promote learning more effectively. Figure 5-7. Revised Graphic Example A revised graphic illustrates the concepts described in the text.
With permission from Training & Development Magazine.
Can Decorative Visuals Defeat Learning? You might agree that a decorative visual does not promote learning, but does it really do any harm? Let’s see what the evidence tells us. Mayer and his colleagues (2009) evaluated learning from two lessons on how lightning forms. A basic version included text and visuals that illustrated the process of lightning formation. A spiced-up version added some interesting visuals and discussion about lightning. For example,
a visual of an airplane struck by lightning was accompanied by a brief description of what happens to airplanes in the presence of lightning. Another visual showed the burns on a football player’s uniform caused by a lightning strike. The research team added several interesting visuals like these to the basic lesson. Select all the statements that you think summarize the outcomes from this experiment:
☐ A. Learners found the enhanced versions more interesting.
☐ B. Learning was better from the enhanced versions.
☐ C. Learning was worse from the enhanced versions.
☐ D. Learning was the same since the core content was the same in all versions.
It’s true that learners did find the versions with added anecdotes and visuals more interesting than the basic versions. Recall the research reviewed in chapter 1 showing that students gave higher ratings to all lesson versions with any form of graphic, including irrelevant and relevant visuals (Sung and Mayer 2012a). Unfortunately, higher ratings do not correlate with better learning. Whether presented in a paper-based version or via computer, the basic lesson versions that omitted the interesting visuals led to better learning! In five experiments that compared a concise with an embellished version, the median effect size of 1.66 (a high effect) favored the concise versions. That means that a lesson that resulted in an average score of 75 percent with a standard deviation of 10 would boost performance to more than 91 percent if extraneous graphics were cut! The correct answers to the previous question are A and C. Why do decorative visuals—even visuals related to the topic—defeat learning? Imagine that your brain starts to form an understanding of warm air rising and producing an updraft, then cooling and condensing into a cloud. Then on the next screen, you review some text
and visuals about airplanes struck by lightning. Following the airplane story, you see that the top of the cloud extends above the freezing level, leading to the formation of ice crystals. Next you review some statistics and visuals about the hazards of lightning to golfers. Well, you get the idea. You can see that just as you are building an understanding of lightning formation, you are distracted by an interesting but irrelevant visual. Because visuals are so powerful, graphics unrelated to your instructional goal at best do not contribute to understanding and at worst actually depress learning! Decorative or unrelated visuals may be most detrimental to the learning of those inexperienced with the content. Rop and others (2018) reported that mismatched visuals initially depressed learning compared to lessons with matched visuals or no visuals. However, as the learners gained more experience with the content, they increasingly ignored mismatched pictures, gradually extinguishing the negative effects. As the pulley research by Schweppe and others (2015) that measured immediate and delayed learning showed, we benefit from research studies that evaluate learning outcomes over time—not just immediately after learning.
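As a check on the effect-size arithmetic cited above for the concise versus embellished lessons, the conversion follows the same standardized-mean-difference logic discussed earlier:

\[
\text{expected gain} \approx d \times SD = 1.66 \times 10 = 16.6 \text{ points}, \qquad 75 + 16.6 \approx 92
\]

which is the source of the “more than 91 percent” figure. The exact value depends on the shape of the score distribution, so treat it as an approximation rather than a guarantee.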
Are Decorative Visuals Ever Useful? Research published in 2018 considered whether some decorative visuals might be effective for learning. Four different decorative graphic styles were compared to a no-graphic lesson on the human body. Some graphics were designed to evoke positive feelings, while others generated negative reactions. Some graphics were strongly associated with the lesson topic, while others, similar to the airplane graphic reviewed in the lightning research, were weakly associated with the topic. Examples of strongly associated positive graphics as
well as weakly associated negative graphics are shown in Figure 5-8. Did visuals that elicited positive feelings lead to better learning than those that were more negative? Take a look at the results summarized in Figure 5-9. Figure 5-8. Positive Strongly Associated vs. Negative Weakly Associated Graphics in Lesson on Human Body
From Schneider and others (2018).
Figure 5-9. Transfer Learning for Positive (+), Negative (-), Strongly, or Weakly Associated Graphics
Based on data from Schneider and others (2018).
Compared to the version without graphics, positive decorative visuals were more effective, especially positive visuals that were strongly related to the text. In contrast, negative visuals, whether strongly or weakly related, were not more effective than the version with no visuals. The research team concludes that “interspersed decorative pictures are conducive to the learning process when such illustrations elicit positive emotional states and foster cognitive processing by a strong connectedness to the presented text” (Schneider et al. 2018). In general, for skill-building lessons, I recommend the following guideline.
Graphics Guideline 3 Emphasize graphics that are strongly related to the content. Decorative visuals that induce positive feelings and are related to the content can improve learning compared to lessons with no graphics.
Complex Versus Simple Visuals Is a photograph more effective than a line drawing? In other words, is a realistic representation more effective for learning than a simpler version? Recall the research on blood circulation comparing learning from lesson versions that added a simple and complex visual to the text. While both simple and complex visuals led to better learning than no visuals, the simple version was more effective than the more complex version. Keep in mind that the learning goal was to understand the process of blood flow through the heart. Had the goal been to identify the parts of the heart, a more realistic rendition may have been more effective. Several experiments have shown that often a simple line
drawing is more effective for learning than a more realistic graphic, such as a photograph or detailed sketch. Naturally, the benefits of a simpler graphic will depend on the background knowledge of the learner and the instructional goal. For novice learners and an instructional goal of understanding a process, evidence suggests simpler versions.
Graphics Guideline 4 Use simple rather than complex graphics, which may add extraneous cognitive load.
So far, we have focused on visuals and visual techniques, such as cueing devices like arrows to direct attention, that promote learning. You can extend the benefits of visuals by engaging learners with them. The next section reviews research on making content-specific visuals engaging and using organizational visuals to improve content-general learning.
Engaging Learners in Content-Specific Graphics A content-specific graphic is one that represents concepts specific to the subject area. A diagram of the heart in a physiology class or a schematic of electrical circuits in a troubleshooting class are two examples.
Learning From Drawing
Chapter 4 reviewed evidence on the benefits of learner-generated drawings. Do you think that learning from drawing would be:
☐ A. Better due to higher engagement with the text?
☐ B. Depressed due to the cognitive load of having to create a drawing?
☐ C. Dependent on the accuracy of the drawing produced?
Schwamborn and colleagues (2010) asked students to read a scientific text explaining the chemical process of cleaning with soap and water that included the concepts of surface tension and the effects of detergent on water. Some students were directed to create drawings as they read. Rather than draw from scratch on a blank slate, learners were given elements of the drawing and the background (Figure 5-10). On a problem-solving test, learners with drawing assignments scored more than double compared to learners who read the text with no drawing assignment. The research team evaluated the student drawings and divided them into high- and low-accuracy depictions. They found that those producing the higher-accuracy drawings scored almost double on a problem-solving test compared to those who drew with lower accuracy. The answers to the previous question are A and C. Figure 5-10. Support Elements Provided for a Drawing Assignment of a Chemical Process
From Schwamborn and others (2010).
In a review of 15 research studies that compared learning from drawing textual information with learning from other activities, including reading the text, reading the text with illustrations, or
writing summaries, Van Meter and Garner (2005) identified three conditions that maximize the learning benefits of drawing. First, the student-generated drawing must be accurate. To ensure accuracy, learners should get feedback on their drawings, such as by viewing an accurate version and correcting their own drawing. Second, learners require support to manage cognitive load. For example, in the drawing assignment illustrated in Figure 5-10 rather than draw from scratch, learners used pre-prepared elements and a pre-prepared background. Having access to prepared elements of the diagram reduced cognitive load that otherwise would be devoted to creating those elements. As a result, learners can devote their working memory resources to depicting the relationships among the elements. Third, drawings benefit conceptual understanding and problem solving more than recall-level performance. Most of the research reports in their review used science or mathematics content involving students ranging from kindergarten to college age. A 2018 review of research by Fiorella and Zhang on the effects of drawing identified several conditions that influence learning. Just about all experiments that compare drawing with a group that reads or re-reads lesson text report positive effects of drawing on learning with medium to high effect sizes. Perhaps the act of drawing engages learners more than re-reading text. In fact, when drawing is compared to other study techniques, such as writing text explanations of the content or reviewing instructor-provided drawings, results have been mixed. The reasons for different outcomes may reflect differences in study time between groups, such as a drawing group devoting more time than groups reviewing provided drawings. In addition, the amount of guidance provided to learners in various experiments ranged from minimal (“Draw the following . . .”) to extensive, including drawing
training, using a partially completed drawing, or comparing learner drawings to instructor drawings. Keep in mind that general prescriptions (for example, “drawing improves learning”) must be considered with moderating factors, such as time studying, amount of guidance, and how the comparison group studies. The authors suggest “when an appropriate illustration can be made available to students, asking students to instead create their own drawings may be inefficient.”
Learning by Self-Explaining Visuals The previous experiments showed how learners can gain understanding by constructing their own graphics. Alternatively, you could insert short questions that prompt learners to give verbal explanations of a lesson diagram. The value of self-explanations was discussed in chapter 4. For many years, we’ve known that learners who generate self-explanations of content learn more than learners who mostly paraphrase it. Can learning from graphics be encouraged by self-explanations? Figure 5-11. A Diagram and a Text Version of a Lesson on Blood Circulation
From Ainsworth and Loizou (2003).
Ainsworth and Loizou (2003) compared learning of the circulatory system by college-aged students who studied from a text description or from labeled diagrams that used arrows and color to illustrate blood flow. Examples of the diagram and the text lesson versions are shown in Figure 5-11. Participants were asked to generate explanations as they studied either the diagram or text version. These explanations were recorded and analyzed. All students took a test that required them to draw the blood path as well as answer multiple-choice items that included factual and inference questions. Not surprisingly, those in the diagram conditions scored better on the drawing test. However, they also scored higher on the multiple-choice test. Those studying the diagrams generated more self-explanations than those studying text, even though they devoted significantly less time to studying the materials. This research showed that study of labeled and cued diagrams led to more self-explanations and better learning than study of text alone. Notice that the best learning arose from assignments in which learners were required to convert a visual modality to a verbal modality. Perhaps having to transform ideas from one modality to another, such as visual to verbal, promotes deeper processing of the graphic. Based on evidence to date, I recommend the following guideline.
Graphics Guideline 5 Engage learners by asking them to convert verbal explanations to visuals (with help and feedback) or to generate verbal explanations of diagrams.
Engaging Learners in Content-General Graphics A content-general graphic is a semantic visual that can be used to organize content from different domains. These graphics can be applied to diverse types of content in contrast to content-specific graphics. A tree diagram that learners must complete while reading a text is a typical example.
Using Visual Organizers to Plan Position Papers or Presentations Imagine you had to learn relationships among a number of different items, such as company products. Your job role requires you to explain the pros and cons of different products and make a reasoned recommendation. Completing tasks of this type can be aided by asking learners to engage with visual organizers.
Using Tables to Categorize Related Information Asking learners to organize content with tables has been found to be more effective than either preparing an outline or taking free-form notes. In one experiment, a text described a number of features about various wildcats. Not into wildcats? Think product knowledge. Kauffman and others (2011) asked learners to take online notes on the content using either an open-ended text box, an outline that listed wildcat names, or a table that listed names across the top row and characteristics down the left column. The table format resulted in the best learning. A table can offer clues to the completeness of notes. Empty cells signal missing information. Reviewing tabularized information also helps learners to compare and contrast a series of concepts across their features. For example, a series of products could be listed across the top
row and features, such as benefits, dimensions, recommended uses, and pricing, could be listed along the left column.
Graphics Guideline 6 Use semantic graphic formats to guide the learner’s organization of content.
The Bottom Line Now that you have reviewed the key evidence-based guidelines for visuals that best support learning, revisit the questions below and see if you changed any of your ideas from the chapter introduction. A. Not all visuals are equally effective. True. Recall the No Yellow Brick Road theme from the first chapter. The benefits of a visual will depend on your learning goal, the design of the visual itself, the background knowledge of your learners, and the engagement of learners with the visual. B. Decorative or thematic visuals increase interest and learning. True and False. Evidence indicates that when used in excess, decorative visuals that are unrelated to the instructional objective at best do not improve learning and in the worst case can degrade learning through distraction. However, learners do like materials with visuals, and research has shown that decorative visuals that generate positive feelings and are related to the content can be beneficial compared to lessons with no visuals.
C. Often a simpler visual, such as a line drawing, is more effective than a more realistic depiction, such as a photograph. True. In general, render a graphic in its simplest form congruent with the learning objective. D. Learner drawings can improve learning. True. Learner-generated graphics can improve learning if the drawing mechanics do not consume extraneous mental load and are accurate. To ensure accuracy in learner representations, provide support and feedback.
Applying Visuals to Your Training
Whether you are using slides in the classroom or preparing screens for e-learning, visuals can be one of your most powerful allies. Although you will need to invest more time and effort compared to producing materials in which words or decorative graphics predominate, when your learners are new to the content and when your goal is meaningful learning, we have ample evidence for a return on investment in relevant graphics. Consider the following questions as you review your content:
☐ What relationships are important in my lesson?
☐ Are there visuals I can use to reinforce those relationships?
☐ Do I have multiple topics that could benefit from a semantic graphic such as a flow chart, table, or tree diagram?
☐ Am I trying to illustrate a process or procedure that involves changes over time and could benefit from a temporal semantic or pictorial visual?
☐ Have I included visuals that will distract from the learning goal? If yes, either delete or move those visuals out of the body of the lesson.
☐ Have I selected or been assigned graphic standards that are counterproductive to learning?
☐ How can I engage learners with visuals, either by asking them to create visuals, explain visuals, or use visuals as an organizational aid?
Coming Next Animations, including videos and rendered graphics, are increasingly easy to produce on slides and in computer-based learning materials. Instructional scientists have focused sufficient research on animations to warrant a separate chapter. The next chapter discusses how and when to use animations in ways that improve learning.
FOR MORE INFORMATION
Clark, R. C., and C. Lyons. 2011. Graphics for Learning, 2nd edition. San Francisco: Pfeiffer. A detailed and comprehensive book on the use of visuals in instructional materials.
Brewer, N., S. Harvey, and C. Semmler. 2004. “Improving Comprehension of Jury Instructions With Audio-Visual Presentation.” Applied Cognitive Psychology, 18: 765-776. A real-world context exploring the benefits of adding visuals to jury instructions.
Butcher, K. R. 2006. “Learning From Text With Diagrams: Promoting Mental Model Development and Inference Generation.” Journal of Educational Psychology, 98:1, 182-197. A readable comparison of different graphic formats and text regarding their impact on learning. Examines the psychological mechanisms of graphics.
Mayer, R. E. 2017. “Instruction Based on Visualizations.” In Handbook of Research on Learning and Instruction, edited by R. E. Mayer and P. A. Alexander. New York: Routledge. A recent review of the psychology and evidence behind graphics. This handbook includes a number of chapters useful for practitioners.
Renkl, A., and K. Scheiter. 2017. “Studying Visual Displays: How to Instructionally Support Learning.” Educational Psychology Review, September, 29:3, 599-621. A readable and practical review article focusing on techniques to optimize learning from visuals.
Van Meter, P., and J. Garner. 2005. “The Promise and Practice of Learner-Generated Drawing: Literature Review and Synthesis.” Educational Psychology Review, 17:4, 285-325. A review that draws on multiple experiments to derive major conditions for which drawing assignments will lead to effective learning. Includes a section on classroom applications as well as a section on empirical research.
Chapter 6
Learning From Animations
Still Visuals Versus Animations
Animations for Different Topics and Goals
How to Maximize Learning From Animations to Teach Procedures
Who Benefits Most From Procedural Animations?
Is Learning Better in Immersive Virtual Reality?
How to Manage Mental Load in Animations
Applying Animations to Your Training
As graphics capabilities have evolved, animations—from simple motion cues to three-dimensional virtual immersive environments—are common elements of multimedia instruction. Are animated visuals more effective than stills? Do some topics or learning goals benefit from animations more than others? Are immersive virtual reality (IVR) animated environments as effective as simpler presentations, such as slide shows? Are there some techniques that maximize learning from animations? These are some of the questions to be reviewed in this chapter.
What Do You Think?
Suppose you are developing a lesson on how a mechanical process works—for example, how a toilet tank flushes. You could present the process with a series of still visuals or with an animation. Which do you think would lead to better comprehension?
☐ A. The animated lesson
☐ B. The lesson with still visuals
Still Visuals Versus Animations The blood circulation experiment summarized in chapter 5 showed that a simpler visual depiction of blood circulation through the heart resulted in better learning than a more complex visual. How do still visuals and animations affect learning? Imagine that you want to teach how a toilet works—a potentially very useful piece of knowledge. You could show a series of still visuals explained by text, or you could play an animation with audio narration. At first glance, the animated version seems like a more effective approach; it depicts a more realistic picture of the movement among the various stages. But this is not what the experiments revealed. In four different lessons involving
explanations of toilet flushing, lightning formation, how brakes work, and wave formation, the still versions led to learning that was better than or as good as the animated versions (Mayer, Hegarty, et al. 2005). An animation conveys a great deal of complex visual information that, in the experiments described, ran continuously. In contrast, the still graphics were simpler and the learner could control the rate at which they viewed them. Animated visuals can present a flood of information that quickly overloads memory capacity. In addition to the sheer amount of information, it is also possible that learners are more actively engaged with the stills than with the animations. Often when we view an animation, we go into “couch potato” mode—a mindset that sparks little psychological engagement. In contrast, as we view a series of stills, our brains are more actively engaged to put together a story from the pieces. Ironically, what we would commonly classify as an old-fashioned medium (still pictures) might engage the brain more than a modern animated illustration. Along the same lines, Paik and Schraw (2013) found that learners viewing an animated version of how a toilet flushes tended to rate the content as less difficult than still versions. Rating the content as less difficult leads to an overly optimistic assessment of one’s learning and thus less invested mental effort. In short, there are several possible downsides to blanket use of animations in your lessons.
Animations for Different Topics and Goals Do some instructional goals warrant animations more than others? In all of the experiments summarized in previous paragraphs, the goal was to build an understanding of how something works. Understanding requires a deep level of processing to build a mental model. However, what if you are teaching someone how to perform a procedure,
such as tying knots, assembling equipment, or working with new software? Might animations be more effective for this type of outcome?
Animations Versus Still Visuals to Learn Procedures Ayres and others (2009) compared learning motor skills, such as tying a knot or solving a 3-D puzzle problem, from a series of stills versus an animated demonstration. No practice was included. Learners viewed the demonstrations and then were tested without reference to the visuals. For both motor tasks, learning was about twice as good from the animated demonstrations.
Animations Versus Still Visuals to Illustrate Dynamic Change Principles that involve dynamic changes, such as acceleration, may benefit from animations that explicitly illustrate those changes. For example, Kuhl and colleagues (2018) evaluated learning the law involving velocity of planets in an elliptical orbit either from an animation that illustrated different speeds of planets in orbit or from a series of stills. Understanding was better among those viewing the animation, which clearly illustrated changes in orbit speeds. The research team added questions to groups using the stills and the animations, thinking that perhaps answering questions would compensate for the information disadvantages of the still visuals. Learners reviewing the animations were better able to answer the questions than those reviewing the static visuals, which in turn led to better understanding. The research team recommended providing animations to illustrate content that requires an understanding of dynamic changes.
Comparing Animations, Still Graphics, and Text Instructions As Performance Support Most of the experiments reviewed so far targeted understanding and application as a learning goal. However, the use of performance support to guide task operations is a popular strategy in workforce environments. One form of performance support is a job aid that includes guided instructions to be used when needed on the job. The overall goal is accurate task completion—not necessarily learning. Performance support is especially helpful for tasks that are not routine, such as troubleshooting a copy machine. Watson and others (2010) compared animated, static, and text work instructions as performance support for an assembly sequence of a small device. They evaluated build time performance over five trials with instructions presented in animation (no words), static diagrams (no words), or text (no graphics). Take a look at Figure 6-1. Which representation was most efficient (lowest time) on the first build? How about on the fifth build?
Figure 6-1. Average Build Times in Seconds Over Five Building Trials (line graph; y-axis: total build time in seconds, from about 200 to 1,100; x-axis: build number 1 through 5; separate lines for Text, Diagrams, and Animation instructions). Adapted from Watson et al. (2010).
The data for the first build is most relevant to a performance support application. The first build data is equivalent to one-time assembly situations. Next in importance is data from subsequent builds, which would be applicable to situations that involve repeated performance. After several builds, all instructions were equally effective. We can see that the text instructions were least efficient for the first build. Although animations led to faster build times than still diagrams, the difference was not statistically significant. In other words, the animations and still diagrams were equally effective during the first builds. Naturally, over several builds, efficiency improved in all conditions because performers learned the task. Any spatial representation (still diagrams or animations) is more effective than text descriptions as guidance for a spatial procedural task that will be performed on rare occasions. For tasks that will be performed routinely, a textual description will be slower initially, but after a couple of task completions will not differ from visual representations as workers learn the procedures. I would opt for still visuals because they are relatively inexpensive to produce and boost performance efficiency from the start. Perhaps the best solution would involve a series of still visuals accompanied by simple text—a condition not tested in the experiment. In summary, it seems that the benefits of animations will depend on the desired outcome. The evidence available at this point on the use of animations for different outcome goals suggests the following guideline.
Animation Guideline 1 Use animations to illustrate procedures and dynamic motion changes, and use a series of still visuals to illustrate processes.
How to Maximize Learning From Animations to Teach Procedures Take a look at the two demonstrations (Figures 6-2 and 6-3). Figure 6-2. Third-Person Perspective in Demonstration of Circuit Assembly
From Fiorella et al. (2017).
Figure 6-3. First-Person Perspective in Demonstration of Circuit Assembly
From Fiorella et al. (2017).
Which version do you think would be better for learning to perform a circuit assembly task like this? As you can see, the main difference is the perspective the learner has when viewing the demonstrations. Fiorella and others (2017) compared learning of the circuit
assembly tasks shown from either a third-person or a first-person perspective. They measured assembly accuracy and time to complete the assembly for lower and higher complexity circuits from both perspectives. For the higher complexity circuits, assembly was faster and more accurate with the first-person perspective (Figure 6-3). Similar results were shown for a knot-tying procedure illustrated from either an over-the-shoulder or a third-person perspective (Garland and Sanchez 2013). Based on these two reports, which involved three experiments, I recommend the following guideline.
Animation Guideline 2 To illustrate how to perform complex tasks, create animated demonstrations from the performer’s perspective rather than a third-person viewpoint.
When you prepare an animated demonstration of tasks similar to the circuit assembly example, does it matter whether you show the hands as part of the animation? Marcus and others (2013) compared adults learning to tie knots from animated demonstrations. One group viewed a demonstration with hands, one without hands, and a third viewed static screen shots from the animation with no hands. Both animated versions led to better learning than the static images, which is consistent with the research discussed so far. Learning was the same in versions with and without hands, although using hands resulted in greater efficiency in learning and performance. The research team concluded: “While it is not critical to include the hands in instructional animations, there are potentially more benefits if hands are included.”
Animation Guideline 3 Demonstrate manual procedures with over-the-shoulder perspectives that include hands.
Who Benefits Most From Procedural Animations? Spatial ability is an individual difference that might affect learning from animations. Individuals with high spatial ability are able to visualize spatial rotations and changes. Professions that tend to attract individuals with higher spatial aptitude include architecture and engineering. Does spatial ability influence learning from animations? Lee and Shin (2012) tested the effects of animation or still visuals on learning to perform a 13-step printer cartridge replacement task. They found that the animated versions helped low-spatial ability learners more than high-spatial ability learners, who learned equally well from either stills or animations. Presumably, those with high spatial ability are able to perform their own mental animations when viewing a series of still visuals. Since animations did not hurt high-spatial ability learners, I recommend providing animated demonstrations, which will help low-spatial ability learners.
Is Learning Better in Immersive Virtual Reality? More recent technologies include virtual worlds (VW) and immersive virtual reality (IVR). Is there evidence on the effectiveness of these higher-fidelity environments? Virtual worlds are computer-simulated 3-D environments that include representations of elements
such as humans, landscapes, and product brands. Participants adopt avatars to navigate in space, create and manipulate objects, and interact with other avatars. Immersive virtual reality involves use of a head-mounted display or glasses and hand controllers, such as the Oculus Rift. Learners can have a fully immersive sensory experience in almost any space imaginable. Virtual worlds were the buzz several years back but so far have not gained major traction in workforce learning. Yoon and George (2013) reported a survey showing that organizations were not adopting VW because competitors who did adopt it did not benefit and many adopters had abandoned their VW projects. IVR is a more recent evolution of virtual reality environments. Is there evidence for benefits of IVR? Parong and Mayer (2018) compared learning of how cells in the human bloodstream work, presented either in a 12-minute immersive animation of the circulatory system or in a self-paced PowerPoint slideshow that was viewed for approximately eight minutes. Review the examples of each (Figures 6-4 and 6-5). The slideshow group took less time to learn and scored better on the post-test, with a high effect size of 0.92. However, the students in the IVR group gave higher motivational ratings, indicating more positive feelings about learning from an IVR lesson compared to a slideshow. The authors concluded that "students who viewed an immersive VR lesson reported significantly higher ratings of motivation, interest, engagement, and affect than students who viewed a slideshow lesson covering the same material, but scored significantly worse on a post-test."
Figure 6-4. An Example From an Immersive Virtual Reality Lesson on Blood Cells
From Parong and Mayer (2018).
Figure 6-5. An Example From a PowerPoint Presentation on Blood Cells
From Parong and Mayer (2018).
The diminished learning outcomes could reflect extraneous information in the immersive lesson, because the 360-degree view was not essential to understanding the content. For example, when the learner was in the bloodstream, various blood cells were constantly moving past, and the learner could look in any direction to follow these movements. Second, the IVR experience was a continuous lesson that learners could not control, whereas learners could control the rate of presentation in the slideshow. Parong and Mayer (2018) concluded: "There may not be strong research evidence to invest in the costly job of converting conventional science
instruction or even simulations in desktop virtual reality into simulations in immersive virtual reality." Van der Land and others (2013) evaluated different visual representations on a decision-making task involving selecting an apartment. Participants viewed one of three different representations of the apartment space: a 2-D layout, a 3-D static layout, and a 3-D immersive representation. College students were asked to select their preferred apartment solo and then in a team of three. They found that the immersive environment best supported individual selection, but the 3-D static representation was more effective for a team decision. The immersive representation also received a higher rating of cognitive load. It may be that immersive environments will be most useful for learning goals with an emotional context. For example, repeatedly practicing how to fire an employee in an immersive environment may help prepare learners for the real-life episode. We will look for future research on the best applications of immersive learning.
Animation Guideline 4 The evidence to date shows that learners are excited about immersive technology, although these environments may not lead to better learning or group decision making.
How to Manage Mental Load in Animations Because they are transient and can display a great deal of information in a short time, animations can overload working memory capacity. Since the first edition of this book was published 10 years ago, animations and animation modifications have been a popular research focus.
From this research I summarize here some evidence-based techniques you can use to manage the mental load imposed by animations.
Segment the Animation. Mayer (2017) reports a large effect size of 1.0 for lessons that break content into chunks that are accessed at the learner's desired pace. This technique is called segmentation. Spanjers and others (2011) contrasted the effects of segmentation of animations on novice and experienced learners. They reported that segmentation benefited low prior knowledge learners but not high prior knowledge learners. This makes sense, as the animation is likely to impose more cognitive load on novice learners than on those with some background experience. In a follow-up study, Spanjers and others (2013) compared learning among four versions of an animated lesson on statistical probability: a nonsegmented version and three segmented versions using either brief pauses, darkening of the screen, or both pauses and darkening of the screen. Only the versions with pauses led to better learning.
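For readers who build self-paced e-learning in a web environment, segmentation can be implemented by pausing the animation at planned break points and resuming only when the learner chooses to continue. The TypeScript sketch below is a minimal illustration of that idea, not a prescription from the studies above; the element IDs, the breakpoint times, and the Continue button are hypothetical.

```typescript
// Minimal sketch: pause a video-based animation at planned segment
// boundaries and resume only when the learner clicks Continue.
// All IDs and breakpoint times are hypothetical.
const animation = document.getElementById("lesson-animation") as HTMLVideoElement;
const continueButton = document.getElementById("continue-button") as HTMLButtonElement;

// Segment boundaries in seconds, chosen to match logical chunks of content.
const segmentEnds = [12, 27, 45, 60];
let nextSegment = 0;

animation.addEventListener("timeupdate", () => {
  if (nextSegment < segmentEnds.length && animation.currentTime >= segmentEnds[nextSegment]) {
    animation.pause();               // brief pause at the chunk boundary
    continueButton.hidden = false;   // learner controls the pace
  }
});

continueButton.addEventListener("click", () => {
  continueButton.hidden = true;
  nextSegment += 1;
  void animation.play();             // resume the next chunk
});
```

Consistent with the Spanjers findings above, the active ingredient in this sketch is the pause under learner control, not any visual effect at the boundary.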
Add Visual Cues to Direct Attention to Relevant Portions of the Animation Because there can be so much movement in an animation, the learner may not know where to direct their attention. More obvious movements will tend to draw the eye more than subtle movements, which might actually be more relevant to the instructional goal. Several experiments have investigated the instructional benefits of adding visual cues to animations. Boucheix and others (2013) found that color cueing led to better understanding of an animation. Color was used to illustrate the progression of movement in a mechanical process. For example, a red line illustrated movement among one group of elements
while a blue line illustrated movement among a second independent group of elements. Another form of effective cueing involves highlighting the relevant portion of the animation as shown in Figure 6-6. De Koning and others (2007) highlighted different parts of a heart diagram in an animated lesson on blood circulation. They reported better understanding from the lesson versions that highlighted important elements of the animations as they were discussed. Figure 6-6. Contrast Highlighting As Cues in an Animation
From De Koning et al. (2007).
A third form of cueing involves zooming in on important information at each step of the process. Amadeiu and others (2011) compared learning a neurological process from an animation that did or did not include a zoom that focused attention on relevant parts of the display. Learning was best with the zoom effect, but only after three exposures to the animation.
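In a web-based lesson, the highlighting form of cueing can be driven by the narration timeline: the part of the graphic under discussion is highlighted while the narration covers it and released afterward. The TypeScript sketch below is only a hypothetical illustration of the idea; the element IDs, cue times, and CSS class name are assumptions, not details from the cited experiments.

```typescript
// Minimal sketch: add and remove a highlight on parts of a diagram
// as the audio narration reaches predefined cue points.
// Element IDs, times, and the "cued" CSS class are hypothetical.
const narration = document.getElementById("narration-audio") as HTMLAudioElement;

// Each cue names the diagram part to highlight and when to highlight it.
const cues = [
  { partId: "right-atrium",    start: 4,  end: 11 },
  { partId: "right-ventricle", start: 11, end: 19 },
  { partId: "lungs",           start: 19, end: 28 },
];

narration.addEventListener("timeupdate", () => {
  const t = narration.currentTime;
  for (const cue of cues) {
    const part = document.getElementById(cue.partId);
    if (!part) continue;
    // Highlight only while the narration is discussing this part.
    part.classList.toggle("cued", t >= cue.start && t < cue.end);
  }
});
```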
Engage Learners in Animations The fundamental benefits of active learning were discussed in chapter 4. Two experiments show how engagement techniques aid learning from animations. One experiment tested the benefits of learner drawings after viewing an animation, and the other evaluated the effects of self-explaining after viewing a cued animation.
Drawing Assignment Mason and others (2013) presented an animated demonstration of Newton's Cradle. You're probably familiar with Newton's Cradle even if you did not know its name: five pendulums hang in a row, the outermost pendulum is pulled back and released, and the animation depicts the effects of this pendulum on the movements of the others. Three different groups viewed the animation and then were assigned one of the following activities:
• Sketch six drawings that depicted the initial state and the subsequent phases of pendulum movement.
• Trace the given states by joining the dots on a prepared worksheet.
• No drawing assignment.
Learners were tested immediately after the exercise and two months later. Those who generated drawings showed the best understanding on both the immediate and the delayed tests. In fact, there was no difference between those who traced the drawing and those who did no drawing; a tracing activity evidently did not demand much mental investment. When the research team evaluated the quality of the self-generated drawings, it was not surprising that the more accurate drawings correlated with higher test scores. Therefore, feedback on student drawings is important to ensure accuracy.
Self-Explanation Assignment In previous paragraphs, we reviewed the benefits of cueing animations to help draw attention to important elements. In a follow-up study, De Koning and others (2011) combined cueing with self-explanations of an animated lesson on the circulatory system. The lesson versions were animations with and without cueing (similar to Figure 6-6), combined with instructions for either self-explaining or no assignment. First, the learners viewed a static diagram with the parts of the circulatory system labeled. Next, they studied one of two versions (with or without cues) of a five-minute animation on how the human cardiovascular system worked. Neither animation included words or pauses, and both ran without navigational control. For each of the two animation versions, half the learners were asked to self-explain as they viewed the animation and half were not. The inference test included open-ended questions to assess the causal relationships in heart circulation, such as: "What causes the valves of the heart to open?" Students who engaged in self-explaining learned more, and the combination of self-explaining and cueing yielded more correct inferences than self-explaining uncued animations. In the experiment on immersive virtual reality described earlier, Parong and Mayer (2018) evaluated whether adding engagement to a biology immersive lesson (shown in Figure 6-4) led to better learning. They found that asking learners to summarize segments of the immersive lesson increased learning and did not dampen motivation and interest. Based on the evidence regarding management of mental load in animations, I offer the following guideline.
Animation Guideline 5 When developing animations, manage cognitive load for novice learners with segmenting, cues, and engagement activities.
The Bottom Line This chapter started with a comparison of viewing either an animated demonstration or a series of stills depicting a mechanical process (toilet flushing). Evidence indicated that the lesson with still visuals was more effective for comprehension of process stages. However, these results do not apply to learning of procedures or of principles involving dynamic motion changes. Animated presentations shown from an over-the-shoulder perspective support better learning of procedural tasks. To maximize benefits, manage mental load in animations by inserting pauses, cues, or brief engagement activities. We don’t have enough evidence yet to make a judgment on immersive virtual environments. As the technology matures, we will most likely find content and outcomes that benefit from immersive experiences as well as techniques to manage potential mental overload.
Applying Animations to Your Training Compared to still visuals, animations can be effective, especially for learning procedures and relationships involving dynamic changes. Use the following checklist as you start to plan your animations and when reviewing a draft lesson. Do my animations:
☐ Illustrate how to perform a procedure?
☐ Illustrate content that involves dynamic motion changes that cannot be readily shown in still graphics?
☐ Use an over-the-shoulder perspective?
☐ Manage mental load by inserting pauses at logical places and by including visual cues such as highlighting?
☐ Manage mental load through assignments such as drawing or explaining?
Coming Next Now that we have reviewed evidence on graphics, we will turn to ways to best explain them. Should you present words with text, audio, or both text and audio? Whether preparing for classroom or computer-mediated instruction, this research will guide your decisions.
FOR MORE INFORMATION
Clark, R. C., and R. E. Mayer. 2016. e-Learning and the Science of Instruction: Proven Guidelines for Consumers and Designers of Multimedia Learning, 4th edition. Hoboken, NJ: Wiley. See chapter 4. Our book includes a number of chapters that focus on evidence-based methods in the context of e-learning.
Fiorella, L., T. van Gog, V. Hoogerheide, and R. E. Mayer. 2017. "It's All a Matter of Perspective: Viewing First-Person Video Modeling Examples Promotes Learning of an Assembly Task." Journal of Educational Psychology, 109:5, 653-665. A very practical research study that demonstrates the value of creating demonstrations from a first-person viewpoint.
Mayer, R. E. 2017. "Instruction Based on Visualizations." In R. E. Mayer and P. A. Alexander (eds.), Handbook of Research on Learning and Instruction, 2nd edition. New York: Routledge. A readable historical review of research and guidelines for use of visuals. This handbook includes a number of chapters related to topics in this book.
Parong, J., and R. E. Mayer. 2018. “Learning Science in Immersive Virtual Reality.” Journal of Educational Psychology, 110:6, 785-797. An interesting research report focusing on motivation and learning from a new technological approach.
Chapter 7
Explaining Visuals
The Power of Explanations
Use of Text and Audio to Explain Visuals
When Not to Use Words
Should You Explain Visuals With Text, Audio, or Both?
Maximizing the Benefits of Audio Narration
When to Explain Visuals With Text and Audio
Explanations and the Brain
When to Explain Visuals With Text Alone
Where Should Text Be Placed?
Applying Explanations to Your Training
Imagine that you have lined up a set of visuals for your lesson. You have identified some screen captures to illustrate a computer procedure. Next you need to write the words to describe the actions in the visuals. Many trainers believe that words placed in text plus audio narration of those words accommodate different learning styles as well as learners with visual or auditory disabilities. Do you usually explain visuals on slides or screens with audio, text, or a combination? This chapter reviews evidence that recommends optimal ways to express words that describe visuals.
The Power of Explanations Chapters 5 and 6 showed that relevant visuals can dramatically improve learning. However, most visuals are not self-explanatory. For meaningful learning, you need both visuals and words to communicate your content. First, you need to determine if your visual needs an explanation. If it does, you need to decide whether to present words in audio, text, or both text and audio. If you decide to use text, you need to decide where to place the text in conjunction with the visuals. For example, should you place it at the bottom or on the side of each slide to be consistent? Or should you position the text nearby the visual? Or does it matter where you place text as long as it is consistent? Whether you use text or audio, you need to decide how detailed and lengthy to make your explanations.
What Do You Think? Look at these technical examples (Figures 7-1, 7-2, 7-3, and 7-4). The content is basically the same in all of them. What varies is how and where words are presented. After you review each example, mark your opinion about which versions present words most effectively.
Figure 7-1. Version A: Use of On-Screen Text to Explain a Visual. (A slide titled "CLARK TECHNICAL TRAINING: Engine Maintenance" shows an engine diagram keyed a through d: a. Main Assembly, b. Regulators, c. Discharge Valve, d. PMSVT. A block of explanatory text sits at the bottom of the slide: "The main assembly on the 555-C Model engine includes two hydraulic pumps that share the same regulators and discharge valve. The two pumps in the new model are replaceable MGT type pumps that deliver oil to the main chassis and control ...")
Figure 7-2. Version B: Use of On-Screen Text and Audio to Explain a Visual. (The same slide as Version A, with the bottom text also delivered as identical audio narration.)
Figure 7-3. Version C: Use of On-Screen Text to Explain a Visual. (The same engine diagram with the part names, Discharge Valve, Regulators, Main Assembly, and PMSVT, shown as callouts on the visual, and with the explanatory text placed next to the diagram.)
Figure 7-4. Version D: Use of Audio to Explain a Visual. (The same engine diagram with the part names in callouts; the explanation is delivered by audio narration only.)
Which version do you think presented the words most effectively?
☐ Version A (Figure 7-1) because the layout is the cleanest.
☐ Version B (Figure 7-2) because the explanations are presented in text and in audio narration of that text.
☐ Version C (Figure 7-3) because the explanation is presented in text that is located near the visual.
☐ Version D (Figure 7-4) because the explanations are presented in audio only.
Use of Text and Audio to Explain Visuals This chapter focuses on words used to explain visuals. The words may be presented in text or in an audio narration or both. The combination of the visual and the words should support the learning objective. For example, when teaching a procedure, your demonstration uses words to describe the actions illustrated in the visuals. When teaching a principle, such as how to defuse a tense customer situation, you debrief a video demonstration with words. When teaching a business process, such as the performance appraisal cycle, you explain a cycle chart with words. When describing parts of equipment, you use words to label the equipment and to explain how it works (Figures 7-1, 7-2, 7-3, and 7-4). In all of these instances, the combination of a relevant visual and words in text or in audio or both are used to communicate the content.
When Not to Use Words Take a look at the three-part sequence (Figure 7-5). Would it be best to explain that visual with words in audio or in text? The answer is neither! Research has shown that adding words to a self-explanatory visual adds extraneous cognitive load and slows down learning
or performance. Prior to playing Minecraft, my grandsons built structures out of Legos. I was amazed to see some very elaborate structures using hundreds of small Lego pieces. Most surprising are the directions that rely on visuals alone, such as the example shown (Figure 7-6). Since Legos are intended for younger children as well as multilingual players, it makes sense to rely on self-explanatory, step-by-step visual instructions. Figure 7-5. A Visual on Buckling Your Seatbelt
Figure 7-6. A Portion of Legos Instruction
Available online from Legos: http://cache.lego.com/bigdownloads/buildinginstructions/4120526.pdf.
Explanation Guideline 1 Do not add words in any format to a self-explanatory visual.
Should You Explain Visuals With Text, Audio, or Both? As we saw in chapter 1, it's a common misconception that some individuals are "visual" learners while others are "auditory" learners, and that we therefore need to support both styles by presenting words in text as well as in narration. The example shown in Figure 7-2 describes the equipment with text plus audio narration of that text. If the lesson is delivered online, the text would appear on the screen and the audio would be delivered through narration. If the lesson is delivered in a classroom, the text would appear on the slide and the instructor would read it aloud. Is this a good idea, or would learning be better if you used only text or only audio to describe the visual? Fortunately, we have evidence to guide your choice of modalities to present explanations of visuals.
When to Explain Visuals With Audio Narration Many experiments have compared learning from visuals explained by words in audio with learning from the same visuals explained by the same words in text. In fact, enough of these studies have been done to support a meta-analysis (Ginns 2005) and a systematic review of the data (Moreno 2006). The experimental lessons synthesized in the meta-analysis covered varied content, including mathematics, electrical engineering, environmental science, and explanations of how brakes and lightning work. The Ginns meta-analysis reported that learning was consistently better when visuals were explained by
audio narration with a high effect size of 0.72. This means that if a lesson that used text to explain visuals resulted in an average score of 70 percent with a standard deviation of 8, the same lesson that used audio would yield an average score of about 76 percent. In a more recent review, Mayer (2017) reports that in 17 experiments he found strong and consistent support for use of audio narration to explain graphics with a high effect size of 1.0. The variety and amount of evidence on the use of audio leaves us with a high degree of confidence in the second recommendation:
Explanation Guideline 2 Explain visuals with audio narration rather than text to maximize learning.
When teaching in the classroom, incorporate relevant visuals on your slides and have the instructor discuss them. When teaching via self-study e-learning, place relevant visuals on the screens and explain them with audio narration.
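If you want to see the arithmetic behind an effect size such as the 0.72 reported above, here is a minimal worked example using the same illustrative numbers (a 70 percent average and a standard deviation of 8): the expected score in the audio condition is the comparison group's mean plus the effect size times the standard deviation.

```latex
% Worked example: translating an effect size (d) into an expected score.
% M_text = 70 percent, SD = 8, d = 0.72 (illustrative numbers from the text).
\[
M_{\text{audio}} \approx M_{\text{text}} + d \times SD
                 = 70 + 0.72 \times 8 \approx 76 \text{ percent}
\]
```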
Maximizing the Benefits of Audio Narration There are some exceptions to Guideline 2. Audio explanations will be most effective when:
• The lesson includes a visual that requires an explanation. If the visual is self-explanatory, such as those shown in Figures 7-5 and 7-6, use no words at all. If there is no visual, words are better comprehended when presented in audio with key text phrases written on the screen or slide (Adesope and Nesbit 2012).
• The narration is brief. Words presented in audio remain in working memory longer than words presented in text and thus provide an additional sensory code, like a brief mental echo. However, with explanations of more than five sentences, the benefits of audio disappear. Schuler and others (2013) found no differences in learning the stages of mitosis when animations were accompanied by six paragraphs of six sentences each, presented in either written text or audio. Adesope and Nesbit (2012) conclude in their meta-analysis that a combination of text and audio is better than audio alone for longer passages.
• The visual is complex and learners are novices to the content. A relatively complex visual imposes more cognitive load than a simple visual. Therefore, an audio explanation is most beneficial with a more complex visual, such as an animation or the engine visual shown at the beginning of this chapter.
• A complex visual incorporates cues. A cue such as highlighting or color directs attention to the relevant part of the visual as the narration discusses that part. For example, if complex equipment is described, the relevant part of the equipment could be highlighted as the narration describes that part.
• The explanation is in the native language of the learners. Learners with a different primary language may benefit more from text and audio narration of that text than from audio alone (Adesope and Nesbit 2012).
Explanation Guideline 3 Explain visuals with brief audio narration rather than text when the visual is complex, cues are used to draw attention to elements of the visual, learners are novices, and the words are in the primary language of the learners.
When to Explain Visuals With Text and Audio As mentioned earlier, it's common practice to explain a visual with words in text and audio narration of that text to accommodate different learning styles or to comply with requirements for learners with visual or auditory disabilities. There is, however, quite a bit of evidence showing that in most situations, learning is depressed when you deliver the same words in identical text and audio (Adesope and Nesbit 2012). Experiments compared learning from lessons in which visuals were explained with audio alone (Figure 7-4) with lessons in which visuals were explained with audio plus on-screen text that repeated the words in the audio narration (Figure 7-2). The results? Mayer (2017), summarizing five experiments conducted in his laboratory, found an average effect size of 0.7 when redundant on-screen text was removed from a slide or screen. The ample evidence from many experiments is the basis for the next recommendation.
Explanation Guideline 4 Explain visuals with audio narration or text but not identical text and audio.
How can you apply this principle and accommodate learners with disabilities or perhaps individuals learning in a secondary language? I recommend that as a default, you explain a visual with brief audio—as narration in e-learning or as instructor explanations in the classroom. At the same time, provide an option for the hearing impaired. In e-learning, you can include an "audio control" icon. When audio is turned off, on-screen text appears. In synchronous e-learning, use closed captioning. In a classroom, sign language or a handout with
slides and text can augment the instructor's presentation.
Are there any situations in which a combination of text and redundant audio does not harm learning? I often see a combination of brief text and audio to explain a visual, such as the example shown in Figure 7-4. In the visual, you see callouts in text that are excerpted from the audio narration. Evidence has shown that the use of limited on-screen text elaborated upon by brief audio does not hurt and can even help learning. Yue and others (2013) tested three versions of a lesson about the life cycle of a star that included nine slides with a total of 24 sentences of explanations. The visuals were explained with audio alone, audio plus brief on-screen text excerpts, or on-screen text identical to the narration. A fourth group had no visuals—audio only. You can see the results of the three versions on an application test in Figure 7-7. How do you interpret these results? Learning was best from explanations of visuals that were given in:
☐ A. Audio alone
☐ B. Audio plus brief text excerpts
☐ C. Audio plus identical text
Figure 7-7. Percentage Correct on Transfer Test (bar chart comparing Animation + Audio, Animation + Audio + Identical Text, Animation + Audio + Abridged Text, and Audio Only; y-axis: percent correct on the transfer test, 0 to 0.4). Based on data from Yue et al. (2013).
As you can see, learning in the abridged text group was better than the other three conditions. Option B gave the best results.
Explanation Guideline 5 Explain visuals with brief audio narration of a slide or screen that includes a visual and short text phrases drawn from the narration.
Explanations and the Brain Chapter 3 showed that working memory has two processing centers—one for auditory information and one for visual. When you explain a complex visual with text, the visual center of working memory is overloaded (Figure 7-8). In contrast, when you explain a complex visual with brief audio, you divide the load between the visual and the auditory centers (Figure 7-9). In this way, you maximize the limited capacity of working memory to process information. This interpretation is supported by cognitive load and eye-tracking evidence reported by Liu and others (2011). In comparing visuals explained by on-screen text, on-screen text and narration of that text, or narration alone, they found that students rated cognitive load lowest in the version with audio-only explanations. Compared to versions with text, visual attention was focused solely on the graphic in the audio-only version. When a complex visual is explained by text, attention is generally directed to the text. In the case of an animation, the learner will miss much of the visual while reading the text explanation.
Figure 7-8. Text and Visuals Can Overload Visual Center in Memory
Figure 7-9. Use of Narration to Explain Visuals Balances Memory Load
Synchronization of lengthy text with narration of that text poses an additional load on the brain. Each of us reads at our own rate, which in turn is different from the rate of the narrator. Trying to synchronize our own reading rate with that of the narrator adds unnecessary mental overhead. To compensate, most of us either ignore the text and listen to the narration or turn off the narration and read the text. The bottom line is: Use brief audio to explain complex visuals and avoid a combination of text and redundant narration of that text. However,
learning is not hurt and may be helped by adding a few words in text on the screen drawn from the narration. So what about audio and visual learning styles? The reality is that we are all visual learners—that is, we can all profit from a relevant visual when learning unfamiliar knowledge and skills. Furthermore, we are all auditory learners—that is, we can all benefit when a complex visual is described by brief audio narration rather than overloading our visual center with text.
When to Explain Visuals With Text Alone Of course, there are situations where you cannot use audio to explain a visual. A book like this one is an example. However, even if you can use audio, if the content is complex and the explanation is lengthy, evidence leans in favor of giving explanations in text. Because a lengthy audio explanation is transient, by the time the last words are heard the initial words will be forgotten. In contrast, text can be read and reviewed at the learner’s own pace. The best solution is to keep explanations brief or chunk complex explanations into short segments, each focusing on a different element of a complex visual.
Where Should Text Be Placed? When you use text to explain a visual, where should you position the text? For example, it could be placed at the bottom of the screen as in Figure 7-1. Alternatively, it could be integrated into the visual as in Figure 7-3. Does the placement of text matter? Have you ever been reading a book and found an important diagram located on the back of the page describing that diagram? To get a full understanding of the content, you need to flip back and forth to read the text and then review the visual. How does that make you
feel? Most of us find this an annoying experience! That annoyance is actually your working memory complaining about the extra burden of having to hold content in its limited capacity while viewing related content and trying to integrate both elements. This experience is not just your imagination. Researchers have measured learning from layouts like the examples in Figures 7-1 and 7-3. For example, Johnson and Mayer (2012) tracked eye movements and measured learning from three versions of a lesson on how brakes work that included words and visuals. In one version the description of the process was placed under the diagram. In a second version the words were integrated into the diagram (Figure 7-10). A third version was similar to the second but added part labels to the top diagram. They found that there were more eye movements between the text and the related portion of the diagram in the second and third version (integrated versions) than in the separated version. This division of attention between text and diagram illustrates attempts by the learners to integrate the content. In addition, learning was better in the integrated versions. Figure 7-10. The Lesson Version That Integrated Text and Visual.
(Text callouts integrated into the diagram: 1. When the driver steps on the car's brake pedal, 2. a piston moves forward inside the master cylinder. 3. The piston forces brake fluid out of the master cylinder and through the tubes to the wheel cylinders. 4. In the wheel cylinders, the increase in fluid pressure makes a smaller set of pistons move outward. 5. These smaller pistons activate the brake shoes. 6. When the brake shoes press against the drum, the wheel stops or slows down.)
From Johnson and Mayer (2012).
A 2018 meta-analysis that evaluated learning from integrated versus separated words and pictures found a high effect size of 0.63 in favor of integrated text (Schroeder and Cenkci 2018). The authors conclude that “Integrated designs in multimedia-based instruction benefited learning regardless if the material was presented through a computer or through paper and pencil materials” (p. 697). Along the same lines, Mayer (2017) reported that in eight experimental comparisons, integrated text benefited learning with a high effect size of 1.0. The synthesis of multiple experiments forms the basis for the sixth principle.
Explanation Guideline 6 Align text close to the relevant portion of the visual; at a minimum, keep visuals and text visually accessible on the same spread or screen.
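For web-delivered lessons, one minimal way to apply this guideline is to position each short text callout inside the figure container, next to the element it describes, rather than in a caption block below the figure. The TypeScript sketch below is only an illustration of the layout idea; the container ID and the percentage coordinates are hypothetical, and the callout wording echoes the brake example in Figure 7-10.

```typescript
// Minimal sketch of spatial contiguity in a web lesson: place short text
// callouts over the relevant parts of a diagram instead of below it.
// The container ID and the percentage coordinates are hypothetical.
interface Callout {
  text: string;
  xPercent: number; // horizontal position within the figure, 0-100
  yPercent: number; // vertical position within the figure, 0-100
}

const callouts: Callout[] = [
  { text: "1. When the driver steps on the car's brake pedal", xPercent: 5, yPercent: 75 },
  { text: "2. A piston moves forward inside the master cylinder", xPercent: 30, yPercent: 15 },
];

const figure = document.getElementById("brake-diagram") as HTMLElement;
figure.style.position = "relative"; // callouts are positioned against the figure itself

for (const callout of callouts) {
  const label = document.createElement("span");
  label.textContent = callout.text;
  label.style.position = "absolute";
  label.style.left = `${callout.xPercent}%`;
  label.style.top = `${callout.yPercent}%`;
  label.style.maxWidth = "14em"; // keep each callout short, per the guideline
  figure.appendChild(label);
}
```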
Integrated Text and the Brain A 2019 report by Makransky and others focused on multiple measures of learning and cognitive processes from a lesson that either separated or integrated text with the diagram. Consistent with preceding paragraphs, learning was best among those who studied the integrated version. The research team found that subjective ratings of extraneous cognitive load were higher among those studying the separated text versions. Eye tracking showed that those studying the separated text version spent more time on irrelevant elements of the graphics. Research that uses multiple measures, such as learning, student ratings, and eye tracking, will help define not only what works (integrated rather than separated text) but also why a method works (extraneous load reduced).
There are many common violations of this simple principle in all delivery media. For example, in e-learning, a scrolling page shows a visual at the top of the page and the text underneath the visual. When you view the visual, you can’t see the text and vice versa. On a paper job aid, text that describes a procedure is placed under the illustration. A better solution is to insert text captions right into the relevant portion of the visual. Another common violation is to split an exercise between a workbook and a computer application. The learner reads the steps or the data in the workbook and tries to apply them on the computer. A better solution is to include all of the steps on the computer. No matter your delivery medium, separating text and related diagrams leads to split attention and imposes unnecessary mental load on your learners. Mayer (2009) summarizes the problem: “The format of much instruction is determined by tradition, economic factors, or the whim of the instructor. Cognitive factors are rarely considered, resulting in instructional designs in which split attention is common.” That said, I’m hoping you won’t experience too many instances of separated text and visuals in this book. In all of my books I have found that publisher templates often place text on one page and the relevant visual on the back of that page. It would be better if some figures could be reduced in size and placed on one side of a page with text wrap. Customization of layouts, however, incurs additional cost and, as a result, is rare. As an author, I have worked with book proofs to minimize split text and visuals by editing out text or eliminating figures so that the remaining text and figures end up on the same page spread. I mention this not just to make excuses but to illustrate how practical realities, such as the limitations of your delivery media or budget or time, will shape your products. Evidence is never the only factor to be considered in design and development of instructional environments!
The Bottom Line Based on Figures 7-1 to 7-4, you marked the following statements indicating which presentation of words was best for learning. Revisit this exercise to see if your responses have changed.
• Version A (Figure 7-1) because the layout is the cleanest. Ineffective. This version is likely to lead to split attention because two different text segments are displayed separately from the visual.
• Version B (Figure 7-2) because the explanations are presented in text and in audio narration of that text. Ineffective. This option will likely prove to be the worst for learning. We have seen evidence that learning is depressed when explanations are presented in text and audio narration of that text. In addition, the on-screen text is placed at the bottom of the screen, leading to split attention.
• Version C (Figure 7-3) because the explanation is presented in text that is located near the visual. Effective. If audio is not an option, this example is the most effective because the text is integrated into the figure.
• Version D (Figure 7-4) because the explanations are presented in audio only. Effective. This version is the most effective because it uses audio narration to describe a complex visual. It also displays new technical terms in text callouts placed on the visual.
Applying Explanations to Your Training Regardless of your delivery media, you will generally need words to explain your visuals. Apply the following guidelines to maximize learning from those words.
If your delivery environment supports audio:
☐ Use brief audio narration alone to describe a complex visual on the screen.
☐ Use brief audio narration plus a few words in text taken from the narration, placed in bullet points nearby the visual.
☐ Add visual cues, such as highlighting or colors, to your graphic to correspond with the narration.
☐ Avoid using text sentences and identical narration of those sentences to explain a visual.
Avoid using audio alone to describe visuals under the following conditions:
☐ New technical terms are introduced; use text and audio.
☐ Learners need to refer back to the words, such as directions to complete an exercise; use text.
☐ There are no visuals on the screen or slide; use audio and key terms in text.
☐ Your learners are experienced and not likely to be overloaded.
☐ The lesson is in a second language; use text and audio.
☐ The explanation is lengthy—for example, more than five sentences per screen or screen section; chunk content or use text.
☐ The visual is self-explanatory; do not use any words.
If your delivery environment does not support audio:
☐ Enlarge the visual on the slide and integrate text into the visual.
☐ Use callouts to place multiple text statements nearby relevant portions of the visual.
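As a concrete illustration of the audio items above (and of the earlier suggestion to provide an audio on/off control so that learners who cannot hear the narration get on-screen text instead), the TypeScript sketch below swaps between narration and an on-screen transcript. It is a sketch only; the element IDs and the toggle markup are hypothetical.

```typescript
// Minimal sketch: when narration is on, hide the on-screen text so it is
// not redundant with the audio; when narration is off, show the text.
// Element IDs and markup are hypothetical.
const narration = document.getElementById("screen-narration") as HTMLAudioElement;
const transcript = document.getElementById("screen-text") as HTMLElement;
const audioToggle = document.getElementById("audio-toggle") as HTMLInputElement;

function applyAudioPreference(audioOn: boolean): void {
  transcript.hidden = audioOn; // avoid identical text plus narration
  if (audioOn) {
    void narration.play();
  } else {
    narration.pause();
  }
}

audioToggle.addEventListener("change", () => applyAudioPreference(audioToggle.checked));
applyAudioPreference(audioToggle.checked); // honor the learner's initial setting
```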
Coming Next Chapter 8 continues the discussion of explanations by showing how you can maximize learning by leveraging human social instincts.
FOR MORE INFORMATION
Clark, R. C., and R. E. Mayer. 2016. e-Learning and the Science of Instruction: Proven Guidelines for Consumers and Designers of Multimedia Learning, 4th edition. Hoboken, NJ: Wiley. See chapters 5, 6, and 7. Our book presents many of the same guidelines reviewed in this chapter with a specific focus on e-learning.
Kalyuga, S. 2012. "Instructional Benefits of Spoken Words: A Review of Cognitive Load Factors." Educational Research Review, 7:2, 145-159. A comprehensive technical review of the use of audio to explain visuals in the context of cognitive load theory.
Makransky, G., T. S. Terkildsen, and R. E. Mayer. 2019. "Role of Subjective and Objective Measures of Cognitive Processing During Learning in Explaining the Spatial Contiguity Effect." Learning and Instruction, 61, 23-34. An interesting experiment that includes multiple measures, including learning, eye tracking, and student self-reported cognitive load. An example of experiments of the future that will focus on the mechanisms behind methods that improve learning.
Mayer, R. E. 2017. "Instruction Based on Visualizations." In R. E. Mayer and P. A. Alexander (eds.), Handbook of Research on Learning and Instruction, 2nd edition. New York: Routledge. This chapter reviews evidence for guidelines relating to graphics and explanation of graphics.
Schroeder, N. L., and A. T. Cenkci. 2018. "Spatial Contiguity and Spatial Split-Attention Effects in Multimedia Learning Environments: A Meta-Analysis." Educational Psychology Review, 30:3, 679-701. A recent analysis of 58 experiments, which reports overall effect sizes and reviews the contexts in which spatial contiguity is beneficial.
Chapter 8
Make Learning Personable
Leveraging Social Presence
Techniques to Increase Social Presence
What Is Personable Training?
Course Ratings and Social Presence
Personalization and the Brain
A Postscript on Social Media
Applying Personalization to Your Training
A Personalization Checklist
From Facebook to LinkedIn, social media has become mainstream. The 286 million unique monthly visitors to Facebook I reported in 2009 have grown to 2.23 billion active users in 2018! In spite of privacy and "fake news" concerns, various social media platforms are here to stay. The explosion of social media reflects our ancient imperative for human-to-human communication. How can your training environments leverage social presence to improve learning? What evidence do we have about the role of social interaction in learning? This chapter looks at evidence-based methods to leverage social presence through basic techniques, such as how you address your learners, best use of online (pedagogical) agents, and incorporation of collaboration into your instructional events. Even as educational media generate more remote learning environments, learners still value personal experiences.
Leveraging Social Presence One of the unique outcomes of human evolution is our universal sociability. It grew from the survival need for cooperation. Cooperation relies fundamentally on communication—listening, processing, and responding to our social partners to achieve mutual goals. So embedded is this instinctive response that even self-study e-learning that incorporates some features of human persona can activate our social responses. We know in our heads that the computer is inanimate. But we are unconsciously compelled to process information more deeply when the lesson embeds social cues.
What Do You Think? The content in the Excel lesson screens in Figures 8-1 and 8-2 is similar. However, they differ regarding social cues. Compare them and put a check by the statements you think are true. Figure 8-1. Version A of an Excel Lesson
Figure 8-2. Version B of an Excel Lesson
☐ Version A will lead to better learning than Version B.
☐ Version B will lead to better learning than Version A.
☐ Versions A and B will lead to equivalent learning since the content is the same.
Both screens include instructional visuals and instructional words. Version A projects a more conversational tone primarily through the use of first- and second-person pronouns, polite language, and a realistic on-screen virtual tutor. Version B is a bit more formal and does not include an on-screen tutor. You might feel that Version B is more serious and projects greater credibility. Alternatively, you might feel that the casual tone in Version A is more engaging. Or perhaps the most effective social treatment might depend on the content of the lesson or the cultural background of the learner. In this chapter we will look at the evidence and psychology for how learning is affected by social cues.
What Is Personable Training? Personable training refers to instructional environments that embed social cues. Social cues can take the form of:
• use of first- and second-person language—for example, I, you, we
• polite phrases
• responsive agents that support learning processes in digital lessons
• voice quality of e-learning narration
• instructors as hosts
• a narrative rather than an expository writing approach
• instructor responses to discussion boards
• social engagement, such as collaborative problem solving during training events.
Although the potential for social presence is high in a face-to-face classroom setting, often this potential is unrealized. Lectures in a large classroom can be delivered in a traditional academic manner with minimal interaction between learners and instructor or among learners. Alternatively, instructors can create a more intimate environment through their tone, through self-revelation, through body language, and by including collaborative interactions during the session. Personable training increases social presence. Do learners prefer higher or lower social presence? Is learning better in environments with high social presence? If so, which techniques have proven most effective? These are the questions this chapter will consider.
• social engagement, such as collaborative problem solving during training events. Although the potential for social presence is high in a face-toface classroom setting, often this potential is unrealized. Lectures in a large classroom can be delivered in a traditional academic manner with minimal interaction between learners and instructor or among learners. Alternatively, instructors can create a more intimate environment through their tone, through self-revelation, through body language, and by including collaborative interactions during the session. Personable training increases social presence. Do learners prefer higher or lower social presence? Is learning better in environments with high social presence? If so, which techniques have proven most effective? These are the questions this chapter will consider.
Course Ratings and Social Presence Most likely your learners rate the training at the end of a learning session. Typical questions ask about learners’ perception of the instructor and the learning environment. A meta-analysis that included more than 28,000 ratings noted three main factors in the training environment associated with higher student ratings: motivation to attend the training, instructor style, and social presence during learning (Sitzmann 2008). Regarding instructor style, trainers who were psychologically “available” to learners and who projected a relaxed friendly persona received higher learner satisfaction ratings. In addition, opportunities to engage with other class members during the learning event were also correlated with higher ratings. A 2013 meta-analysis found that instructional materials that were written in a conversational
style were rated as friendlier and perceived by learners as being easier to process than materials written in a formal style (Ginns et al. 2013). Although higher class ratings don’t necessarily translate into more learning, by including social cues proven to increase learning you can get better course ratings and improved learning outcomes. The evidence on course ratings is the basis for my first guideline.
Personable Guideline 1 Promote higher learner satisfaction by incorporating techniques that support social presence in your instructional environments.
Personalization and the Brain How do social cues promote deeper learning? The key lies in the greater attention and mental effort we invest in social compared to impersonal messages. This mental effort translates into more effective attention, organization, and integration processes, as discussed in chapter 3. Attending to and deeply processing interpersonal messages likely evolved through eons in which survival depended on mutual cooperation. Your brain is tuned to social cues and unconsciously devotes more attention and mental effort to processing content couched in a more social context. Compare your own mental state when watching TV, attending a keynote speech at a conference, and conversing with a friend. A television program often gets only cursory attention and rather shallow processing. In fact, for many, TV viewing is a route to relaxation by
shutting down effortful processing. A keynote speech may generate a bit more attention, especially if the delivery techniques are effective and content is relevant. Even there, however, you may find your mind wandering sometimes because you go down your own mental rabbit holes or because you know there will be no overt response requirement from the speaker. But when conversing with family and friends either in a group or one-on-one, your level of attention and processing investment is higher because of the mutual satisfaction and unspoken conventions of social engagement. The idea behind the personalization effect is that even in a computerized study environment, learners may feel they are in an implied conversation when the materials are written in a personal style. Let's review some techniques to build social presence as well as the evidence supporting those techniques.
Techniques to Increase Social Presence Whether you are teaching in an instructor-led class or developing asynchronous online learning materials, building social presence into your events will improve learner satisfaction and can also enhance learning. Let’s review six evidence-based techniques to make your learning environments more personable.
1. Is Conversational Language Better Than Formal Language? Compare the two introductions to a botany simulation game (Figure 8-3). What differences do you see? Which do you think would lead to better learning?
Figure 8-3. Two Introductions to a Botany Game
Version A
This program is about what type of plant survives on different planets. For each planet, a plant will be designed. The goal is to learn what type of roots, stem, and leaves allow plants to survive in each environment.
Version B
You are about to start on a journey where you will be visiting different planets. For each planet, you will need to design a plant. Your mission is to learn what type of roots, stem, and leaves will allow your plant to survive in each environment.
From Mayer (2009).
In a series of experiments in which the words were delivered either by text or by audio narration, the more conversational versions that used first- and second-person language resulted in greater learning! In 14 of 17 experimental lessons that used different content and compared learning from first- and second-person language with learning from third-person constructions, results favored use of first and second person (Mayer 2014). Some lessons were standard online tutorials, and some were games. The median effect was 0.89, which is a high effect. That means that on average you could expect a solid boost in learning in lessons that speak directly to the learners using "I" and "you" statements. A 2013 meta-analysis of 74 studies comparing learning from personalized to more formal instructional materials reported a medium effect size of 0.54 on application tests (Ginns et al. 2013). Experiments with personalized materials in English, German, Flemish, or Turkish were included. They reported that at least in these languages, personalization had a similar positive effect on transfer learning. Additional studies in other languages and cultures are needed to evaluate how
cultural or language translations might affect the benefits of personalization. Based on their meta-analysis, the authors suggested that personalization is most effective in lessons of 30 minutes or less with diminishing effects in longer lessons.
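As a reminder (this is the standard statistical definition, not anything specific to these studies), the effect sizes reported throughout this chapter are standardized mean differences of the form:

d = \frac{M_{\text{treatment}} - M_{\text{comparison}}}{SD_{\text{pooled}}}

By common convention, values near 0.2 are considered small, 0.5 medium, and 0.8 or above large, which is why the 0.89 median effect for conversational language is described as high and the meta-analytic effect of 0.54 as medium.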
2. Politeness Pays
A related form of conversational language is politeness. Wang and others (2008) compared two versions of a lesson that taught engineering problem solving through a one-hour computer game called Virtual Factory. The game included a virtual tutor who made suggestions and offered feedback in either a direct or a polite manner. For example, in the direct wording version the tutor might say "Save the factory now," compared to "Do you want to save the factory now?" in the polite version. The more polite version resulted in better learning. Based on strong and consistent evidence to date, I recommend the following.
Personable Guideline 2
Promote deeper learning by addressing learners in a conversational manner using first and second person and polite phrases.
3. Be a Learning Host
In many learning environments such as books, asynchronous digital lessons, or even traditional classroom settings, the author or instructor remains aloof from the learners. They stick to their content and don't make any self-revelations, such as how they may personally feel about the content or specific experiences they have had related to the skills.
Imagine reading a text on parasitology in which the author scientifically explains the adverse effects of various parasites, including how they access their host organisms. Now imagine reading a similar passage in which the author describes a time he was infected even though he was very careful with what he ate and drank. He found that the hotel housekeeping staff rinsed the ashtrays in the toilet, and he had laid his cigarette on one of those ashtrays. Stories are very memorable and must be used carefully to promote the instructional goal. I will have more evidence about stories in the next chapter. In addition to stories, the injection of personal opinion or reactions to content has been shown to improve learning. For example, an instructor might present two views on an issue and reveal their own personal opinion. Mayer (2009) refers to self-revealing episodes as a "visible author" technique and has found that learning improved with the use of a "visible author." Whether writing for a textbook, for e-learning, or for lecture notes for an instructor-led class, you can improve learning by thinking of yourself as a "learning host." A good host makes guests feel comfortable and engages them in the event. Good hosts make themselves available both physically and emotionally to their guests. In face-to-face environments, good instructors demonstrate availability in how they dress, how they greet learners, and how they use diverse social cues to set an informal but productive tone throughout the event. In a similar manner, when authoring texts or e-learning courses, use hosting techniques proven to increase learning and generate better student ratings.
4. Leverage Social Presence in Distance Learning
Sung and Mayer (2012b) identified these core elements that promote social presence in distance education courses that rely on asynchronous discussion boards:
• Social respect—No one likes to feel ignored. Online learners need to feel that their postings are being read and their time and input are valued. Instructors and class participants can show social respect by responding promptly to postings and expressing appreciation for participation.
• Social sharing—Online learners and instructors can exchange values, beliefs, and professional interests related to the course goals.
• Open mind—All participants need to feel free to post relevant opinions about course issues and give constructive feedback to others.
• Social identity—Participants value being addressed by name.
• Intimacy—Social presence is increased when learners and the instructor share personal experiences relevant to the course content.
These five factors were derived by analysis of survey data. The next research step is to validate them by conducting experiments that compare learning and participant satisfaction when engaged with discussion boards that do and do not embed these core elements. Student ratings and learning evidence are the basis for my third recommendation.
Personable Guideline 3
When communicating instructional content in texts, on the computer, in discussion boards, or in the classroom, adopt "hosting" techniques by using social cues that make the environment personable.
As noted in chapter 2, most learning research has been conducted with Western-culture learners, usually with college-aged subjects. Other cultures and generations might respond to different cues that they consider the social norm. Although I have found that workshop participants in the United States are 99 percent receptive to collaborative activities, group discussions in a workshop I led in Norway were not well received. The participants felt I was the expert and did not want to "waste" their time listening to their colleagues. We will need additional research to see how findings regarding social presence are best contextualized to different cultures and instructional contexts.
5. Do Online Agents Improve Learning?
In the in-person classroom, the instructor's persona pretty much dictates the image and voice. However, in asynchronous e-learning, on-screen avatars called pedagogical agents have become popular, and many authoring systems offer a library of characters with easily changeable expressions. For example, Figure 8-1 shows an avatar in an Excel online class. What evidence do we have for the use of on-screen agents to promote learning in digital self-study lessons? A 2011 review of the effects of on-screen agents found 15 well-controlled experiments. Of the 15, only five experiments reported a positive effect of on-screen agents on learning (Heidig and
Clarebout 2011). The research team concluded that asking whether agents improve learning is too broad a question. Instead, a closer look is needed to consider the conditions under which an agent is effective. Specific elements, such as the agent's appearance, voice, and function, are a few variables that might make a difference to agent effectiveness. For example, among studies that have compared different types of agents, one consistent finding is that learning is better when the agent communicates through audio narration rather than with on-screen text. The benefits of audio were discussed in chapter 7.
Should Agents Be Realistic?
Mayer and DaPra (2012) compared the learning and motivational effects of two different agents and a no-agent control in a short slide lesson on how solar cells convert sunlight into energy. The agent was placed on the left side of the screen and the content, including graphics and brief text, appeared on the right. One agent incorporated a number of social responses, including changes in posture and facial expressions, gesturing, and eye gazes to direct attention. The other agent used the same image but remained relatively static on the screen. Both agents used audio narration to explain the information on the slide. Take a look at the learning results in Figure 8-4.
Figure 8-4. Learning From an Animated Agent, Static Agent, or No Agent
[Bar chart of test scores for the animated agent, no agent, and static agent conditions.] Based on data from Mayer and DaPra (2012).
The agent that exhibited human social behaviors such as gestures and changes in facial expressions promoted the best learning. The static agent actually led to less learning than no agent at all. Perhaps the static agent became a distracting screen element? The research team suggested that agents are most effective when they are perceived as a social partner—a response more probable in the presence of a more realistic agent. Learners gave the more realistic agents higher ratings, supporting the idea that social cues promote social presence and, in turn, learning. In 11 out of 11 experiments, agents that reacted in a more human-like manner led to better learning than less responsive agents, with a median effect size of 0.36 (Mayer 2014).
How Should Agents Support Learning Processes?
Online agents can potentially serve a number of functions. For example, they might provide an explanation of a graphic. Or they might provide hints or feedback on a problem. Alternatively, they might direct learner attention by pointing to on-screen objects. In a series of experiments, Wang and others (2018) compared the learning effects of agents directing attention to relevant parts of on-screen graphics (Figure 8-5). In this lesson on synaptic transmission, the agent in Version 1 used gestures to point out elements of the graphic discussed in the narration. Version 2 used the same narration and graphic but did not include an agent. Eye tracking was used to verify where and for how long learners directed their attention.
Figure 8-5. A Science Lesson With and Without an Agent
Transfer learning was better with the gesturing agent, with an effect size of 0.88—a high effect. Eye tracking showed a longer time spent on areas of the graphic signaled by the agent. Follow-up experiments by Li and others (2019) found that an agent pointing to specific elements of the visual focused learner attention and led to better learning than agents that pointed to the visual in nonspecific ways, made generic gestures such as moving an arm, or made no gestures at all. In these experiments, a realistic animated agent supported learning by signaling relevant content in the visual. Additional research is needed to define other functions that pedagogical agents can serve.
Should Learners Choose Their Agents?
The Mayer and DaPra (2012) report included one experiment in which some learners were told that they were matched to an agent exhibiting features they preferred (such as outgoing or calm) while others were told they were not matched. In actuality, all agents were the expressive version described earlier. No differences in learning were noted, suggesting that learner choice of agent may not be as important as ensuring that the agent exhibits the social cues described. Similar results were reported by Linek and others (2010) in an experiment in which learners were given
a choice of a male or female narrator. Regardless of learner choice, the female speaker led to better learning of a mathematical skill. Many questions remain about agents. Are the effects of agents subject to cultural differences among learners? Are some accents better received than others? Should the agent be male or female? Does the content make a difference regarding the effectiveness of an agent? What are the most productive functions of an agent? Based on evidence to date, I recommend the following guideline.
Personable Guideline 4
Use on-screen agents that exhibit social cues, such as changes in expression, posture, and gestures. Ensure that the agent fulfills a valid instructional role, such as guiding attention, giving feedback, or providing explanations.
Does Voice Quality Matter?
Based on the modality principle described in chapter 7, the agent should communicate with audio narration rather than on-screen text. When a narration delivered by a native English speaker in a friendly tone was compared with the same narration produced by a high-quality speech synthesizer, learning was much better from the human voice version. Linek and others (2010) compared learning of probability theory from examples that were illustrated with narrated animations. The narrations used either a male or a female voice. The versions narrated by a female speaker received higher ratings and resulted in better problem solving on the post-test compared to the versions with male narration. However, the benefits of female narration may vary depending on the content.
Accents are another element of voice quality that needs more research. Rey and Steib (2013) compared learning from four versions of a lesson on computer networks. Two used conversational narration, and two used formal narration. In addition, two used standard German, and two used an Austrian dialect. Consistent with the personalization effects discussed at the start of this chapter, the conversational versions led to better transfer learning. However, there was no significant effect of dialect on transfer learning. Mayer (2009) reported better learning from a standard-accented voice compared to a Russian-accented voice among U.S. college-aged learners. We will need more research to verify the conditions under which voice gender or accent affects learning.
6. Does Collaboration Improve Learning?
At the start of this chapter, I noted that classes incorporating opportunities for learners to engage with one another as well as with the instructor get higher student ratings than classes that minimize social engagement. In addition to learner satisfaction, does collaboration during study improve learning? As with many other instructional methods, the answer is: It depends. Consider some of the following questions about collaborative assignments. Is the collaborative group small (two members) or large (four or more)? Is the collaborative assignment structured with defined processes and roles, or is it open ended? Are the tasks challenging or simple? Do the learners have some prior knowledge of the content, or are they primarily novices? Is the collaboration in a face-to-face environment (synchronous) or via an online asynchronous environment? These are just a few of the variables that influence outcomes. So rather than decide that collaboration is good or bad, it's better to ask under what conditions collaboration is an effective method. There
is a great deal of evidence-based research on collaborative learning—enough to justify a book or at least a chapter in itself. Take a look at some of the end-of-chapter references for in-depth information. Evidence shows that depending on one or more of the mentioned conditions, collaboration can lead to better learning compared to an individual learning solo. The reverse is also true. Sometimes the mental costs of collaboration—for example, listening to others and tracking ideas—outweigh the benefits. The effects of collaboration may also depend on a combination of factors. For example, novices facing a challenging task may benefit from expertise in the group. In contrast, experts facing the same task may find group work less efficient with no learning benefits.
Collaborative Problem Solving
In two separate reports, Kirschner and colleagues (2011a, 2011b) evaluated learning from problem solving in which students worked either solo or in a small collaborative team. In the experiments they adjusted the complexity of the problems to be either relatively simple or relatively complex. Their experiment compared four conditions: solo students working simple problems or complex problems, and small teams working simple problems or complex problems. The researchers found that for simpler problems, individual work led to better learning, whereas for more complex problems, teams were more effective. The research team suggested that while collaborative work benefits from the input of several individuals, communication also imposes mental costs. Therefore, for simple problems, the mental load imposed by collaboration took away capacity that could have been used for learning. In contrast, more challenging tasks benefited from a sharing of the mental load imposed by the problems.
Reviews on Collaboration and Learning
Three recent meta-analyses on collaboration report the following:
• Under favorable conditions, collaboration results in better learning than solo learning, with medium effect sizes of 0.3 (Pai et al. 2014), 0.5 (Sung et al. 2018), and 0.42 for knowledge and 0.64 for skills (Chen et al. 2018).
• Online collaboration leads to better learning than face-to-face collaboration, with effect sizes of 0.45 on knowledge and 0.53 on skills (Chen et al. 2018). Asynchronous online collaboration allows flexible time for learners to review and respond to others and may offer greater psychological safety compared to face-to-face collaboration.
• The use of online support tools, including visual representation tools (concept or knowledge maps), or guidance in the form of online scripts improves online collaborative learning (Chen et al. 2018).
• Task complexity coupled with the expertise of the learners moderates collaborative effectiveness. Tasks of lower complexity for a given audience are often more effectively completed solo, while the reverse is true for more demanding assignments.
Personable Guideline 5
Consider collaborative assignments when task demands are relatively high and individuals can benefit from expertise in the group.
A Postscript on Social Media
I began this chapter with some data and thoughts on the status of social media. A review of research on Facebook was positive overall about the potential of social media for learning and called for more research on how to use social media sites efficiently for learning purposes (Aydin 2012). Based on what we have learned from previous research, no doubt the benefits of social media will depend on how the applications are used to support learning and knowledge management. To the extent that the applications manage cognitive load and are congruent with learning goals, they should provide yet another path to extend learning by leveraging social presence. Social media should be applied in focused and deliberate ways that promote deeper processing of content.
The Bottom Line
This chapter began with two samples from an e-learning course (Figures 8-1 and 8-2). Review those samples and see if your answers have changed.
☐ Version A will lead to better learning than Version B.
☐ Version B will lead to better learning than Version A.
☐ Versions A and B will lead to the same learning.
Based on current evidence, I would recommend Version A as incorporating more social cues and leading to better learning. The elements that make Version A more effective include the use of first- and second-person conversational language and the social cues built into the agent. This agent could be improved by more focused gestures pointing to the part of the spreadsheet being discussed. I look forward to additional research on agents to define how best to design them to contribute to learning.
Applying Personalization to Your Training
Whether you are working in a face-to-face classroom or online, leverage the unconscious human instinct to deeply process information that is accompanied by social cues. In the classroom you can earn higher student ratings and improve learning by making yourself psychologically accessible to your learners. In other words, dress and act in a manner that will make your audience feel comfortable. Note that there may be cultural differences in how you implement this principle. In most Western cultures you should greet your learners individually, use eye contact, maintain appropriate physical proximity to your learners, speak in a conversational tone, encourage and respond to comments and questions, and incorporate some collaborative activities when and where appropriate. In synchronous e-learning, you should speak with your learners individually, call them by name, bring learners into the discussion via audio when practical, use a relaxed and conversational tone, and reveal your own experiences and opinions on the content. In asynchronous e-learning or books, maximize social presence through the use of conversational and polite language and learning agents, and by responding to online posts in ways that generate social presence. In any learning environment, consider collaborative assignments for more challenging tasks that involve problem solving.
A Personalization Checklist
☐ Use first- and second-person language in your explanations.
☐ Use a polite conversational tone.
☐ Offer your own relevant experiences and perspectives on the content.
☐ Offer relevant opportunities for social engagement among your participants.
☐ Use online agents that project social cues with gestures and eye gazes and that serve an instructionally relevant purpose, such as drawing attention to parts of an on-screen visual.
☐ Use a friendly voice for narration; avoid narration that may seem unnatural to your learners, such as machine-generated voices.
☐ Generate social presence on discussion boards by applying techniques such as calling individuals by name, fostering an open environment, and sharing personal experiences related to course content.
☐ Make collaborative assignments for relatively challenging tasks.
Coming Next
We've now seen the benefits of such instructional methods as graphics, learning agents, and engagement. Is it possible to overdo some of these techniques? In the next chapter, we will look at evidence suggesting that when it comes to learning, often less is more.
FOR MORE INFORMATION
Chen, J., M. Wang, P. A. Kirschner, and C. C. Tsai. 2018. "The Role of Collaboration, Computer Use, Learning Environments, and Supporting Strategies in CSCL: A Meta-Analysis." Review of Educational Research, 88:6, 799-843. A comprehensive meta-analysis that includes 425 studies. Focuses on the effects of online collaboration on a variety of outcomes as well as the effects of various online tools designed to optimize collaboration.
Clark, R. C., and R. E. Mayer. 2016. e-Learning and the Science of Instruction: Proven Guidelines for Consumers and Designers of Multimedia, 4th edition. Hoboken, NJ: Wiley. See Chapter 9. Our book includes many chapters with evidence related to topics in this book as they apply to online learning. Chapter 9 focuses on personalization.
Ginns, P., A. J. Martin, and H. W. Marsh. 2013. "Designing Instructional Text in a Conversational Style: A Meta-Analysis." Educational Psychology Review, 25:4, 445-472. A good overview and synthesis of the personalization effect.
Mayer, R. E. 2014a. "Principles Based on Social Cues in Multimedia Learning: Personalization, Voice, Embodiment, and Image Principles." In R. E. Mayer, ed., The Cambridge Handbook of Multimedia Learning, 2nd edition. New York: Cambridge University Press. An excellent review chapter on evidence for various personalization effects. This handbook includes a number of chapters relevant to topics in this book. Worth the purchase!
Sung, Y. T., J. M. Yang, and H. Y. Lee. 2017. "The Effects of Mobile-Computer Supported Collaborative Learning: Meta-Analysis and Critical Synthesis." Review of Educational Research, 87:4, 768-805. A meta-analysis that included 48 articles that calculated effect sizes for mobile collaborative learning compared to non-collaborative learning, collaborative learning without a computer, and desktop collaborative learning. Breaks out a number of factors that can affect collaborative outcomes such as group size, structure, and so on.
Chapter 9
When Less Is More
Why Training Is Too Flabby
What Is Too Much?
When Stories Defeat Learning
Making Lessons Interesting
Does Music Improve Learning?
Do You Work Too Hard?
When Visuals Overload the Brain
Still Visuals Versus Animations
Applying Less Is More to Your Training
Recently, I received an announcement from a popular online second-language training application promising adjuncts similar to the one shown in Figure 9-1. No doubt these adjuncts are intended to spice up the lessons and motivate learners. You have likely seen (or even produced) various embellishments like these in an attempt to jazz up content. Are these adjuncts helpful or harmful?
Figure 9-1. Visual Added to Enliven Learning
This is a chapter about resisting the urge to add extraneous stories, music, detailed explanations, and unrelated visuals to lessons, because our brains learn best when they are less heavily loaded.
Why Training Is Too Flabby
A lot of over-inflated training comes from good intentions on the part of instructors as a result of:
• the pressure to "cover the content" in unrealistic timeframes
• an instructor-centered versus a learner-centered perspective
• the urge to make the lessons more interesting with gamification or animated features
• the charisma of the latest training fads and technologies.
One of the latest technical innovations as I write this book is immersive virtual reality, in which learners explore a 3-D environment with 360-degree visuals.
Often our clients have unrealistic expectations of what can be achieved in a given time period. In an effort to accommodate them, trainers pour out lots of content. Unfortunately, our brains don't soak up knowledge and skills like gravy on biscuits. We have learned that content "covered" does not necessarily translate into new and desirable behaviors on the job. Another challenge that trainers often face is a stack of dry content—company policies, compliance regulations, or procedural training on how to use the order-entry system. Trainers and instructional developers often look to engaging stories and visuals to spice up these lessons. Common wisdom suggests that the new generation raised in an age of intensive multimedia (including video games) has a greater predilection for pizzazz than those of the past. The desire to enliven training is fueled by evolving media functionality that makes immersive visual experiences, animations, and other visual and auditory effects easy to produce. In the end, however, these well-intended additions can have a negative effect on learning.
What Do You Think?
Imagine that you are developing a multimedia lesson on how lightning forms. You prepare several visuals and write some text that describes the process. As you view your draft lesson, it seems pretty dull; you want to make it more interesting. You do a little research on lightning and discover a number of interesting factoids. For example, you are surprised to find that lightning is the leading weather-related cause of death and injury in the United States. In fact, in anyone's lifetime there is one chance in 15,300 of being struck by lightning (weather.gov). Armed with several interesting anecdotes about lightning, you enliven your lesson by sprinkling visuals and text throughout. What is
the impact of anecdotes like these on student interest and on learning? Select the options you think are true:
☐ A. The lesson version with interesting text and visuals will lead to better learning because it is more engaging and motivating.
☐ B. Learners will rate the lesson with added text and visuals more interesting than the basic lesson.
☐ C. The basic lesson lacking the interesting factoids will lead to better learning because it is lean and stays on target.
☐ D. Learning will be the same because both lessons include the critical core content.
What Is Too Much?
There are several ways lessons become bloated. Instructors commonly add stories, anecdotes, or themes to enliven a dull presentation. Music is another common addition, especially in multimedia lessons but also in some classrooms. Many individuals prefer to study or work in the presence of background music. In addition, music can add some emotional color. From space odysseys to horror films, music sets the mood. Along with music come visuals, which today can easily be generated as animations, 3-D virtual worlds, or video. These visuals can appear so much more polished and engaging than simple line drawings. Finally, instructors often simply provide too much explanation—too many words—too much detail on a topic. "Too much" can mean stories, words, visuals, and music—all added with the best intentions but running the risk of depressing learning in the human brain, where less is often more.
When Stories Defeat Learning
A story is a narrative sequence of events, either true or fictitious. From war stories to jokes to anecdotes, stories are common fodder in most training programs. They are used to attract attention, generate interest, illustrate a point, dramatize a lesson, or simply add some spark to dull technical material. Instructors are often encouraged to add stories as effective learning devices. Because they are concrete and often have an emotional tenor, stories are especially memorable, and students like them. The concreteness and drama of stories make them a potent psychological device to be used with care. What evidence do we have on the effectiveness of stories in instruction?
Evidence on Stories
Mayer (2009) summarized six experiments in which he compared learning from a basic lesson on lightning that included frames similar to the one shown in Figure 9-2 with the same lesson plus several interesting stories and visuals about the effects of lightning.
Figure 9-2. One Frame From a Lesson on How Lightning Forms
[Diagram of the lightning return stroke with positive and negative charges, captioned: "5. Positively charged particles from the ground rush upward along the same path."] From Mayer and Moreno (1998).
In the spiced-up version, there is a brief description and visual showing what happens to an airplane struck by lightning. Later the lesson shows the uniform of a football player who had been struck by lightning. Several of these brief anecdotes were added throughout the basic lesson to create a more interesting version. What effect did these spiced-up versions have on learning? Not surprisingly, learners rated the lessons with text and visual anecdotes as more interesting than the basic versions. However, interest did not translate into learning. In five of six experiments, learning was dramatically depressed by the addition of interesting facts and visuals about lightning. The median effect size was 1.66—a high effect size signaling considerable damage inflicted by the spiced-up lessons. But here is what is important. Although all of the additions were about lightning, they were not relevant to the instructional goal of building an understanding of how lightning forms. Lehman and others (2007) repeated this experiment, comparing text-only basic lessons on lightning formation with and without the interesting details. For example, a sentence rated as high in interest but low in importance is "Golfers are prime targets for lightning strikes because they tend to stand in open grassy fields or to huddle under trees." The researchers measured reading time and learning from both versions. Consistent with previous research, they found that the comprehension scores were significantly lower among readers of the text with details added. In addition, the reading time of individual relevant sentences that followed the interesting but unimportant sentences was longer compared to reading time of the same sentences in the versions lacking these details. This slowdown suggests that the interesting sentences disrupted the coherence of the text, forcing readers to process the relevant sentences more slowly to make sense of the entire passage.
These experiments demonstrate the negative effects of adding interesting but unimportant details to expository text. Would the outcomes be similar in case-based lessons? Compared to expository text, which relies primarily on a logical sequencing of content, case-based lessons use a narrative form that tells a realistic story as the basis for learning. Abercrombie (2013) compared two case-based lessons on how to give effective written feedback to students. The participants were student teachers assigned to a narrative lesson with or without interesting details added. The test required the learners to write feedback for a draft student essay. The group that studied with the version lacking interesting details scored higher on this test. Similar results are reported by Jaeger and others (2018). In their research, learners read one of two passages on the science of earthquakes. The base text included 843 words, and the seductive text added 13 sentences, such as "When Mount St. Helens erupted, it created a landslide that carried mud and debris down the mountain at speeds of over 100 miles per hour for more than three miles." Those reading the seductive details version scored lower on a comprehension test. Golke and others (2019) compared learning and student awareness of their own comprehension among two groups studying a text about blood cells as well as a text about hormones. One version took a stick-to-the-facts approach with an expository text. A comparison version added a narrative, such as a person who shrank himself down to travel through the lungs and have an adventure through the bloodstream. The research team called this version an expository narrative. They found that the narrative versions led to less or equal learning compared to the expository version. Importantly, the narratives led many learners to overestimate their understanding and thus form inaccurate self-assessments of learning. In
chapter 16, we will look at evidence regarding the value of narratives in learning games. In short, we have strong evidence discouraging adding interesting but irrelevant details to instructional materials.
Stories and the Brain
The research by Lehman and others (2007) suggests that irrelevant but interesting details depress learning by distracting learners from the important sentences and by disrupting the overall meaning of the paragraph. Imagine in a lesson on lightning formation that you are reading about how clouds first form and then develop ice crystals. Suddenly, you are viewing some information about airplanes and lightning. Next, you continue to see how negatively charged particles form and fall to the bottom of the cloud. Just as you are connecting the dots, you are reading about someone struck by lightning. You get the idea. Just as you start to put two and two together, your processing is interrupted by a very distracting item that seduces your attention away from the core content. Over time, the cumulative effects of these distractions corrode learning. Mayer calls the depression of learning resulting from topic-related but goal-irrelevant details a coherence effect because coherence is disrupted. In addition, narratives may impede learner assessment of their understanding, leading to premature termination of study. The evidence and psychology of adding interesting anecdotes or narratives unrelated to the learning objective to your training are the basis for my first recommendation.
Less Is More Guideline 1
Avoid adding factoids, visuals, narratives, and anecdotes that may be related to the topic but are irrelevant to the learning goal.
Making Lessons Interesting
We have quite a bit of evidence that adding irrelevant information to your lesson can damage learning. Are there some ways you can add interest that do not depress learning? Um and others (2012) tested two approaches to what they call "emotional design" in instruction. One approach involved inducing positive feelings prior to studying a lesson by asking students to read positive statements. Half the students read neutral statements, such as "Apples are harvested in the fall," while the other half read positive statements, such as "It's great to be alive!" Attitude measures showed that the positive statements did induce more positive feelings among learners. A second approach involved use of color and shape design elements in a multimedia lesson on how immunization works. The emotional design version used warm colors and round shapes with eyes. You can compare the two versions in Figure 9-3. The researchers found that both inducing positive feelings and the use of emotional design elements improved learning. Since I found only one study that focused on emotional design, it is premature to suggest any guidelines. We will look for additional research on techniques that can make lessons more interesting and at the same time avoid distracting learners from the main lesson theme.
Figure 9-3. Neutral Versus Positive Emotional Design Elements
NOTE: The version on the right uses a pale yellow background with rounded shapes in various colors. From Um et al. (2012).
Can Stories Help?
There are situations in which stories have beneficial effects. Unfortunately, we lack research evidence to answer questions such as:
• What kinds of stories are most effective?
• Are some stories more appropriate for some kinds of learning goals?
• Does the number and placement of anecdotes in a lesson make a difference?
For example, does a dramatic story about an injury or death told at the start of a safety lesson increase attention, learning, and transfer of safety practices? For now, I suggest that you keep stories or narratives that are relevant to your learning goal, discard those that are tangential, and avoid placing them in the middle of an explanation where they might disrupt the mental processing needed for understanding.
Does Music Improve Learning?
Do you like music playing in the background while you are working or studying? The benefits of music have been promoted in popular press articles with an emphasis on classical music. What evidence do we have on the effects of music on learning?
Evidence on Music and Learning
Mayer (2009) reports two experimental lessons—one involving lightning formation and one on how brakes work. The basic versions presented the process stages using narration and animations. The enhanced versions added sounds—both music and environmental sounds appropriate to the lesson topic. The sounds and music were background only and did not obscure the narration. Did music enhance or depress learning? As you can see, the auditory additions depressed learning (Figure 9-4).
Figure 9-4. The Average Gain Was 105 Percent in Lessons in Which Extraneous Sounds and Music Were Omitted
[Bar chart of percent correct on the transfer test for the no-sounds-or-music version versus the lesson with sounds and music added.] Based on data from Moreno and Mayer (2000).
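To clarify how a gain figure like this is typically derived (this is my own illustration of the arithmetic, not a calculation reported in the study), a relative gain compares the mean transfer scores of the two groups:

\text{relative gain} = \frac{M_{\text{no sounds or music}} - M_{\text{sounds and music added}}}{M_{\text{sounds and music added}}} \times 100\%

On that reading, an average gain of 105 percent means that learners who studied the lean versions scored roughly twice as high on the transfer test as learners who studied the versions with sounds and music added.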
Remember that working memory is limited in capacity and has separate centers for visual and auditory information. When a complex visual is explained by audio narration plus sounds and music, the auditory center is overloaded and capacity for learning is depressed. I have not found a wealth of evidence on the effects of music on learning—in fact, none in the past few years. However, based on the limited evidence available and the psychology of learning, I offer the following recommendation.
Less Is More Guideline 2
Minimize extra audio in the form of music when the goal is to help learners build understanding.
Do You Work Too Hard?
One Friday after an intense week of training, I was really tired. On reflection, it occurred to me that I had actually been working too hard. After all, I knew the content. I realized that the workshop participants needed to be doing much more of the work. I had been assuming a "water-pitcher" role in training, figuring that if I gave many detailed explanations presented with a lot of energy and showed a deck of good slides, learning would naturally occur. From then on, I started to reverse the workload. I went from two-thirds instructor work and one-third student work to just the opposite. I cut back on explanations. I presented the bare bones and followed up with an exercise. When and where there was confusion on the exercise, I responded with more explanations and examples. When students experienced problems or challenges in the exercises, they were more open to receiving additional explanations.
Evidence About Explanations
Using the lightning content, Mayer compared the learning effects of a very concise lesson version with a version containing a much more detailed explanation. Take a look again at the lean version in Figure 9-2 and compare it to the inflated version in Figure 9-5.
Figure 9-5. One Frame From an Expanded Lesson on How Lightning Forms With Additional Explanatory Text
[The same return-stroke diagram as Figure 9-2, captioned "5. Positively charged particles from the ground rush upward along the same path," with this additional explanatory text:] "As the stepped leader nears the ground, it induces an opposite charge so positively charged particles from the ground rush upward along the same path. This upward motion of the current is the return stroke and it reaches the cloud in about 70 micro-seconds. The return stroke produces the bright light that people notice in a flash of lightning, but the current moves so quickly that its upward motion cannot be perceived. The lightning flash usually consists of an electrical potential of hundreds of millions of volts. The air along the lightning channel is heated briefly to a very high temperature. Such intense heating causes the air to expand explosively, producing a sound wave we call thunder." From Mayer et al. (1996).
As you can see, the lean version consisted of a simple visual with just a few words as a caption. In total, the lean version contained about 80 words. In contrast, the detailed version added over 500 words to the captioned figures. In three experiments, learning was better from the more concise versions. In a different experiment that involved lessons on wave formation, Mayer and Jackson (2005) compared versions with and without technical detail. The basic lesson explained the general process of how waves form. The detailed version added technical details in the form of mathematical equations. They found better learning of the overall process of wave formation from the base version. They suggest that when first learning a new process, learners benefit from an explanation that helps them build a qualitative understanding. Then, if relevant to the instructional goals, more details can be added in subsequent lessons.
Chunk Content
When your lesson content can't be further reduced, break it into small chunks, placing less content on each slide or screen. In asynchronous multimedia, always include a continue button allowing the learner to proceed at their own pace. Mayer and Chandler (2001) divided the lightning lesson into 16 segments with a continue button on each screen. They found better learning with the version that limited the amount of content presented at one time and allowed learners to control the rate at which they accessed the information. The limits of working memory suggest that you provide only the amount of information needed to communicate the core content, leaving capacity for processing that content. With lean explanations, memory capacity is available to read the words, look at the visual, integrate the meaning of the words and visual, and connect that meaning to the unfolding process. In other words, there is capacity to connect the dots. Based on three experiments, Mayer (2017) reports a large effect size of 1.0 for versions that segmented content into smaller chunks and allowed learners the opportunity to access them at their own pace. Based on the evidence and the psychology of explanations, I suggest the following.
Less Is More Guideline 3
Keep explanations concise; use just enough words to present content. Segment content into smaller chunks, which can be accessed by the learner at their preferred pace.
When Visuals Overload the Brain
From photographs to immersive 3-D renderings, from video to animations, modern technology makes production of elaborate visuals quite easy. And the ubiquitous high-end visuals in popular advertising, movies, and video games encourage us to incorporate them into training, especially multimedia training. What is the added value of complex visuals?
Simple Versus Complex Still Visuals
Chapter 5 reviewed research comparing the effectiveness of line drawings versus a more anatomically detailed and accurate diagram of blood circulation through the heart. To recap, the research started with a pretest that asked learners to draw a visual and write an explanation of blood circulation. The mix of ideas—many inaccurate or incomplete—was categorized. For example, some of the drawings showed no understanding other than that blood comes out of the heart. Others captured the idea that blood not only comes out of the heart but also returns, although how it does so is sketchy. The experimenter then prepared three versions of a lesson on how blood circulates. One was a text explanation; the other two versions added visuals similar to those shown (Figure 9-6).
Figure 9-6. Two Graphics From Lessons on Blood Circulation Through the Heart
From Butcher (2006).
After reviewing their assigned lesson, learners once again provided a sketch and explanation of circulation. These were compared to the pretest responses. Which lesson version resulted in the best improvement? You can check the experiment results in Figure 9-7.
Figure 9-7. Proportion Improvement Pre- to Post-Test in Understanding of Circulation
Based on data from Butcher (2006).
We learned in chapter 5 that relevant visuals enhance learning through dual channel messages to the brain—one through words and a second through pictures. Therefore, both lesson versions with visuals of the heart resulted in better learning than the text-only version. However, the simple line drawing was more effective than the more accurate depiction. For the purpose of learning how something works, the details in the realistic drawing did not augment understanding and in fact may have led to confusion. Had the goal been to learn anatomical features of the heart, the more accurate drawing might have been more effective.
Immersive Virtual Reality
One of the hottest new technologies is immersive virtual reality (IVR), in which learners wearing a headset or glasses engage with a 3-D display. Is IVR more effective than more traditional media such as a PowerPoint presentation? Makransky and others (2019) compared learning and mental load from a science simulation delivered via IVR or a desktop display. Learning was better among those working through the lab simulation on the desktop display. Electroencephalography measures indicated higher mental load in the IVR simulation. The research team noted, "Many companies and public institutions are deciding to adapt educational and training material to immersive VR even though there is a lack of theoretical or scientific evidence to guide this decision and adaption."
Still Visuals Versus Animations
In chapter 6 we reviewed research showing that a series of still visuals can be more effective for learning a process than an animated presentation. In contrast, animations are better for learning procedures as well as for comprehension of principles that benefit from an explicit illustration of motion. We also saw one experiment that compared a slide show to an immersive virtual reality lesson on blood flow and parts of a cell. While students liked the immersive version better, learning was better from the slide presentation. An animation conveys a great deal of complex visual information and, in the experiments described previously, ran continuously. In contrast, the still graphics were simpler, and the learner could control the rate at which they viewed them. Animated visuals can present a flood of information that quickly overloads memory capacity.
Ironically, what we would commonly classify as an old-fashioned medium (still pictures) might engage the brain more effectively than a modern immersive lesson. It seems that the benefits of animations will depend on the desired outcome. The current evidence on the use of animations for different outcome goals suggests the following.
Less Is More Guideline 4
Except for procedural learning or goals that benefit from an explicit illustration of motion, animations are often less effective than a series of still visuals. Use a series of still visuals to depict processes and to offer performance support for tasks that involve spatial assembly steps.
In summary, Mayer (2017) reports that in 17 experiments in which extraneous words and pictures were eliminated, an overall large effect size of 1.0 was seen in the leaner lesson versions. When in doubt, leave it out.
The Bottom Line
We started our discussion with a comparison of two lesson versions on lightning formation. One lesson took a "just the facts" approach and was compared to a lesson that spiced up the content with interesting anecdotes sprinkled throughout. Now that we have reviewed the evidence, let's revisit these introductory statements:
A. The lesson version with interesting text and visuals will lead to better learning because it is more engaging and motivating. False. Although students rated the lesson as more engaging, it actually led to less learning because the added material disrupted processing of the core content.
B. Learners will find the lesson with added text and visuals more interesting than the basic lesson. True. However, learner ratings are often not correlated with learning.
C. The basic lesson will lead to better learning because it is lean and stays on target. True.
D. Learning will be the same because both lessons include the critical core content. False.
This is not to suggest that you never add interesting anecdotes or narratives. Rather, use care in selection and placement of stories and visuals used to add interest. A worthwhile goal is to find narratives and visuals that add interest and promote learning.
Applying Less Is More to Your Training
When it comes to explanations, visuals, stories, and music, in many cases, learning is better when you offer a leaner lesson with concise explanations and simple but relevant visuals. At the same time, avoid anecdotes that don't directly contribute to the learning goal as well as extraneous visuals, such as exploding treasure chests. We are starting to see research on ways to make lessons interesting other than adding irrelevant stories. For example, one study by Um and others (2012) found higher interest and better learning from reading positive sentences prior to the lesson and viewing a lesson that used graphics with warm colors and rounded shapes. Given greater emphasis on both learning and motivation in recent research studies, I predict
that we will continue to grow a research base that focuses on ways to make lessons both engaging and effective. Use this checklist as you start to plan your lessons and when reviewing a draft lesson. Does my lesson or presentation:
☐ Focus on a few topics that can be effectively taught in the allotted time frame?
☐ Include concise explanations?
☐ Segment content into manageable chunks?
☐ Allow learners the opportunity to pace themselves through self-study lessons?
☐ Use simpler visuals to illustrate content?
☐ Avoid anecdotes or narrative adjuncts that are not relevant to the lesson goal?
☐ Avoid blanket adoption of high-end technology, such as immersive video?
Coming Next
The next section will review two of the most powerful tools in your instructor kit: examples and practice. There is a great deal of research guiding the what, when, and how of these important methods.
FOR MORE INFORMATION
Clark, R. C., and R. E. Mayer. 2016. e-Learning and the Science of Instruction, 4th edition. Hoboken, NJ: Wiley. See Chapter 8. Our book focuses on a number of topics summarized in this book as they apply to e-learning.
Mayer, R. E., and L. Fiorella. 2014. "Principles for Reducing Extraneous Processing in Multimedia Learning: Coherence, Signaling, Redundancy, Spatial Contiguity, and Temporal Contiguity Principles." In R. E. Mayer,
ed., The Cambridge Handbook of Multimedia Learning, 2nd edition. New York: Cambridge University Press. This handbook is a good resource for many of the topics discussed in this book. This handbook chapter includes a review of the coherence principle, which is the theme of this chapter.
Mayer, R. E. 2017. "Instructions Based on Visualizations." In R. E. Mayer and P. A. Alexander, eds., Handbook of Research on Learning and Instruction, 2nd edition. New York: Routledge, 483-501. This handbook chapter focuses on evidence-based graphics, including the coherence principle as discussed in this chapter.
Part 3
Evidence-Based Use of Examples and Practice
Chapter 10
Accelerate Expertise With Examples
The Power of Examples
What Are Worked Examples?
Examples and the Brain
Types of Worked Examples
Examples for Routine Tasks
Examples for Strategic Tasks
High- Versus Low-Variable Examples
Engage Learners in Your Examples
What Are Self-Explanations?
Formatting Examples
Studying Incorrect Examples
Prework: Examples Versus Invention Assignments
Applying Examples to Your Training
When learning a new skill, do you prefer to review an explanation or study an example? Last year I wanted to make aebleskiver—Danish ball-shaped pancakes. The procedure is a little tricky, involving wooden skewers to rotate the batter as it cooks in a special pan. Rather than waste a lot of batter in a trial-and-error approach, I reviewed online video examples, which allowed me to borrow knowledge from others. Learners often favor examples over explanations. LeFevre and Dixon (1986) found that when learners could study either a textual description or an example to complete problem assignments, they selected the examples as their preferred resource. This chapter reviews evidence regarding the what, when, and how of leveraging examples to accelerate learning. We will see how examples not only improve learning outcomes but also increase learning efficiency. To maximize the learning potential of examples, you need to make them engaging. In this chapter you will see how.
The Power of Examples
Whether your goal is to promote awareness or to build skills, examples are useful. However, when you are focusing on skills, examples are an essential tool in your instructional kit. You might think that you do use examples, but are you exploiting their full potential to accelerate learning? Are you providing examples to the learners who most need them? Recent research has revealed techniques that maximize the instructional benefits of examples. With these techniques, you will extend the potential of your examples by applying evidence on the what, when, where, why, and for whom of examples.
What Do You Think?
Which lesson version do you think would be most effective (Figure 10-1)?
Figure 10-1. Three Lesson Versions With Different Combinations of Examples and Practice
Lesson Version A: Explanation, Example 1, Example 2, Example 3, Example 4, Example 5, Example 6
Lesson Version B: Explanation, Example 1, Practice 1, Example 2, Practice 2, Example 3, Practice 3
Lesson Version C: Explanation, Practice 1, Practice 2, Practice 3, Practice 4, Practice 5, Practice 6
As you can see, the learner reviews an equal number of problems in each lesson. In Lesson Version A, all of the problems are presented in the form of examples; in Lesson Version B, half the problems are in the form of examples and half in the form of practice; in Lesson Version C, all of the problems are solved by the learner. Which of the following statements about these lesson versions do you think are true?
☐ Lesson Version A will be the most efficient because the learner does not spend time completing practice exercises.
☐ Lesson Version B offers the best balance of examples and practice problems.
☐ Lesson Version C will be the most effective because practice makes perfect.
What Are Worked Examples?
John Sweller, one of the founding researchers of cognitive load theory, was the first to report the benefits of examples on learning in
terms of cognitive load. He took a traditional algebra lesson containing one or two examples followed by many problem assignments and converted several of the practice exercises into step-by-step examples similar to the example shown (Figure 10-2). Instructional psychologists call such demonstrations worked examples. A worked example is a demonstration that illustrates a specific sequence of steps to complete a task or solve a problem. While some research focuses on worked examples presented in text, more recent research has studied worked examples that use animated illustrations of steps explained with audio narration.
Figure 10-2. A Worked Example of an Algebra Problem
Sweller and Cooper (1985) compared learning from traditional lessons containing all practice to their example-practice pairs version. You won’t be surprised to learn that the traditional lesson with lots of practice problems took longer to complete. In fact, the versions that incorporated more practice took six times longer than the versions that alternated examples and practice. But did the time and effort invested in solving practice problems pay off in better learning? Surprisingly, the researchers found that learners studying the all-practice lesson made twice as many errors on a test as learners studying the example-practice version! A combination of examples and practice led to faster and better learning than working lots of practice problems.
More recent research suggests that for relatively short lessons, examples can completely replace practice. In addition, as chapter 2 showed, motivation is now being measured in studies of worked examples. Van Harsel and others (2019) compared the learning and motivation of first-year technical students as well as teachers in training who studied either a series of four examples or four practice problems. The content involved mathematical applications, and the examples were presented via step-by-step video modeling. The researchers also tested example-problem pairs as described earlier. Post-test scores were best among those who studied examples only compared to those solving problems or those who reviewed example-problem pairs. Furthermore, the all-examples group reported higher motivation and less invested effort. The research team concluded, “When short training phases are used, studying examples (only) is more preferable than problem solving only for learning.” The many experiments that have shown the benefits of worked examples support my first recommendation.
Examples Guideline 1
Save time and improve learning by replacing many practice exercises with worked examples.
Examples and the Brain
How do examples work? In chapter 3 we learned that our working memory, which is our brain's active processor, has a very limited capacity. Do you remember working all those homework problems in your math class? Solving lots of problems is hard work! When working memory capacity is tied up solving lots of problems, there is little resource left over for learning. However, imagine that instead of working a problem, you are reviewing an example. Your working memory is free to carefully study the example and learn from it. In fact, the example serves as a model from which the student can build their own mental model. In other words, the example is a vehicle for borrowing knowledge acquired by others.
Types of Worked Examples
Let's look at three types of worked examples: routine task examples, strategic task examples, and problem-solving skills examples. A routine task is a procedure that is carried out more or less the same way each time it is performed. For example, I frequently refer to YouTube to look up examples of how to repair equipment or prepare a recipe. A strategic task is one based on principles that will require the learner to adapt those principles to different work contexts. Customer service and sales are two common examples. To illustrate interpersonal skills, such as sales, worked examples typically take the form of a video or animated graphic. Third, an underutilized type of worked example is one designed to illustrate mental problem-solving skills, such as troubleshooting or business management. In Figure 10-3 you can review a sales example that shows the learner how the expert sales representative responds to the client as well as the mental processes the expert uses to derive a solution. This is a two-level worked example. Level 1 illustrates how the representative responds to a client's question. Level 2 uses a thought bubble to illustrate the representative's thought processes. This type of example gives access to both actions and mental processes behind task completion.
Figure 10-3. The Worked Example Illustrates the Task As Well As Expert Mental Processes
Examples for Routine Tasks
For routine tasks, such as fulfilling a customer order or logging into an application, your examples should mirror the actual work environment as closely as possible. This means including the actual tools, application screens, forms, customer requests, and steps that would be used on the job. Your example should demonstrate from the worker's perspective how to apply steps using the tools, screens, or forms involved in the task.
Examples Guideline 2
For routine tasks, create demonstrations that mirror the context of the workplace.
Examples for Strategic Tasks
Most of the initial research on worked examples used relatively high-structure examples and problems with clear right or wrong answers, such as algebra problems. Only recently have researchers evaluated the benefits of examples for more strategic tasks that typically do not have a single correct answer and that require critical thinking that goes beyond routine problem solving. Nievelstein and colleagues (2013) evaluated the benefits of worked examples in lessons designed to teach reasoning about legal cases. In legal case reasoning, students read cases, formulate questions, search information sources for applicable laws, verify whether rules can be applied to the case, and provide argumentation to the questions. The research team compared performance following study of four lesson versions that included:
1. worked examples and explanations that summarized the general approach to researching and responding to cases
2. worked examples without an explanation summary
3. case analysis assignments with explanation (no worked example)
4. case analysis assignments without explanation (no worked example).
The test consisted of a case assignment requiring learners to research and write arguments. Consistent with previous findings, the research team found better performance among learners who had access to worked examples (groups 1 and 2) rather than those who were assigned case problems without examples (groups 3 and 4). The explanations had no effect. It is encouraging that the benefits of worked examples apply not only to routine tasks but also to less structured tasks that require critical thinking. Since the initial research with worked examples in math, more recent research studies have shown their effectiveness in learning strategic tasks that require critical thinking, such as the legal reasoning tasks.
High- Versus Low-Variable Examples
Unlike routine tasks, strategic tasks are based on guidelines that must be adapted to a variety of work contexts. Take sales, for example. Basic guidelines on effective sales techniques must be adapted to various situations, depending on the product, the client, and the context. For most effective learning of strategic tasks, use two or more worked examples that reflect the same guidelines but vary regarding the context. Worked examples in a sales course would show how the salesperson adapts basic guidelines based on assessment of different customer needs and diverse product specifications. We have evidence for the benefits of varied context worked examples. Quilici and Mayer (1996) tested the learning effects of worked examples for applying statistical tests using a t-test, correlation, or chi-square. For each of these, they created three examples. Version 1 used the same context for all three examples. For example, the three t-test problems used data regarding experience and typing speed while the three correlation examples used data regarding temperature and precipitation. In Version 2, the examples were varied. For example, the t-test problems used data regarding experience and typing speed for one example and data regarding temperature and precipitation for a second example.
As you can see in Figure 10-4, the lesson versions that used different context examples led to better learning.
Figure 10-4. Varied Context Worked Examples Resulted in Better Learning (test scores for varied versus same context examples; SD = significantly different). Data based on Experiment 3 in Quilici and Mayer (1996).
A recent series of experiments reported by Likourezos and others (2019) used logarithmic equation examples and problems, comparing the effects of high- versus low-variability examples on experienced versus novice learners. First, the students were provided a classroom lesson with explicit instruction on logarithmic equations. Participants were divided into high and low prior knowledge groups based on the pretest. The four test groups were:
• high-variable examples given to high prior knowledge learners
• high-variable examples given to low prior knowledge learners
• low-variable examples given to high prior knowledge learners
• low-variable examples given to low prior knowledge learners.
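To make the task concrete, here is a hypothetical worked example of the kind of logarithmic equation such a lesson might use (an illustration only, not an item from the Likourezos study):
\[
\begin{aligned}
\log_2(x) + \log_2(x-2) &= 3 && \text{the problem as given}\\
\log_2\!\big(x(x-2)\big) &= 3 && \text{apply the product rule for logarithms}\\
x(x-2) &= 2^3 = 8 && \text{rewrite in exponential form}\\
x^2 - 2x - 8 &= 0 && \text{collect terms}\\
(x-4)(x+2) &= 0 && \text{factor}\\
x &= 4 && \text{reject } x = -2 \text{, which makes the logarithm undefined}
\end{aligned}
\]
In a low-variability set, a companion example would change only the numbers (for instance, \(\log_3(x) + \log_3(x-6) = 3\)); in a high-variability set, companion examples would also change the structure of the problem.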
It’s not surprising that overall the higher prior knowledge groups scored better on the post-test than the low prior knowledge groups. Take a look at Figure 10-5 to compare learning of low and high prior knowledge groups from high and low variable examples.
Figure 10-5. Learning Outcomes From High and Low Variable Examples. Based on Experiment 2 in Likourezos et al. (2019).
The research team recommended that for a new skill, you begin with low-variability examples. As learners gain more expertise, transition to examples with higher variability.
Examples Guideline 3
Use varied context worked examples as learners gain expertise to accelerate learning of strategic tasks.
Engage Learners in Your Examples
We've seen that examples are one of your most powerful instructional tools. But there is a major barrier to learning from examples. Many learners either skip examples completely or don't process them very deeply. Chi (2000) showed that higher-scoring physics students studied examples carefully, investing the effort to explain the examples to themselves. Less successful students, in contrast, tended to ignore examples or merely repeat the steps shown in the example. Which of the two example versions shown in Figure 10-6 do you think will lead to best learning?
Figure 10-6. Two Worked Examples; Version B Adds a Question
One of our basic instructional principles is promotion of psychological engagement that involves transformation of the content. You can boost the instructional potential of your examples by encouraging your learners to study them carefully. How can you encourage learners to review examples carefully? The next section reviews one proven technique called self-explanations. (By the way, in Figure 10-6, Example Version B will be more effective because it includes a self-explanation question.)
What Are Self-Explanations?
A self-explanation is an activity on the part of the learner that results in a deliberate and deep review of a worked example. A productive self-explanation might connect one or more steps of an example to an underlying principle. Alternatively, a learner may compare two examples to identify similarities and differences. According to Renkl (2017), “self-explaining is regarded as a crucial factor in example-based instruction.” While some learners may spontaneously review and explain examples to themselves, many do not. To increase the likelihood of self-explanations, use one of the following techniques.
Add Questions to Your Examples
Take a look at the self-explanation question example in Figure 10-7. The goal of the management lesson is to help regional managers identify root causes of problems in local stores. In this example, the learner must distinguish among a root cause, a symptom, an assumption, and a hypothesis.
Figure 10-7. A Self-Explanation Question From a Management Training Worked Example
The regional manager (RM) has noticed a decline in commercial sales over the past six months. She meets with the store manager (SM) to identify a solution.
RM: Why are commercial sales declining?
SM: We aren't seeing high penetration and growth in commercial sales. I think it's the economy.
RM: What are we doing to drive penetration in commercial accounts?
SM: We've had some appreciation events . . .
RM: What feedback did we get at those events?
The SM response reflects:
A. A root cause
B. An assumption
C. A symptom
D. A hypothesis
Schworm and Renkl (2007) evaluated the benefits of worked examples to illustrate principles of effective argumentation. Argumentation requires construction of reasoned positions on a question—similar to a good editorial or debate. It involves stating a position, providing evidence for the position, considering counterarguments, and providing rebuttals. Students were provided an explanation of argumentation followed by video examples. The video showing the dialogue was first played completely and then replayed in short sequences. In the self-explanation lesson version, when the video paused, a bubble appeared with a question about the techniques used in the argument. The researchers found that adding the questions to the video examples was essential to learning and recommended using questions in conjunction with examples, especially for strategic tasks such as argumentation, sales, and management training. There are many forms of questions that could help your learners extract maximum value from examples. Try for questions that most learners will answer correctly. At the same time, generate questions that stimulate reflection on the principles or rules behind your example. It is important that learners get feedback on their responses to these questions. An incorrect self-explanation is worse than no self-explanation. In e-learning, use an objective question format such as multiple choice, as shown in the examples, because specific feedback can be given more effectively. In classroom settings, instructor and participant discussion can provide feedback to open-ended as well as objective questions.
Promote Comparisons Among Examples
We've seen that strategic tasks benefit from varied worked examples that reflect the same guidelines applied in diverse contexts. The first sales example may show how staff respond to customer objections to product Y. The second example may illustrate another response to an objection to a different product. A second way to promote engagement in examples is to encourage learners to compare these examples in order to identify the commonalities. In other words, require learners to focus on the underlying guidelines by contrasting two or more examples. One of my favorite research reports evaluated learning negotiation skills by review and comparison of examples (Gentner et al. 2003). I liked this study because learning of negotiation techniques was measured by a role-play test. In the experiment, 158 undergraduate students studied one of four different lessons—three with examples and one without (Figure 10-8). In one version (Separate), learners were asked to analyze each negotiation example presented individually. In a second version (Comparison), learners were asked to review two examples presented together. In a third version, students completed an exercise that required them to compare examples displayed together on a page (Active Comparison). Which lesson do you think resulted in best learning? Check out the results summary (Figure 10-9).
Figure 10-8. Alternative Placement of Three Worked Examples for Negotiation Skills
Figure 10-9. Learning Negotiation Skills From Different Example Layouts (percent of pairs transferring negotiation skills across four conditions: no examples, separate examples, example comparison, and guided example comparison)
The active comparison lesson led to a higher proportion of role-play pairs using the trained techniques. The research team concluded that “comparing two instances of a to-be-learned principle is a powerful means of promoting rapid learning, even for novices.” In summary, learners who are encouraged to process examples deeply learn more than learners left to their own devices when reviewing examples—especially for learning strategic skills, such as negotiations or sales. The evidence to date supports adding questions to examples or assigning guided example comparisons.
Examples Guideline 4
Encourage engagement with examples by asking questions or assigning comparisons of worked examples.
Formatting Examples
This section summarizes guidelines for formatting your examples: integrating words and visuals, segmenting example steps, and displaying examples together to support comparison.
Help Learners Integrate Words and Visuals
Chapter 7 examined the benefits of reducing mental load by either explaining a graphic with audio (modality principle) or by placing printed words close to the graphic (contiguity principle). These principles also apply to worked examples. In many situations your example will include a graphic such as a diagram, equipment illustration, or people interacting. Provide the words for your example in audio if your technology supports it. Alternatively, place words near the relevant parts of the illustration. In Figure 10-10 you can see a worked example of how to read a chart with text embedded into the chart. Sweller and others (1990) found that when diagrams and words were separated, the benefits of worked examples disappeared. That is because the additional mental load imposed by split attention negated the benefits of worked examples.
Figure 10-10. A Worked Example With Text Embedded Into the Chart
Adapted from Leahy, Chandler, and Sweller (2003).
Segment Example Steps
Chapter 9 reviewed evidence showing the benefits of breaking content into chunks and allowing learners control over accessing those chunks with a continue button in an e-learning lesson. A similar principle can be applied to worked examples. Spanjers and others (2011) compared the benefits of four worked examples illustrating probability calculations displayed in two formats: a segmented version in which lines were inserted to visually isolate the six different sections in the problem solution and an unsegmented version that displayed all of the steps together without any separating lines. They found that the learning in the segmented and unsegmented versions was equivalent, although learners rated the segmented version as lower in mental effort. Thus the segmented versions were more efficient. In the same study, one group was asked to segment the worked examples themselves. The actively segmenting group rated their mental load as highest and did not learn more than the other groups. Here we see another example where learner physical engagement does not always foster learning. Based on this evidence, I recommend segmenting important elements of a worked example by using line spacing in written materials, overlays in slides and screens, or pauses in videos.
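To illustrate what segmentation might look like, here is a hypothetical probability worked example (not one of the Spanjers items) with its steps visually separated; an unsegmented version would run the same content together in a single block:
\[
\begin{aligned}
&\text{Step 1. Identify the events: two rolls of a fair die; each six has probability } \tfrac{1}{6}.\\[4pt]
&\text{Step 2. Confirm the rolls are independent, so the multiplication rule applies.}\\[4pt]
&\text{Step 3. Multiply: } P(\text{two sixes}) = \tfrac{1}{6} \times \tfrac{1}{6} = \tfrac{1}{36} \approx 0.03.
\end{aligned}
\]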
Display Examples Together to Promote Comparisons
Chapter 7 discussed the benefits of displaying visuals and explanatory text together, allowing the learner easy integration to make sense of the combined visual and text. This technique is an application of the contiguity principle. Contiguity also applies to situations in which you want learners to compare two or more examples illustrating strategic tasks. Gentner and others (2003) found better learning when negotiation examples were displayed together, using a diagrammatic summary of the first solution as learners compared the second example to the first, as shown in Figure 10-8. Those who studied each example separately learned much less. Comparison of examples will be easier when the learner can review them together. For example, you could put each example on the same sheet of paper or on facing pages. Online you could display one example on one screen and a second example on a second screen next to a summary of the first example. Alternatively, you could use a split screen in video or online in which the learner can independently play each example.
Examples Guideline 5
Help learners process worked examples by using audio or integrating text into graphics, displaying examples together to promote comparisons, and segmenting the example into steps.
Studying Incorrect Examples
Have you ever learned more from a mistake than from a correct response? Errors are often memorable, especially with good feedback and opportunities to correct and retry. Applying this idea to examples, consider using worked examples that show common errors. Coping models are one approach to incorrect examples. A coping model shows a learner attempting a new skill but making errors as they encounter difficulties. Gradually the errors are corrected as the example evolves from imperfect to good. For example, in a supervisory class a model might write a performance review. The instructor would comment on the review, followed by a model edit. Learners would see the errors made and corrected by the model through a couple of iterations. This technique has been shown, in some cases, to improve learning better than self-explanation questions. Renkl (2017) reports that “including errors in examples can foster learning.”
Prework: Examples Versus Invention Assignments
Would it be better to precede a lesson with a problem for learners to solve (an invention task) or with a worked example of an expert solving the same problem? Advocates for invention claim that preceding a lesson with an open problem for learners to attempt to solve will increase curiosity about the content; increase learner awareness of their knowledge gaps, which generates a need for an explanation; and result in better learning. Those opposing invention suggest that trying to solve a problem first will impose an additional mental burden that in the end will lead to less learning. Two independent reports tested these ideas by comparing an introduction that assigned learners either a problem to solve (invention) or a worked example of the same problem. These introductions were followed by a lesson explaining the guidelines and concepts (Kant et al. 2017; Glogger-Frey et al. 2015). In one experiment (Glogger-Frey), student teachers were taught how to evaluate student reports. Those assigned to an invention activity were asked to rate four different sample reports and then invent their own grading criteria. Those in the example group viewed a model teacher rate the same sample reports and derive correct criteria. Both groups then received a lesson on grading reports followed by a test on grading different reports. Which group do you think learned more effectively? The research team found that those in the invention group did report more knowledge gaps and had greater curiosity about the topic than the example group. In spite of this, the example group outperformed the invention group on the application test. The research team concluded that a worked example is more effective as prework than working on an open (invention) problem, given equal time on task. In chapter 13, we will review more evidence on sequencing of explanations versus problem solving.
Examples Guideline 6
Use a worked example as a pre-lesson assignment to boost learning from lesson content.
The Bottom Line
This chapter began by comparing three sequences of examples and problem assignments: Version A uses all examples, Version B alternates examples with practice, and Version C uses all practice. Based on evidence to date, at least in brief learning episodes, Version A (all examples) would lead to the best learning and higher motivation. Research shows that lessons that include worked examples along with engagement techniques, such as self-explanation questions or comparison questions, lead to more effective and efficient learning. For training of strategic skills, such as customer service or troubleshooting, consider several examples that vary in context but reflect similar guidelines. Also consider using a coping model in which an avatar attempts a task, encounters difficulties, and with corrective feedback evolves to an improved outcome. Format examples in ways that reduce mental load by using audio, integrating text into graphics, and segmenting steps. Finally, consider assigning a worked example as lesson prework in which learners view a model solve the problem.
Applying Examples to Your Training
The checklists are divided into hints for examples that best illustrate routine tasks, strategic tasks, and all tasks.
Examples for Routine (Procedural) Tasks
☐ Provide demonstrations that incorporate the tools and techniques of the job.
☐ Use animation for procedural tasks involving motion.
☐ If using animation, incorporate controls and cueing to help learners manage cognitive load.
☐ Explain an animated demonstration with brief audio narration.
☐ Illustrate the procedure from the learner's perspective—for example, first-person view.
Examples for Strategic (Problem-Solving) Tasks
☐ Provide two or more varied context worked examples.
☐ Promote engagement in strategic worked examples by assigning a guided comparison task.
☐ Ensure that learners can view two or more worked examples together to aid in comparison.
☐ Incorporate coping models that reflect common errors and illustrate improvement revisions.
☐ Prior to a lesson, show a model worked example of a problem solution that applies the lesson concepts and guidelines.
For All Examples
☐ Encourage engagement in your examples through adjunct questions or guided comparisons with feedback.
☐ For examples with comparison assignments, ensure that all examples being compared can be seen together.
☐ Segment complex examples into small pieces.
☐ For examples that include a diagram, ensure that both the diagram and text are visible together (contiguity principle).
☐ For examples that include a diagram, use succinct audio to present related words (modality principle).
Coming Next
Typically, after reviewing examples, learners will apply what they have learned to practice exercises. What makes up a good practice exercise? How much practice do you need? Where should you place practice? These are some questions for the next chapter.
FOR MORE INFORMATION
Clark, R. C., and R. E. Mayer. 2016. e-Learning and the Science of Instruction, 4th edition. Hoboken, NJ: Wiley. See Chapter 12. This chapter in our book focuses on evidence and guidelines for use of examples in online learning.
Nievelstein, F. T., T. van Gog, G. V. Dijck, and H. P. A. Boshuizen. 2013. “The Worked Example and Expertise Reversal Effect in Less Structured Tasks: Learning to Reason About Legal Cases.” Contemporary Educational Psychology, 38, 118-125. This is an interesting study illustrating the benefits of worked examples for strategic tasks. Reasoning about legal cases is analogous to many critical thinking tasks in the workplace.
Renkl, A. 2017. “Instruction Based on Examples.” In R. E. Mayer and P. A. Alexander, Eds. Handbook of Research on Learning and Instruction. New York: Routledge. For a deeper dive on worked examples, look at this chapter written by one of the leading researchers on worked examples.
Chapter 11
Does Practice Make Perfect?
The Power of Practice
What Is Practice?
Recall vs. Application Practice
Practice and the Brain
How Much Practice Do Your Learners Need?
How Should Practice Be Spaced?
How Should Practice Be Sequenced?
Comparison Practice Exercises
The Power of Feedback
Applying Practice to Your Training
“One way of looking at this might be that, for 42 years, I’ve been making small regular deposits in this bank of experience: education and training. And on January 15, the balance was sufficient so that I could make a very large withdrawal.”
—Capt. Chesley “Sully” Sullenberger, pilot of US Airways Flight 1549, the “Miracle on the Hudson”
Do you recall what occurred on that day, January 15, 2009? Shortly after takeoff, a bird strike caused total engine failure. In those few life-or-death moments, the successful landing of the aircraft in the Hudson River saved 155 passengers and crew. From flight emergency responses to music to golf, evidence from world-class performers reinforces the maxim that success is 99 percent perspiration and 1 percent inspiration. For the most part, productive learner engagement during training does not happen automatically. It is up to the instructional professional—you—to build and facilitate learning environments that promote deliberate engagement. Chapter 4 looked at engagement methods to support active learning, including questions, self-explanation questions, and teach-backs. Adding effective practice opportunities to your lessons is your main tool for active learning. This chapter examines evidence behind five core guidelines to help you maximize the benefits of practice during training:
• what kind of practice works best
• how much practice to include in your lessons
• where to place practice
• how to group practice
• how to leverage comparison exercises.
The Power of Practice
From Sullenberger to Yo-Yo Ma, world-class performers do not emerge haphazardly. Star performers start young. Yo-Yo Ma started to study the cello with his father at age four, and Sullenberger obtained his first pilot license when he was 14. Second, they invest countless hours in regular focused practice even after reaching high performance levels. In fact, the best musicians, athletes, and chess players require a minimum of 10 years of sustained and focused practice to reach their peak performance period. Naturally, aptitude and attributes play a role. I'm a basketball fan. But at five feet two inches tall (not to mention my age!), no amount of practice will transform me into a professional-level player. Sullenberger is a smart man, having qualified for Mensa (the high-IQ group) when he was 12. However, for the most part, we underestimate the role of focused practice in building competence, and in fact most of us fail to reach the full potential of our natural gifts. It bears repeating: Expertise is 99 percent perspiration and 1 percent inspiration!
What Do You Think?
Place a check next to each statement you believe is true about practice:
☐ A. The more practice, the better the learning.
☐ B. Six practice exercises placed at the end of the lesson will lead to better learning than the same six exercises distributed throughout the lesson.
☐ C. When teaching two topics, it's better to group practice questions according to topic than to mix questions for both topics in the same section.
☐ D. "Summarize three main arguments supporting goal setting based on your assigned reading" is an effective practice exercise.
What Is Practice?
Practice in learning environments is a deliberate assigned activity designed to promote a behavioral and psychological response that will build goal-relevant knowledge and skills. Responding to a well-designed multiple-choice exercise, repeating 20 free throws in basketball, engaging in role play with a classmate, working collaboratively on a case study, or dragging and dropping screen objects in response to an online question are some examples of common practice formats. Let's look at this definition in more detail. First, while learners can and do undertake practice on their own, the quality of that practice may not lead to optimal results. The practice may not be aligned to desired outcomes, and feedback may not be available. Instead, a deliberate assigned activity is one that takes into account the instructional goals and incorporates evidence-based guidance on productive engagement. To qualify as practice, the learner must make some kind of behavioral response. For the purposes of formal training, that response usually generates a visible product—one that can be evaluated by the instructional environment. Naturally, learners can also practice without behavioral responses, such as silently repeating a vocabulary word or self-explaining an example. However, to maximize the benefits from feedback, I recommend embedding opportunities for behavioral responses that can be evaluated by the learning environment and shaped by feedback. As chapter 4 showed, the behavioral response must promote the appropriate psychological activity. Activity per se does not necessarily lead to learning and can even depress learning. Therefore, practice is a specific assignment intended to help learners bridge performance gaps. There is a distinction between practice in general and deliberate practice that focuses on specific skill gaps. We all know of the recreational golfer who accumulates many practice hours over time but never improves beyond a baseline, which typically falls far short of the person's capabilities. Deliberate practice requires an analysis of skill gaps and a focus on those gaps, usually with the help of a skilled coach or instructor (Ericsson 2006).
Recall vs. Application Practice
There are two types of practice common in training settings: recall assignments that ask the learner to repeat or recognize the content of the lesson and application assignments that ask the learner to organize content or to apply knowledge. For example, compare the three practice assignments from an Excel training session (Figure 11-1). Which exercises promote effective behavioral and psychological responses?
Figure 11-1. Excel Formula Practice Versions A, B, and C
Practice Version A is a recall assignment that calls for repetition of content. Recalling the features of an Excel formula does not mean that the learner can identify or construct a viable formula. Practice Version B reflects a closer alignment to actual job performance because it asks the learner to identify a correctly formatted formula. This is a useful exercise for an important knowledge topic associated with using Excel. However, you would not want to stop with this practice because it fails to require learners to actually perform the task of constructing and entering a formula. Practice Version C is the closest to the requirements of the job. In short, Practice Version C involves both behavioral and psychological responses aligned to the workplace. Historically, I have discouraged the use of recall practice assignments, such as the one shown in Practice Version A. However, recent evidence suggests that recall practice can be beneficial. Roelle and Nuckles (2019) compared the learning effects of retrieval practice with a generative activity that asked learners to organize and illustrate the main ideas of a text. All learners first were assigned a study phase that involved reviewing a text. Next, as practice some learners were asked to recall some of the main ideas of the text, while others were asked to organize and provide examples of the main ideas. Two types of texts were used in the experiment: a well-organized text with plenty of examples (high coherence) and a poorly organized text with no examples (low coherence). The researchers reported that the generative activity (organize and provide examples) led to best learning among those assigned to the less coherent text because the learners were required to organize and illustrate it themselves. In contrast, those who reviewed a more effective text benefited from retrieval practice because they were able to rehearse ideas that were already well organized and illustrated. The authors recommended that generative activities be assigned early in learning to help learners form good mental models of the content but that retrieval practice be assigned later in the learning process. Instructional scientists have reviewed the benefits of retrieval practice, also called the testing effect. In testing effect research, after reviewing a text some learners take a practice test while others are asked to "restudy" or reread the content. A recent meta-analysis evaluated 272 effect sizes from experiments that compared practice testing to a restudy assignment (Adesope et al. 2017). The research team reported an effect size for practice testing of 0.51, which is a medium-sized effect. Benefits of practice testing were greatest when the practice exercise and the final test used the same format, and were larger on delayed rather than immediate learning assessments. Van Gog and Sweller (2015) reviewed evidence suggesting that the testing effect does not apply to more complex learning goals, which are typical of much workforce learning. Until we have more evidence on testing effects, I suggest that you assign practice that requires learners to respond in ways that mirror job role performance.
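For readers who want the arithmetic behind a number such as the 0.51 above, a standardized effect size (commonly Cohen's d) expresses the difference between two group means in standard-deviation units:
\[
d = \frac{M_{\text{practice testing}} - M_{\text{restudy}}}{SD_{\text{pooled}}}
\]
By Cohen's widely used benchmarks, values near 0.2 are conventionally read as small, 0.5 as medium, and 0.8 as large, which is why the 0.51 reported for practice testing is described as a medium effect and why the 0.71 reported later in this chapter for spaced practice is treated as a high one.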
Practice and the Brain
Recall from chapter 3 the three forms of cognitive load: intrinsic, extraneous, and germane. Germane is useful cognitive load. It promotes learning by stimulating active processing in working memory, leading to encoding in long-term memory. Germane load is generated by your practice exercises. But it's not enough to get new knowledge and skills encoded into long-term memory. They must be retrieved from long-term memory later when needed on the job. Therefore, as you plan practice, you need to consider both how to get knowledge and skills into long-term memory and back out again.
Your training goal is to avoid “apparent learning.” Apparent learning occurs when learners respond correctly to exercises or quizzes in class but fail to apply skills to the job. Instead, your goal is transfer of learning. New knowledge and skills processed at the time of learning must transfer later to job settings. Fortunately for the passengers and crew of US Airways Flight 1549, Captain Sullenberger was able to make a rapid and effective transfer of his years of accumulated knowledge and skills.
Embed Retrieval Hooks During Learning
Suppose I asked you to state the months of the year. No problem, right? However, what if I asked you to state them in alphabetical order? You could do it, but it would likely take you a bit longer because your learning cues were chronological, not alphabetical. Memory retrieval hooks must be implanted at the time of learning. It's too late to add them later. For workforce learning, retrieval hooks are the sights, kinesthetics, and sounds of the workplace. In other words, it's all about context. To optimize transfer of learning, you need to embed the context of the job into your practice exercises. Rather than asking learners to recall the steps to perform a task or list the names of equipment parts, embed the job context into the exercise by asking learners to perform the task or circle the part of the equipment that performs a specific function. Based on the evidence and the psychology of learning, I offer my first recommendation.
Practice Guideline 1
Incorporate the context of the job to build practice exercises that promote germane mental load.
Chapter 3 discussed the role of automaticity in learning. Any task that is repeated hundreds of times becomes hardwired in long-term memory and can be performed with relatively little investment of working memory resource. Driving a car or touch typing are two examples. Skilled drivers and typists practiced these skills repetitively until they became automated. Automated skills are important when complex performance demands fast and effortless execution of subskills, allowing allocation of working memory resources to higher-level task demands. For example, while typing, the skilled writer can plan and execute the organization and expression of ideas and not be bogged down in the mechanics of typing. This leads us to our next question regarding the amount of practice you should build into your training.
How Much Practice Do Your Learners Need?
Practice takes time and time is money. Therefore, deciding how much practice to include is important. Top performers, such as world-class musicians or athletes, maintain a regular and rigorous practice regimen. Does this mean that the more we practice, the better we get? The answer is yes and no. In fact, you do continue to improve over time with practice—but at a diminishing rate. Instructional psychologists call this the power law of practice and skill. The power law means that skills build rapidly during the first few practice sessions. However, as practice continues, the rate of skill acquisition slows. The greatest improvements will accrue from the first practice sessions. Following the first few practice sessions, improvement continues but at a slower rate.
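In the research literature, the power law is often summarized with a simple equation relating performance on the Nth practice trial to the amount of practice (a general textbook form, used here only as an illustration):
\[
T_N = a \cdot N^{-b}
\]
Here T_N is the time (or error rate) on the Nth practice trial, a is performance on the first trial, and b is a positive constant that governs how quickly the gains flatten out. Because N is raised to a negative power, the first few trials produce large improvements and later trials yield progressively smaller ones.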
Evidence on the Amount of Practice and Learning
Rohrer and Taylor (2006) measured learning of a mathematical procedure from two different practice regimens. All participants viewed a tutorial that demonstrated how to solve several example problems. Then half the participants had a practice session consisting of three problems while the other half completed nine practice problems. Some participants from each group were tested one week after practice and the others were tested four weeks later. The results are shown in Figure 11-2.
Figure 11-2. Learning Outcomes From Lower and Higher Amount of Practice. Based on data from Rohrer and Taylor (2006).
As you can see, everyone did better on the immediate test than on the delayed test. However, there were no real differences on either the immediate or the delayed test between the low and high practice groups. These results are an example of the power law of practice. The research team concluded that “the minimal effect of overlearning on retention can be interpreted as an instance of diminishing returns. That is, with each additional amount of practice devoted to a single concept, there was an ever smaller increase in test performance.” For greatest efficiency, pilot test your training to determine the amount of practice needed to achieve training goals.
A Time for Over-Learning
There are situations that benefit from extensive practice that leads to automaticity. Landing an aircraft is one example. Not only are the consequences of error very serious, but also multiple actions must be taken very quickly. In other words, the task imposes a high intrinsic load. There is no time to refer to a working aid to decide when to start the flare or to identify the correct power level for a given stage of descent. Over-learning is expensive; it requires enough drill and practice to build automaticity. In many workplace settings, automaticity can evolve naturally through repetitive performance on the job—in other words, through experience. However, in some situations, such as landing an aircraft, the first solo landing must be pretty good. Over-learning is justified. Over-learning may also be needed when the final task is so complex that the underlying component skills must be automatic to free up working memory resources to devote to the entire task. I feel that one of the most important skills I learned in high school was typing. At least it is the one skill that I have continued to use more than 50 years later. With automated typing skills, I can devote most of my working memory capacity to expressing my ideas. However, my typing skill was an expensive investment requiring many hours of daily practice. You will need to decide whether over-learning through extensive and time-consuming practice is warranted for your learning goals. When over-learning is needed, a gaming format may make the repetitive practice more palatable. See chapter 16 for evidence on games and learning. So how much practice do you need? There is no universal rule. Based on the evidence and psychology of repetitive practice, consider the next recommendation.
Practice Guideline 2
Adjust the amount of practice in your training based on:
• Consequences of error. If serious, you need more rather than less practice.
• Acceptability of a job aid. If yes, then fewer practice exercises might be needed and practice should incorporate the job aid.
• Complexity of the work. If high, drill and practice might be needed to automate requisite sub-skills.
• Pilot tests of your instruction to determine how much practice is needed to achieve desired competency levels.
How Should Practice Be Spaced?
Have you ever crammed for an exam? Most of us at one time or another waited until the last minute and then studied intensively. In most cases, cramming does work at least to pass a test. On other occasions, you may have been more organized and scheduled your study periods for several weeks prior to the exam. Cramming is called massed practice and can be contrasted with spaced practice. Which approach is better? One of the first and most enduring principles of active learning is called the "spacing effect." First reported by Ebbinghaus in 1885, the spacing effect states that when practice opportunities are distributed over time, learning—especially delayed learning—is better. One often-cited study evaluated different practice schedules for teaching postal workers to type (Baddeley and Longman 1978). Learners practiced once or twice a day for either one or two hours at a time. The researchers found that those learners who achieved their typing goals in the fewest total practice hours were those assigned to the most distributed practice schedule—for example, once a day for an hour. Of course, those with more distributed practice required a longer calendar time to reach the goal. Often it may not be practical to extend overall learning time; in fact, a common goal in workforce learning is to accelerate job expertise. We have all been successful at one time or another with cramming or massed practice. That is because the benefits of spaced practice are most pronounced on delayed learning. For example, in an English as a second language course, Bird (2010) assigned practice on three kinds of verb tenses, asking learners to correct nongrammatical sentences. One group (massed) practiced over two weeks and another (spaced) over eight weeks. All students took a surprise test seven and 60 days after the last practice session. Take a look at the data in Figure 11-3.
Figure 11-3. The Effects of Massed and Spaced Practice on Immediate and Delayed Learning
As you can see, both massed and spaced practice led to equivalent learning one week after study. However, the benefits of spaced practice are apparent after 60 days. Since many training organizations don’t evaluate learning over time, the benefits of spaced practice are rarely documented. Based on a lot of accumulated evidence, though, spaced practice will give you a better long-term return. Rohrer (2015) concluded that “distributing a given amount of study time over a longer rather than shorter period of time leads to increased post-test scores if, and perhaps only if, the test is given after a delay of at least a month or so.”
Spread Practice Over the Duration of Training
There are several ways that you can space practice in an applied setting without disrupting the instructional schedule. One approach is to spread a given amount of practice over the duration of the training time. For example, imagine you have a course with six lessons and you plan on including six practice exercises with each lesson. At the end of lesson 1, assign two lesson 1 practice exercises. At the end of lesson 2, assign two exercises from lesson 1 and two from lesson 2. You can see the pattern. Rather than placing all practice related to a specific lesson together, you distribute those exercises among your lessons. In that way, each lesson will have some review practice as well as practice on new content.
Use Media to Extend Learning Time Frame
A second approach is to use a blend of media to spread learning events over a longer time frame. For example, you may initiate your training with some reading assignments and a brief on-the-job assignment followed by a virtual classroom session to discuss the outcomes of the job assignment. Following more workplace assignments, a face-to-face instructor-led session may leverage social presence via group problem solving or role plays. After the in-person session, asynchronous follow-up activities may include participation in discussion boards, submission of products to multimedia pages, and other assignments that require participants to continue to practice over time. Rather than an event, learning becomes a process. Rather than using instructional time to disseminate information, effective face-to-face learning leverages the opportunities inherent in high social presence with activities that involve personal engagement with others.
Assign Short Deadlines for Assignments
A third approach is to force frequent study by making shorter rather than longer deadline assignments. Fulton and others (2013) gave three different assignment deadlines in a 12-week online statistics course attended by working healthcare executives. Participants were randomly assigned to deadlines that were either weekly, monthly, or end-of-course. Learners with weekly deadlines performed significantly better on exams than the monthly or end-of-course groups, which were equivalent. The research team concluded that "provision of frequent, evenly spaced deadlines results in greater practice distribution which consequently predicts performance on tests of retention and transfer." In summary, even though the spacing effect has been reported and consistently demonstrated for well over 100 years, this important principle is rarely applied. In his synthesis of meta-analyses, Hattie (2009) reported an effect size of 0.71 for spaced practice—a very high effect! Perhaps because most learning from training classes is assessed soon after the instructional event (if at all), workforce learning professionals have not seen the benefits of practice spacing. The considerable evidence accumulated over a range of topics and time is the basis for this guideline.
Practice Guideline 3
Distribute practice within your lessons and throughout your course rather than lumping it together. Convert learning events into learning processes.
How Should Practice Be Sequenced?
Imagine you have three or more categories of skills or problems to teach such as how to calculate the area of a circle, a square, and a triangle. In traditional courses, a lesson on the area of circles would be followed by a lesson on squares and then triangles, each lesson including a presentation of the formula, some examples, and practice exercises. This type of organization is called blocked practice. An alternative sequence would combine all three types of calculations so that a practice on the area of a triangle would be followed by a practice on the area of a square and so on. This organizational scheme is called interleaved or mixed practice. Rohrer and others (2015) compared learning of graph problems and slope problems in a classroom setting. All students worked four problems that matched the lesson skill, such as four graph problems after an explanation of graphs and four slope problems after an explanation of slope. Then eight additional practice problems were assigned. Half the students were assigned four graph problems grouped together followed by four slope problems. The second half received the same eight problems in a mixed format. Half the students were tested after one day and the other half after 30 days. You can see the test results—mixed practice groups scored better after one day as well as after a 30-day delay (Figure 11-4).
Figure 11-4. Mixing Practice Problem Types Leads to Better Learning on Immediate and Delayed Tests
From Rohrer et al. (2015).
A more recent large-scale classroom study compared interleaved versus blocked practice schedules over a four-month period. Four types of mathematical problems were involved: graph problems, inequalities, expressions, and circle geometry. One month later, an unannounced test found that the interleaved group scored 61 percent compared to 38 percent among those who studied with the blocked organizational format (Rohrer et al. 2019). Interleaving has been shown to have positive effects for content other than mathematics. For example, Kornell and Bjork (2008) taught college students to distinguish among different artists by viewing landscape paintings of each artist. Some lessons used a blocked approach in which students reviewed six paintings by one artist followed by six of a second artist, and so on. The interleaved group reviewed one painting by Artist A, followed by one from Artist B, and so on. The final test asked learners to identify the artists of a series of paintings not seen during the lessons. Those in the interleaving group scored 59 percent compared to 36 percent in the blocked group. Interleaving may help learners make discriminations among related categories and is therefore most beneficial with learning goals that require discrimination among classes of objects, concepts, or principles. For example, in an Excel class, an interleaved approach would combine calculations requiring addition, subtraction, division, and multiplication rather than grouping like calculations together. The evidence accumulated to date is the basis for the following guideline.
Practice Guideline 4
When it's important to tailor responses to different categories of problems, follow practice on individual categories with mixed practice items.
Comparison Practice Exercises
We saw that mixing or interleaving assignments that involve related but distinct concepts or problems benefits learning, perhaps by promoting comparisons among the different categories presented. Assignments in which learners are asked to make explicit comparisons between two sets of data, two mental models, or two scenarios have also proven effective.
Comparison As Prework
Several studies have shown that giving students a comparison assignment as prework before a lecture resulted in better learning from the lecture than students reading a summary of the comparison content as prework. In a psychology class, Schwartz and Bransford (1998) assigned two groups of college students prework to either compare data from two contrasting cases or to read summaries of the data. Following a lecture that focused on the theories that explained the data, learners who had analyzed contrasting cases as prework performed better on a transfer test than those who read summaries of the data. The research team suggested that the prework comparison activity prepared learners to learn more deeply from an explanation of the related concepts provided in the lecture.
Comparison As Lesson Practice
Gadgil and others (2012) focused on teaching the correct double loop process model of blood flow through the heart. First, they identified students with incorrect ideas such as a single loop model (Figure 11-5). Their instructional goal was to correct misconceptions in these students. They tried two approaches. One group was shown a drawing of their flawed single loop model next to a drawing of a correct double loop model and was asked to compare the two drawings. A second group was shown only the correct double loop model and asked to explain it. On a post-test, those in the comparison group learned more.
Figure 11-5. A Drawing Reflecting an Incorrect Mental Model of Blood Circulation
From Gadgil et al. (2012).
The authors suggest that a comparison activity—especially one in which the learner compares their own flawed mental model with an expert model—helps learners correct misconceptions more than simply providing an explanation of a correct model.
Comparisons As Debriefs
A common technique in scenario-based learning is to track learner actions and then display those actions next to an expert trace. The example in Figure 11-6 shows a multimedia scenario-based learning course for automotive technicians. The lesson ends with a summary of the learner's actions next to those of an expert. To promote reflection, the debriefing activity should ask learners to write a comparison of their approach to the expert approach.
Figure 11-6. A Scenario Comparison Debrief Activity
With permission from Raytheon Professional Services.
In summary, comparisons of data, examples, or models have proven effective as prework, training practice, and debriefs. We need more evidence on these techniques, but the trend in the available research suggests this guideline.
Practice Guideline 5
Assign comparison practice exercises to build relevant prior knowledge, to correct flawed or incomplete mental models, or to reflect on decisions made in a training scenario.
The Power of Feedback
A major caveat to all of this chapter's guidelines is that feedback is an essential element to reap the benefits of practice. Hattie (2009) reported that feedback is among the most powerful influences on achievement, with a high effect size. But not all feedback is effective. In the next chapter, we will look at guidelines to help you design and deliver feedback both to your learners and to instructional professionals to help them adjust instruction based on learner outcomes.
The Bottom Line

Now that you have reviewed the evidence, here are my responses to our initial questions:

A. The more practice, the better the learning. True but with a caveat. It’s true that you get better learning with more practice. However, according to the power law of practice, the improvements will diminish over time. Although effective practice can lead to improved performance, the biggest skill gains accrue in the first few practice sessions. You will need to consider the criticality of the task and the need for automaticity as you weigh the return on investment of extensive practice.
B. Six practice exercises placed at the end of the lesson will lead to better learning than the same six exercises distributed throughout the lesson. False. Putting all of your practice in one spot in your lesson or course is not as effective as dispersing it throughout. We have ample evidence that practice spread out over a learning event leads to better long-term learning.

C. When teaching two topics, it’s better to group practice questions according to topic than to mix questions for both topics in the same section. False. Although it will make the instructional event more challenging, you will get better learning from mixed practice. If your content includes concepts and strategic skills, and it is important to determine when to use which, blend questions from the different topics to maximize learning.

D. “Summarize three main arguments supporting goal setting stated in your assigned reading” is an effective practice exercise. Maybe. This practice requires some reorganization of content and is likely better than an assignment to restudy the material or no assignment at all. However, a more effective exercise would ask learners to identify a personal goal and describe how features of their goal lead to better outcomes.
Applying Practice to Your Training

How can you apply the research we’ve reviewed in this chapter to your own learning environment? First, remember that learning a new skill does require practice, and depending on the criticality and complexity of the task, it may require a great deal of practice.
However, practice is expensive and, unless optimized, may not give you a return on investment. As you plan your lessons and courses, apply the following guidelines:

• Resist the temptation to cover more material at the expense of practice opportunities.
• Create practice exercises that mirror the knowledge and skills of the work environment.
• Minimize “regurgitation” exercises in favor of application exercises that incorporate the context of the job. Use realistic job scenarios as a springboard for practice questions.
• Adjust the amount of practice based on the criticality of the task performance and the need for automatic responses on the job.
• Distribute practice throughout your learning events. Take advantage of virtual learning environments to spread practice opportunities over time.
• Vary the context of practice when the goal is both how to perform a skill as well as knowing when to apply that skill. As you distribute your practice sessions among lessons, mix the skills practiced and reap the benefits of spaced practice and interleaved practice schedules.
Coming Next

To benefit from practice, learners need to know how they did on their practice attempts. Were their responses correct? How could they improve on their outcomes? Providing effective feedback is one of the most important instructional methods you can implement. Chapter 12 reviews evidence-based guidelines on feedback.
FOR MORE INFORMATION

Hattie, J., and G. Yates. 2014. Visible Learning and the Science of How We Learn. New York: Routledge. See Chapters 6 and 7. A clearly written book that draws on the evidence regarding key instructional methods.

Rohrer, D. 2015. “Student Instruction Should Be Distributed Over Long Time Periods.” Educational Psychology Review, 27: 635-643. A clearly written review article on the spacing effect.
Chapter 12
Feedback to Optimize Learning and Motivation

What Is Feedback?
Best Types of Feedback
Is Negative Feedback Useful?
When to Provide Feedback
Maximizing the Value of Peer Feedback
Feedback for Instructional Professionals
Applying Feedback to Your Training
Do you have a Fitbit or other exercise-tracking device? Have you set a personal goal to replace the default 10,000 steps? Do you check your activity data regularly? Does the data increase your motivation to exercise? Do you compare your steps against those of friends on a leaderboard? The value of exercise-tracking devices is feedback. Some life insurers plan to adjust premiums if clients allow their exercise data to be reported and tracked.

Is feedback in learning effective? Evidence shows that feedback is among the top 10 most powerful instructional methods in our training tool kit. In previous editions of this book, I included feedback as a section in the chapter on practice. However, because feedback is such a critical part of your instruction and there is quite a bit of research on it, it warrants its own chapter. Based on a review of 12 meta-analyses that included 196 studies, Hattie (2009) reported an average effect size for feedback of 0.79, which is twice the average effect of other instructional methods. Surprisingly, in a classic meta-analysis, Kluger and DeNisi (1996) reported that about one-third of the time, feedback actually depressed learning! While there is a common assumption that feedback always helps, it turns out that the effects of feedback are more nuanced.

As we have seen with other instructional methods, such as graphics, a number of factors can influence the effects of feedback. As summarized in Table 12-1, feedback can be positive, neutral, or negative; aimed at the learner or the task; immediate or delayed; corrective or explanatory; and it can come from a variety of sources. In addition, feedback may be ignored or attended to by the learner. These are some of the main factors that modify the effects of feedback.
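If you want a reminder of what numbers such as 0.79 represent, most of the meta-analyses cited in this book report a standardized mean difference. Assuming the common Cohen’s d formulation (individual studies vary in the details), the effect size compares the average outcome of learners who received the method with the average outcome of a comparison group, scaled by the pooled standard deviation:

$$ d = \frac{\bar{X}_{\text{method}} - \bar{X}_{\text{comparison}}}{SD_{\text{pooled}}} $$

By the conventional benchmarks (roughly 0.2 small, 0.5 medium, 0.8 large), an average of 0.79 for feedback is a large effect.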
Table 12-1. Some Variations of Feedback

Valence (Positive / Neutral / Negative): The degree to which feedback includes praise, emphasizes effective features of the response, or focuses on the aspects of the response that are incorrect or need improvement.

Timing (Immediate / Delayed): Whether feedback is provided soon after a response or after a delay ranging from minutes to days.

Target (Self / Task / Task Process): The degree to which feedback draws attention to the ego (such as praise), to the task, or to suggested approaches to improve responses.

Content (Corrective / Explanatory): Whether the feedback informs learners of the correctness of their response or includes an explanation of a correct response.

Focus (Task Criterion / Normative): Whether the feedback focuses on a standard or specific goal associated with the learning outcome or on the learner’s response relative to others.

Source (In-person / Mediated): Whether feedback is delivered face to face by an instructor or by a mediated source such as a computer.
What Do You Think?

Imagine this question in an online lesson: Take a look at the P&L statement for last year for this regional store. What is the profit for Q1?

Which of the following feedback options do you think would be effective? More than one option is correct.

A. Good work. You are clearly an effective manager.
B. The correct answer is: a profit of $135 thousand.
C. Correct. You scored in the top 10 percent of your cohort.
D. Correct. After subtracting expenses, inventory, debts, and depreciation, the store realized a profit of $135 thousand.
E. Incorrect. You forgot to include depreciation. Please recalculate.
What Is Feedback?

Feedback comes in many forms and potentially from a variety of sources. However, at a high level, all feedback provides learners with information regarding some aspect of their performance. The goal of feedback in learning settings is to modify the learner’s cognition, motivation, and behaviors. Fitbit feedback is focused on modifying exercise motivation and behaviors. Much educational feedback is focused on modifying cognition and motivation.

I am a consistent user of my Fitbit, which has definitely shaped my motivation. I set a personal goal of 12,000 steps and check my progress a couple of times a day. Once a week, I get an email summary of my data along with a leaderboard comparing my steps to those of other Fitbit friends. Regarding the source, Fitbit feedback comes from an automated rather than an in-person source. It can be accessed at any time but is received automatically upon reaching a goal and on a weekly basis. The feedback focuses on exercise completed but does not offer specific exercise recommendations. It focuses both on the task goal—for example, 12,000 steps per day—and on a comparison with others on the weekly leaderboard.

Fitbit feedback illustrates many of the major features of feedback summarized in the table. Feedback may come from an instructor, peer, computer, or experience. Feedback may be positive, negative, or neutral. Much Fitbit feedback is neutral—a report of your exercise metrics. However, positive feedback is also included in the form of visual and auditory reinforcement (vibrations and animations) upon goal attainment. Feedback may be provided immediately or with a delay after a response. Feedback may focus on how the learner’s response meets a specific success criterion (12,000 steps per day) or on how the learner stacks up
against others (a leaderboard of others’ steps). Learners may attend to or ignore feedback. Learners who attend to feedback may attribute their results to their own ability, to their effort, or to luck, to name a few common rationales. As you can see, many feedback features have been shown to shape its effectiveness. Fortunately, research reports—especially meta-analyses of multiple experiments—offer evidence-based guidelines regarding the best types of feedback to use in instructional settings.
Best Types of Feedback

This section reviews some feedback guidelines linked to improved motivation and learning.
Target Feedback on the Task

An important feedback feature is the target. Three common target levels include feedback directed at the self, the task, or the process to complete the task.

Well done—you are clearly a capable learner! This positive feedback in the form of praise targets the learner’s ego but in fact may not benefit learning. Praise alone directs learner attention to themselves and not necessarily to the task. Hattie and Yates (2014) contend: “We know of no research finding suggesting that receiving praise itself can assist a person to learn or to increase their knowledge and understanding.” Therefore, while praise might be motivational and should not be ignored, it will often do little to promote learning. Based on their extensive review of evidence, Hattie and others (2017) recommend “not to mix praise with feedback about the content, as it dilutes the feedback message.” When given, praise “needs to be specific, sincere, accurate, earned, preferably unexpected, not exaggerated, more private than public, and not include social comparison.”
Let’s review a more effective form of feedback. Your analysis is on target, but you failed to include customer satisfaction data. This feedback focuses on the learning product—a profitability analysis in this example. By drawing learner attention to specific strengths and areas for improvement, this feedback directs learner attention to the task rather than the self.

However, feedback may go beyond learner product features and direct attention to the processes best aligned to success. For example, Your analysis is on target; however, to sharpen your projections, conduct a regression analysis on customer feedback and store profitability. Here the goal is to direct learner attention to how to better complete the task or solve the problem. Comments like these might be more accurately termed “feed forward,” as they focus the learner on how to improve their outcomes. Evidence on the effectiveness of different feedback targets is the basis for the following guideline.
Feedback Guideline 1

Emphasize task and task process comments in feedback; avoid mixing praise with task feedback.
Provide Explanatory Feedback

A common, simple form of feedback is Correct or Incorrect. Or maybe, Try Again. Or, Incorrect. The answer is 32. These feedback examples all provide the learner with knowledge of their outcomes—right or wrong or a correct answer. An alternative form of feedback provides knowledge of results plus an explanation. For example: Correct. Good eye contact along with paraphrasing shows you are listening to the customer. It’s more work to prepare and deliver explanatory feedback. Is that extra work a good investment?
In a botany game called Design a Plant, learners build plants out of a selection of leaves, stems, and roots to best align with the environmental conditions of imaginary planets. An on-screen agent called Herman the Bug provides feedback. Half the learners received Correct–Incorrect feedback. The other half received an explanation such as: Yes, in a low sunlight environment, a large leaf has more room to make food by photosynthesis (Moreno and Mayer 2005). Which feedback option is more effective? Several experiments similar to this one show similar results. In a meta-analysis of computer-based feedback, Van der Kleij and others (2015) found that elaborated feedback like the botany explanation produces larger effect sizes than statements of correct, incorrect, or correct solutions. Is the effort to create elaborated feedback worthwhile? The answer is: Yes, providing elaborated feedback is more work but has a major positive effect on learning and is worth the effort.
Feedback Guideline 2

Provide feedback that includes an explanation of the correct answer.
Focus Feedback on Progress Toward a Success Criterion, Not on Performance of Others

Feedback is most meaningful in the context of a goal. If a manager’s goal is to achieve customer satisfaction ratings greater than 85 percent and the average monthly ratings are 70 percent, the feedback draws attention to a 15-point discrepancy. Alternatively, if on a leaderboard your store scores 15th out of 70 on customer satisfaction ratings, the feedback draws attention to your performance in relation to others.
Instructional psychologists distinguish between criterion-referenced feedback and normative feedback. Criterion-referenced feedback provides knowledge regarding how the product or behavior meets goal standards. The goal in instructional contexts is typically expressed in the form of a learning objective. For example, Given spreadsheet data, you will graph a five-year profit and loss summary with no errors. Normative feedback provides knowledge regarding how the product or behavior compares to the products or behaviors of others, typically peers. Which is better? Feedback is most powerful when it is linked to a goal and it informs the learner how they are progressing toward that goal. Feedback that compares the learner’s outcomes to the outcomes of others draws attention to the self and has been shown to reduce motivation for learning. Evidence of the relationship between feedback and outcomes suggests the following.
Feedback Guideline 3

Provide feedback that helps the learner attend to their performance and their progress in relation to a specific goal or criterion. Avoid feedback that draws attention to how the learner stacks up against others.
Is Negative Feedback Useful?

So far, most of our guidelines have discussed feedback that provides knowledge of results along with an explanation. Negative feedback provides criticism, which we often call “constructive criticism.” Constructive criticism attempts to close performance gaps by focusing on products or evaluation outcomes. Often trainers want to provide
helpful critiques—comments that help learners improve and at the same time are not demotivating. Out of fear of adverse learner reactions, many instructors avoid negative feedback. What evidence do we have about it?

Fong and others (2019) published a meta-analysis of 78 studies that compared the effects of negative feedback on motivation to the effects of positive feedback or no feedback. Note that their focus was on motivation, not learning. Not surprisingly, compared to positive feedback, negative feedback was demotivating. However, negative feedback that was instructional—that is, provided guidance on how to improve performance—did not diminish motivation as much as negative feedback without guidance. In addition, negative feedback that included progress comments was not demotivating. For example, Your revision reflects greater clarity with your transitions (progress). However, to add even more clarity, consider adding some examples to each of your subtopics (guidance). Normative negative feedback (feedback that compares learning outcomes with others) had a significantly larger negative effect on motivation compared to positive feedback. Finally, negative feedback delivered in person was less demotivating than feedback delivered in a mediated format, such as a computer.

Negative feedback does not affect everyone in the same manner. Part of the impact of negative feedback on motivation will depend on what coping strategies the learner uses. If feedback is negative, learners are likely to invest more effort when:
• The goal is clearly stated.
• The learner is committed to the goal.
• The learner believes in eventual success.
Alternatively, if the learner believes there is a low probability of achieving the goal, they are likely to abandon the goal or reject the feedback.

In summary, Fong and others (2019) conclude that negative feedback is less demotivating when:
• Specific criteria are used as the basis for feedback.
• The feedback includes guidance on how to improve outcomes.
• Feedback is provided in person rather than via a mediated source, such as a computer-based lesson.

Consider the following option as you develop feedback.
Feedback Guideline 4

Give feedback based on learner progress toward a specific goal criterion along with suggestions for improvement.
When to Provide Feedback

Feedback can be given immediately after responding, or it can be delayed until the end of the exercise or lesson, or even longer, such as a day or a week. Is it better to provide immediate or delayed feedback? The best timing for feedback may depend in part on the level of learning involved—that is, lower level learning, such as recall, versus higher level learning, such as problem solving. In a meta-analysis of feedback in computer-based learning, Van der Kleij and others (2015) found positive effects for both immediate and delayed feedback, although immediate feedback led to better learning outcomes. Evidence also showed that most learners prefer immediate feedback. No doubt the best timing of feedback may depend on
several factors. However, until we get more evidence, I recommend that most feedback be provided soon after the response.
Feedback Guideline 5

Provide feedback immediately or soon after the learner responds.
Maximizing the Value of Peer Feedback

If the instructional setting involves multiple learners, either in person or in a virtual online setting, instructors might want to consider the benefits of structured peer feedback. Most instructors lack resources to provide as much feedback as they would like to individual learners. Can peer feedback provide a useful supplement to instructor comments?

A potential drawback to peer feedback is a lack of accuracy. Nuthall (2007) noted that 80 percent of verbal feedback in classrooms comes from peers and that much of this feedback is incorrect. However, peer feedback has been shown to improve with learner training along with the use of checklists or feedback criteria. For example, in peer reviews of lessons, I would provide specific checklists, such as Does the learning objective use a behavioral verb and a performance criterion? or Are at least two examples provided? Using a checklist, I would ask peer groups to offer at least 50 percent positive comments and to limit themselves to three or fewer major improvement suggestions. I would also encourage feedback focused on substantive issues rather than mechanics, such as grammatical or spelling errors.
In research on peer feedback on written products, Patchan and others (2016) evaluated features of feedback comments that affected learner implementation of the comment and the overall quality of the revised paper. A student may implement a suggestion, but the resulting revision may not be an improvement. In contrast, comments that lead to quality revisions may not be as likely to be implemented.

In a review of 7,500 comments from 351 reviewers to 189 authors, the research team found that only overall praise and specification of locations in the paper (localization) increased the probability that a comment would be implemented. For example, Second page of your paper flowed very smoothly. Good job! incorporates both praise and localization, whereas I would suggest you write a conclusion after paragraph 4 includes localization but no praise. Unfortunately, neither of these feedback features led to improvement of the revised paper. Feedback that led to improved revision quality focused on writing issues—such as clarity of main ideas—and substance issues, such as missing or inaccurate content. Unfortunately, comments that focused on writing issues were less likely to be implemented.

Their findings suggest that general praise improves the likelihood of implementation but not the quality of the product, while a focus on substantive rather than mechanical issues leads to greater product quality improvement, although such comments may be less likely to be implemented. Clearly, the effects of feedback on writing are not straightforward. Overall, the evidence on peer feedback suggests the following guideline.
Feedback Guideline 6

Leverage peer feedback to improve learning but provide clear guidelines, such as checklists, to ensure focused and accurate comments.
Feedback for Instructional Professionals

So far in this chapter, we have focused on feedback directed toward the learner. However, Hattie (2009) suggests that the most powerful feedback is from learner to instructor. Instructional designers and instructors can use student feedback in the form of test scores, product quality, and comments to determine what students know and understand (both individually and as a group), where errors are made, and when they are not engaged. Feedback from learners to instructors makes learning visible. Research has shown that when teachers were required to use data—especially graphed data—student achievement improved due to teacher modification of their instructional techniques.

Instructional professionals developing multimedia programs should collect and assess data during prototype stages to adjust the programs in ways that maximize learning for all. Instructional professionals working in virtual or in-person classrooms should review learner feedback in the form of their learning outcomes and progress in order to offer individualized as well as group assistance.
The Bottom Line

This chapter began with several feedback options for a question on interpreting a profit and loss statement based on spreadsheet data. Now that you have reviewed the evidence, which options do you think would be best?
A. Good work. You are clearly an effective manager.
B. The correct answer is: a profit of $135 thousand.
C. Correct. You scored in the top 10 percent of your cohort.
D. Correct. After subtracting expenses, inventory, debts, and depreciation, the store realized a profit of $135 thousand.
E. Incorrect. You forgot to include depreciation. Please recalculate.

My answer is options D and E. Option D tells the learner that their response is correct and also offers a brief explanation. Option E tells the learner their response is not correct—negative feedback—but it also offers a suggestion for improvement. Evidence shows that feedback that provides praise or draws attention to how one performs relative to others should be avoided. That rules out the first and third options. Feedback that provides an explanation has consistently been shown to lead to better learning than correct answer feedback, such as option B. It will take a little more time to provide an explanation or improvement advice, but evidence strongly suggests that it’s worth the investment. Feedback that is linked to criteria of success, including information on how to improve, can be most valuable.
Applying Feedback to Your Training

Although feedback is one of the most powerful instructional methods to improve learning, keep in mind that feedback is of variable effectiveness depending on the type of feedback given, how the feedback directs learner attention, and how effectively feedback focuses on achievement of a specific goal. Also consider feedback as a two-way street including both feedback from instruction to learner and vice versa. I recommend the following guidelines:
• Begin a learning event with goals that have specific success criteria.
• Relatively soon after a learner response, provide a statement that includes: 1. whether their response is correct or incorrect, 2. a brief explanation for the correct response, and 3. suggestions focusing on how to improve their outcomes. (A sketch of how such feedback might be structured in a computer-delivered lesson follows this list.)
• In addition to explanations of the correct answer, consider incorporating strategy suggestions that will help learners improve their responses.
• Avoid comments that compare learning outcomes to other learners. The evidence suggests that normative devices such as leaderboards may not be effective for instructional purposes.
• Avoid comments that focus the learner’s attention on their ego. Praise alone has generally not been shown to improve learning as it fails to direct attention to the task.
• Provide progress reports to help learners reflect on strengths and gaps and focus on strategies to improve learning.
• Incorporate mechanisms, such as progress checks and easy access to student data in graphic form, to help instructional professionals optimize the learning environment.
• Provide constructive criticism that includes improvement suggestions and is delivered in person.
• Leverage the potential for peer feedback by providing guidance on feedback criteria.
• For written products, include not only praise targeted to specific aspects of the product but also suggestions for improvement of clarity of the writing or substantive comments, such as missing content.
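To make the second guideline concrete for computer-delivered lessons, here is a minimal sketch of how one multiple-choice item and its feedback could be represented. The structure, field names, and wording are hypothetical illustrations only, not the format of any particular authoring tool or LMS; the point is simply that each option carries an explanation and, where relevant, an improvement suggestion rather than a bare Correct or Incorrect.

```python
# Hypothetical structure for one practice item with explanatory feedback.
# Field names and wording are illustrative only, not tied to any authoring tool or LMS.
item = {
    "question": "A customer asks about a delayed order. What is the best first response?",
    "options": {
        "A": {
            "correct": True,
            "explanation": "Acknowledging the delay and paraphrasing the concern shows "
                           "the customer you are listening before you propose a fix.",
            "suggestion": "",
        },
        "B": {
            "correct": False,
            "explanation": "Quoting policy first skips acknowledgment of the customer's concern.",
            "suggestion": "Restate the customer's concern in your own words, then explain next steps.",
        },
    },
}

def feedback_message(item, chosen):
    """Build a task-focused message: correctness, explanation, and an improvement suggestion."""
    opt = item["options"][chosen]
    status = "Correct" if opt["correct"] else "Incorrect"
    parts = [f"{status}. {opt['explanation']}"]
    if opt["suggestion"]:
        parts.append(f"To improve: {opt['suggestion']}")
    # Deliberately no praise of the learner and no comparison with other learners.
    return " ".join(parts)

print(feedback_message(item, "B"))
```

However you author it, the design choice is the same: pair every response with a brief explanation and, for incorrect responses, a concrete suggestion, rather than a bare right-or-wrong flag.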
Coming Next

The next few chapters will show how to apply many of the evidence-based guidelines we have reviewed to specific types of lessons, such as lessons designed to teach procedures or lessons designed to teach strategic skills. The next chapter focuses on explanations, which are an integral part of all lessons.

FOR MORE INFORMATION

Kluger, A. N., and A. DeNisi. 1996. “The Effects of Feedback Interventions on Performance: A Historical Review, a Meta-Analysis, and a Preliminary Feedback Intervention Theory.” Psychological Bulletin, 119, 254-284. A classic review on the effects of different types of feedback on learning. Good for historical context.

Van der Kleij, F. M., R. C. W. Feskens, and T. J. H. M. Eggen. 2015. “Effects of Feedback in a Computer-Based Learning Environment on Students’ Learning Outcomes: A Meta-Analysis.” Review of Educational Research, 85(4), 475-511. A review of evidence on item-based feedback in e-learning that supports the benefits of explanatory feedback and explores conditions under which it is most effective.

Fong, C. J., E. A. Patall, A. C. Vasquez, and S. Stautberg. 2019. “A Meta-Analysis of Negative Feedback on Intrinsic Motivation.” Educational Psychology Review, 31: 121-162. Giving constructive criticism is often a challenging task, so this review meets a need by focusing on the effects of various feedback formats on motivation. Looks at techniques to diminish demotivating effects of negative feedback.
Hattie, J., J. Gan, and C. Brooks. 2017. “Instruction based on Feedback.” In R. E. Mayer and P. A. Alexander, Eds. Handbook of Research on Learning and Instruction. New York: Routledge. A comprehensive and detailed review of evidence-based feedback. This is one of many chapters in this handbook that focus on topics in this book.
Part 4
Evidence-Based Lessons and Games
Chapter 13
Give Effective Explanations

The Importance of Explanations
What Is a Principled Explanation?
Incorporate Questions
Incorporate Relevant Visuals
Incorporate Examples and Models
Engage Learners in Your Examples
Leverage Social Presence
Keep Explanations Concise and Focused
When to Provide Explanations
Applying Evidence-Based Techniques to Your Explanations
Since the first edition of this book, YouTube has become the go-to resource for explanations for individual learning and reference. Given the prevalence of explanations, building evidence-based methods into them is critical. Do you watch YouTube to learn skills or look up information? It would take you close to 200,000 years to watch the 7 billion videos on YouTube. Thirty million users refer to YouTube every day.

Of course, YouTube is only one source of explanations. Explanations are ubiquitous in training class lectures, conference presentations, and texts. Worldwide, presentations are the most common method for training adults (Bligh 2000). Some training events rely almost entirely on presentation alone. Despite calls for a more learner-centered curriculum, explanations are still the predominant instructional technique.

This chapter reviews research that focuses on explanations used for direct instruction. Some of the guidelines are based on evidence we have reviewed in previous chapters and some are new to this chapter. All the techniques focus on ways to get your learners engaged in your presentation. When unattended, the most relevant explanation offers no benefit. Your goal is to prepare and deliver explanations that engage your learners. Explanations can be under learner control of pacing, such as a YouTube video or a textbook reading. Alternatively, explanations can be under instructional control of pacing, such as in-person lectures or conference presentations. How can you maximize the instructional benefits of your explanations?
The Importance of Explanations

Explanations are one of the most used and abused instructional methods—often lacking techniques that promote learner engagement and subsequent learning. In spite of the hype on high engagement
environments, such as games and immersive virtual worlds, presentations will continue to dominate the instructional landscape either on their own or as part of a larger instructional event. They are fast and easy to produce. They can be inexpensive and efficient to distribute via the Internet. They allow the sponsoring organization to document that the content was covered. Because direct instruction based on explanations is a useful and ubiquitous instructional method, it’s worthwhile to consider how to leverage explanations most effectively. There are plenty of books and resources full of tips and techniques for presentations. However, unlike those resources, I will emphasize principled explanations based on research evidence.
What Do You Think?

Place a check next to each statement you believe is true about explanations:

A. Asking the audience to respond to questions with clickers during an in-person lecture will improve learning.
B. Explanations should rely heavily on visuals.
C. You should assign review of explanations prior to class and use class time for interactive activities (flipped classroom).
D. Explanations should be brief.
What Is a Principled Explanation?

A principled explanation is instructional content delivered through text, graphics, or audio that typically addresses the what, when, why, and how of a work domain. To learn to use Excel for spreadsheets, you might attend a class or you might search YouTube for explanations.
Either way, typical topics might include: What is Excel? When might you use Excel? What are the basic components and functions of Excel? How do you use Excel to achieve the results you want?

In order to keep experimental explanations consistent for comparison purposes, many of the research studies we will review used computer-delivered explanations. However, some studies are classroom-based, and those that are not offer findings that can be applied to the classroom. What are the key instructional features that distinguish a principled presentation from just a presentation? They are summarized in Table 13-1.

Table 13-1. Seven Features of Effective Explanations

Engaging: Explanations intended for novice learners incorporate behavioral activities.
Visual: Explanations incorporate explanatory visuals and minimize decorative visuals.
Illustrated: Explanations include examples to demonstrate application of concepts, procedures, and principles.
Social: Explanations leverage social presence appropriate to the delivery medium.
Concise and Focused: The explanation is succinct and does not maunder.
Efficient: In a flipped classroom, explanations are delivered via computer or books, reserving classroom time for questions and activities that leverage social presence.
Sequence: Explanations may be provided after problem-solving attempts or before problem-solving assignments.
Incorporate Questions

Since the first edition of this book was published, several research studies have focused on how best to actively engage learners in your explanations. A consistent guideline to emerge from these studies is: Add questions! Let’s take a look at some specific questioning methods.
Use Clickers in the Classroom

A clicker is a hand-held device or a smartphone that allows learners to select a number or letter in response to a question. Typically, the instructor will project a multiple-choice question on a slide. After most students have made their selections, the instructor projects the aggregated responses in the form of a bar or pie chart and facilitates a discussion about the various options.

Do we have evidence for the benefits of clickers? Mayer and others (2009) compared learning from three groups taking a college-level educational psychology class. One group responded to two to four questions per lecture with a clicker. After reviewing the group responses, the instructor discussed the reasons for the correct answer. A second group (no clicker group) received the same questions but did not make any overt response. A control group attended the lectures with no questions. The clicker group gained approximately one-third of a grade point more than the other two groups, which did not differ from one another. In this experiment, a behavioral response led to better psychological activity than listening to questions with no response requirement. Perhaps in the no-clicker group the questions were not deeply processed since there was no active response requirement.

Wenz and others (2014) compared learning among groups in a preclinical dentistry class that either discussed questions or responded to questions via clickers. Overall, test performance was highest following clicker questions. DeLozier and Rhodes (2017) concluded that “clickers appear to either improve or fail to harm exam scores relative to equivalent time listening to class lectures or participating in class discussions.”

Virtual classrooms offer multiple opportunities for participant involvement. As you can see, the typical virtual classroom interface
offers response tools including polling, chat, whiteboard markup, and audio (Figure 13-1). Of course, as with any technology, instructors can ignore these features and deliver a passive explanation. The effect can be soporific. In response, the audience can and will easily and unobtrusively minimize the virtual classroom window and focus on other activities.

Figure 13-1. The Virtual Classroom Offers Many Opportunities for Overt Engagement
On the basis of positive evidence summarized here, I offer the following guideline.
Explanations Guideline 1

Use response technology such as clickers during instructional explanations.
Induce Self-Explanations of Content

A 2018 meta-analysis reviewed multiple studies comparing learning from lessons in which self-explanation questions were added to
instructional explanations with learning from the same lessons that included no additional questions. The research team reported a healthy effect size of 0.55 (Bisra et al. 2018). Not surprisingly, the effects of self-explanation questions were greater (0.67) when the comparison group had no additional assignment after the explanation. However, even when the comparison group reviewed instructor-provided explanations (rather than providing their own), learner-generated explanations resulted in an effect size of 0.35. Having to generate their own self-explanations promotes germane cognitive load, which benefits learning. The research team found that self-explaining was effective regardless of the timing of questions (before, during, or after explanations), content specificity, and question format. The team concluded that “research on self-explanation has arrived at a stage where the efficacy of the learning strategy is established across a range of situations. The most powerful application of self-explanation may arise after learners have made an initial explanation and then are prompted to revise it when new information highlights gaps or errors.”
Do All Learners Benefit From Questions After an Explanation?

In the previous paragraphs we saw that adding meaningful questions to your explanations will improve learning. However, does this benefit apply to all learners? We have preliminary evidence that novice learners gain the most benefit while those with more experience gain little. We saw a similar pattern in chapter 5 on graphics. For individuals with prior knowledge, additional support in the form of graphics or questions may not offer anything learners could not provide for themselves and in some cases interferes with what they already know.
To determine whether adding questions to explanations would have different effects on high and low prior knowledge learners, two different studies created “experts” by giving half of the students additional instructional materials. For example, in one study the basic text of 684 words and three graphics was expanded to 1,139 words and eight illustrations. This enhanced lesson was used to build higher levels of expertise compared to the basic text. Next, “experts” who received the more detailed text and “novices” who received only the basic text were given explanations with or without questions. One of these experiments focused on use of a statistical analysis program (Rey and Fischer 2013) and the other used management content (Roelle, Berthold, and Renkl 2012). In both experiments, novices profited by receiving questions but experts did not. Adding questions provides a source of external guidance, which is helpful for novices but not for those with more expertise.
Explanations Guideline 2

Maximize self-explanations by incorporating questions before, during, and after explanations, especially for learners new to the content.
Psychological Activity During Presentations

There are some presentations devoid of behavioral interactivity that are nevertheless effective for learning. In these situations, your presentation promotes psychological processing in the absence of behavioral responses. Presentations that are brief and targeted to an experienced audience motivated to learn the content can be effective without overt
audience response. For example, medical residents rated a standard one-hour lecture higher than a more interactive session that included discussion (Haidet et al. 2004). Furthermore, learning was about the same from both versions. The medical residents had sufficient context and interest to process the content of the lecture in the absence of overt activity. In summary, a well-organized, noninteractive presentation that uses engagement methods, such as visuals or rhetorical questions followed by Q&A, can be effective in brief timeframes when targeted for an experienced and focused audience.
Incorporate Relevant Visuals

Chapters 2, 5, 6, and 9 offered solid evidence regarding the power of visuals to promote learning. A useful visual is one that shows the relationships among ideas in the presentation. For example, a visual agenda in the form of a hierarchical chart, such as the example in Figure 13-2, establishes the presentation framework during the introduction and maintains audience orientation as the presentation moves from one topic to the next.

Figure 13-2. An Organizational Visual Communicates the Presentation Agenda
Incorporate Examples and Models

We saw the power of examples to accelerate expertise in Chapter 10. If your goal is to build problem-solving or other complex skills, you can dissect the appropriate behaviors by observing experts and defining specific problem-solving subskills. For example, take a look at the list of thinking skills defined for a history class in Figure 13-3. Rather than teach a series of history facts, the goal of the lessons was to teach how to review source documents to derive credible historical conclusions. The instructor used five teaching sessions to explain each strategy and to model each technique. Compared to classes that were not given explicit guidelines and examples, the experimental group wrote final test essays that were longer, more accurate, and more persuasive (De La Paz and Felton 2010).

Figure 13-3. A Partial List of Analytic Strategies to Apply to Historical Source Documents
Adapted from De La Paz and Felton (2010).
Don’t underestimate the power of skill modeling. But first be sure that you have identified appropriate skill models based on expert observations. Then spend time providing explanations and models
(demonstrations) of each subskill followed by ample practice. In other words, you can demonstrate problem-solving skills in much the same way that you would demonstrate a procedural skill. During problem-solving demonstrations, show the actions an expert would take but also incorporate their thoughts, giving learners access to expert rationale. You could use a thought bubble in multimedia learning or print materials. This technique implements a cognitive worked example, as discussed in chapter 10. The substantial evidence published regarding visuals and examples supports the following guideline.
Explanations Guideline 3

Optimize the learning value of explanations by adding relevant visuals and examples.
Engage Learners in Your Examples

We reviewed ways to make your examples interactive in Chapter 10. Here is a review of those five techniques:

1. Add self-explanation questions. As you present an example, add questions linked to one or more steps in your illustration that require the learner to attend to the example and process it at a meaningful level.

2. Use completion examples. After showing one or two complete examples, you can provide additional examples that omit one or more of the final steps for the learner to finish.

3. Ask learners to compare examples. This technique might be especially useful when your goal is to discriminate among two or more related approaches to a problem or instances of a concept. Display two examples and ask learners to identify the similarities and differences between them. For example, in a management class, you may show two videos—one illustrating a directive approach to management and the other a facilitative approach. After viewing each video, learners can construct a list of differences as well as discuss when one approach might be more effective than another.

4. Ask learners to compare their solution approaches with an expert approach. Gadgil and others (2012) identified students who held an inaccurate view of how the heart circulates blood. The research goal was to determine how best to correct these misconceptions. Half of these students were shown their own drawing illustrating an incorrect model next to a correct drawing and asked to write a comparison of the two. The other half reviewed and explained a correct model. The learners who compared their flawed model to a correct model learned more. This research suggests that learning is better when learners actively compare their own steps or products with expert steps or products.

5. Initiate an explanation with a discussion of an example or counterexample. Figure 13-4 is a slide from an explanation I give on evidence-based training. The screenshot links to a brief multimedia lesson on Excel, which violates most of the major evidence-based guidelines regarding use of text, audio, and visuals. I show this counterexample demonstration during my introduction and ask attendees to grade it and to discuss their grade in a small group. In a large face-to-face session, the discussion takes place in small buzz groups; in a virtual session, participants talk in breakout rooms. Alternatively, you could make a similar assignment as a prework exercise. Use an activity similar to this one to launch an explanation relevant to your instructional goals.

Figure 13-4. A Counter Example Serves As a Kickoff Activity
Explanations Guideline 4

Incorporate opportunities for overt engagement with examples before and during explanations.
Leverage Social Presence

As reviewed in chapter 8, humans have an ancient imperative to learn from observation and talking with other humans. Our social nature is an evolutionary feature that can be profitably leveraged in instruction.
Social presence in the classroom arises from communication between the audience and the instructor as well as among audience members. The instructor should leverage social presence by looking and sounding approachable rather than appearing to be on a pedestal. An available instructor is one of the features of any learning event shown to correlate with higher course ratings (Sitzmann et al. 2008) and with better learning (Mayer 2009). A win-win! Specifically, the instructor should use a conversational tone and language, smile and maintain eye contact in a physical setting, speak to individual learners in smaller settings, reveal their own opinions or experiences relevant to the content, invite questions and comments, and encourage interactions among the attendees, such as brief buzz groups. In short, audience ratings and learning will be higher when the instructor is a learning host. A good host makes the guests feel comfortable with them and with other guests. You need not have an in-person class to leverage social presence. Chapter 8 reviewed the personalization effect in detail and found you can improve learning by using first- and second-person language and online learning agents. The most recent evidence suggests that agents are most effective when they stimulate human social responses through realistic gestures and eye contact. Based on consistent evidence regarding learning and motivational value of social presence, implement the following guideline.
Guideline 5

Make learning events personable through use of conversational language, collaborative engagement, and on-screen agents.
Keep Explanations Concise and Focused

YouTube videos are among the most widely used sources of explanations on the Internet. How long do you think the typical YouTube video lasts? According to the Pew Research Center, the average length is four minutes. Data from attention and vigilance studies suggests that mental focus drops after 10 minutes (Hattie and Yates 2014). Naturally, the audience background and learning goals will moderate the optimal time for explanations. However, the bottom-line message is: keep your explanations short. Try for explanations of 15 minutes or less, write short chapters, or design e-learning lessons of 10 minutes or less!

Of course, most training classes are based on increments of hours and days. Therefore, the wise instructor or instructional designer mixes it up. For example, an introductory exercise is followed by an explanation of 10 minutes, interactive modeling examples, and then a practice activity. Breaks are inserted at regular intervals. Instructor-led sessions embed a choreography of activity, ranging from listening to movement into small groups, and then back into a larger group setting.

Often you will need more than 10-15 minutes to provide sufficient explanations to support the instructional goal. A proven technique that you may already use is to chunk the explanation into short segments, thus abbreviating the amount of content learners receive at once. In multimedia lessons, Mayer (2017) reports a high effect size of 1.0 from segmented compared to unsegmented lessons.
Guideline 6

Apply various techniques to keep explanations brief.
When to Provide Explanations

In traditional training plans, explanations are provided early in the lesson, typically followed by worked examples and practice with feedback. However, consider two alternatives: assigning explanations before a learning event (the flipped classroom) or placing them after a problem-solving attempt rather than before it.
Flipped Classrooms

Recognizing that lecturing during a classroom learning event does not leverage its social presence potential for active learning, some instructors have reversed the normal sequence. In a flipped classroom, learners review explanations on their own prior to attending a class session. The explanations may take the form of prerecorded video lectures, text readings, or online lessons. In-class time is then spent on working problems, discussions, or collaborative projects.

In a review of flipped classrooms, DeLozier and Rhodes (2017) note that there is little direct evidence regarding learning outcomes in a flipped versus traditional lecture-based classroom. Research comparing the form of pre-session explanations, such as video lectures, virtual lectures, or text-based readings, appears to result in similar outcomes. Following prework, flipped classrooms’ in-class activities vary, including clicker questions, pair-and-share activities, group discussions, and student presentations.

A challenge in some settings is learner compliance and completion of pre-class explanations. I’ve found that most learners do not complete prework; only in the classroom setting, removed from the distractions of work assignments, do learners focus their attention on the content. In contrast, perhaps in your setting there are opportunities to promote and monitor pre-class assignment completions. We will need more evidence on flipped classrooms before making definitive recommendations.
Problem Solving First or Explanations First?

Instructional scientists have investigated the potential of starting instruction with a problem-solving assignment followed by an explanation. This approach is variously called productive failure, invention learning, or desirable difficulties. The proposed benefits of starting with a problem include:
• activating prior knowledge related to new skills
• combating student perceptions that the content is easy to learn
• creating a moment of need, making students more receptive to explanations.

Do you think that starting instruction with a problem will lead to better learning? Weaver and others (2018) compared learning physics principles among university students who either worked to solve a problem prior to an explanation or first received the explanation and then solved the problem. Those in the problem-first groups scored higher on conceptual questions. The research team also compared the effects of problem solving in either a collaborative group or alone. They found that those working in a small group did better on solving the problem, although both solo and group problem solving led to the same post-class learning. Those in collaborative groups indicated better motivation, giving the lesson higher interest and enjoyment ratings. The research team concluded, “These findings demonstrate that exploring prior to lecture benefits conceptual understanding. The opportunity to activate prior knowledge, identify their own knowledge gaps, generate questions, and test alternative strategies likely made exploratory learning a desirable difficulty and sparked a ‘need to know,’ which prepared students to learn at a deeper conceptual level.”
Not all studies have demonstrated that problem solving first leads to better delayed learning. Three independent classroom studies measuring delayed transfer learning of math principles among middle-school students found no general superiority of problem solving followed by instruction compared to instruction followed by problem solving (Loibl and Rummel 2014; Loibl et al. 2017; Likourezos and Kalyuga 2017). Loibl and Rummel (2014) report that regardless of sequence, best learning occurred when instructors showed several erroneous solution attempts illustrating typical student errors followed by a correct solution. At this stage, we will need more evidence indicating the conditions under which varied sequences of problem solving and explanations are most effective. Meanwhile, you can try either sequence and decide what leads to better learning and motivation among your learners.
The Bottom Line

Now that you have reviewed the evidence, let’s return to our initial questions regarding explanations:

A. Asking the audience to use clickers to respond to questions during a classroom lecture will improve learning. True. Using clickers is one technique proven to promote learning during a lecture. There are many others that you can consider, such as show of hands, buzz sessions, polling, and chat in virtual settings. Techniques such as clickers or virtual polls that allow the class to view the collective responses, compare with their own responses, and engage in discussion with the instructor are likely the most valuable, although we could use more research comparing alternative responses, such as a show of hands.
B. Explanations should rely heavily on visuals. True. We have seen the value of visuals to illustrate explanations in chapters 2 and 5. However, for maximum benefit, the visuals should be relevant rather than decorative, simple rather than complex, and used more extensively in explanations for novice learners.

C. You should assign a review of explanations prior to class and use class time for interactive activities (flipped classroom). Maybe. The idea to reserve class time for engagement activities is a good one. However, in many contexts, learners may not allocate time to complete prework.

D. Explanations should be brief. True. Remember that the human adult attention span is about 10-15 minutes. Brief explanations are especially important in settings in which explanations are presented outside the pacing control of the learner, such as a classroom or video explanation, rather than a text reading that the learner can pace for themselves. Explanations should be concise and organized to make relevance salient from the start. Segment longer explanations into chunks, allowing time for learners to absorb new content. Brief explanations can be punctuated by inserted questions, discussions, project work, and breaks.
Applying Evidence-Based Techniques to Your Explanations

Use this checklist as a guide when developing and reviewing explanations in the form of lectures, YouTube presentations, or textbook readings. Add your own lessons learned to this list:
☐ The explanation is accompanied by questions that stimulate job-relevant processing of the content.
☐ The explanation is illustrated by relevant visuals.
☐ The explanation is supplemented by interactive examples and models.
☐ The explanation is concise and incorporates attention boosters, such as questions, short discussions, and breaks.
☐ The speaker invites social presence through a conversational approach, informal body language, and responses to questions.
☐ The goals of the presentation are realistic and achievable within the time frame allowed.
☐ Explanations are assigned prior to classroom meetings to maximize engagement opportunities in the classroom (flipped classroom).
☐ Examples of failed or suboptimal student attempts to solve problems followed by a correct solution are presented prior to an explanation.
Coming Next

Work tasks can be roughly divided into procedural—step-by-step—tasks or strategic tasks that require problem solving. Chapter 14 offers guidelines and examples regarding teaching procedures.

FOR MORE INFORMATION

Clark, R.C., and A. Kwinn. 2007. The New Virtual Classroom. San Francisco: Pfeiffer. We included a number of techniques to support high engagement in the virtual classroom in this book.
DeLozier, S.J., and M.G. Rhodes. 2017. “Flipped classrooms: A review of key ideas and recommendations for practice.” Educational Psychology Review, 29, 141-151. If the idea of a flipped classroom appeals to you, take a look at this review article that gives information regarding pre-class and during-class activities in different implementations of flipped classrooms.

Hattie, J., and G. Yates. 2014. “Chapter 6: The recitation method and the nature of classroom learning” and “Chapter 9: Acquiring complex skills through social modeling and explicit teaching.” In Visible Learning and the Science of How We Learn. New York: Routledge. I recommend this book to all instructional professionals as it includes a synthesis of evidence that focuses on a number of the topics I’ve outlined in this book.

Wittwer, J., and A. Renkl. 2008. “Why instructional explanations often do not work: A framework for understanding the effectiveness of instructional explanations.” Educational Psychologist, 43, 49-64. I found this a very helpful review of specific techniques that can make explanations effective.
Chapter 14
Teaching Procedures

What Are Procedures?
The Anatomy of a Directive Lesson
Performance Support for Procedures
Reference Support for Procedural Tasks
Problem Solving or Explanations First?
Applying Methods for Teaching Procedures in Your Training
Give the learner immediate feedback.
Break down the task into small steps.
Repeat the directions as many times as possible.
Work from the most simple to the most complex tasks.
Give positive reinforcement.
—from The Technology of Teaching
Sound familiar? These instructional prescriptions were written more than 50 years ago by B.F. Skinner, the father of behaviorist psychology. Behaviorism is the foundation for the core instructional methods to teach procedures using a directive design. In this chapter we will draw on the proven guidelines reviewed in previous chapters to apply them to training lessons that use a directive approach most appropriate for teaching procedures. Specifically, we will review the components of and evidence for effective directive lessons. We will also look at recent research on design of performance support for procedural tasks.
What Are Procedures?

Procedures are routine tasks that are performed more or less the same way each time. Some examples include many software tasks, taking routine customer orders, taking a blood pressure, and equipment start-up tasks. Contrast procedures with nonroutine or strategic tasks that require problem solving and adjustment for effective results. Some examples of strategic tasks include troubleshooting unusual failures, many sales and customer service tasks, and diagnostic problem solving, to name a few. Many jobs include a combination of procedural and strategic tasks. Routine operations may rely on procedures, while unusual occurrences depend on adaptation of guidelines. Sometimes similar tasks may be
trained as procedures or as strategic tasks, depending on the organizational context. For example, fast food restaurants generally have a high turnover of relatively low-skilled staff. Their goal is a fast, consistent, and safe product. They rely on a procedural approach to food preparation based on a combination of training and performance aids. In contrast, chef training takes a more strategic approach by teaching the economics, aesthetics, and safety of food preparation. The trained chef can create unique and cost-effective menu options that maximize flavor and appearance using available ingredients. In the next chapter, we will focus on techniques for teaching strategic tasks that involve judgment and problem solving. The best approach to teaching procedural skills is a directive design. Direct instruction includes three core elements: explanations, as summarized in the previous chapter; demonstrations of skills; and student practice with feedback.
What Do You Think?

Place a check next to each statement you believe is true about directive lessons. Hint: If you have read previous chapters, you should know some of the answers.

☐ A. Learning is better when topics are presented in small chunks.
☐ B. Learning is better when facts or concepts related to a procedure are sequenced prior to task steps.
☐ C. Performance support (job aids) for procedures is most effective when steps are presented visually rather than in text.
☐ D. Learning is better when practice exercises are distributed throughout directive lessons.
☐ E. Asking learners to try to invent a procedure prior to demonstrating the procedure results in better delayed learning.
The Anatomy of a Directive Lesson

Directive lessons are divided into three main parts: introduction, supporting topics with practice and feedback, and procedural demonstrations with practice and feedback. Note that for some purposes, elements of a procedural lesson may be used as performance support linked to a software or hardware task. For the purpose of performance support, one or more of the three elements summarized here may be truncated or even omitted completely.
Lesson Introduction

The lesson introduction is critical to set the stage for learning and motivation. Don’t shortchange it. You need to accomplish several goals illustrated in the Excel sample classroom lesson handout (Figure 14-1). First, learners need to know the anticipated outcome and road map of the lesson. The learning objectives and lesson overview can serve this purpose. Second, the learners need to see the relevance and work-related context for what they are about to learn. For an Excel lesson, use a simple demonstration illustrating how a formula automatically updates calculation results when spreadsheet data is changed. A good technique for the relevance and context portion of the introduction is to show the benefits of the lesson skills in a work setting. This can be through a “What Went Wrong” or a “What Went Right” demonstration, scenario, video, or data. Third, the introduction should activate relevant prior knowledge stored in long-term memory. This can be accomplished by starting the lesson with
some review questions from prior lessons, presenting an analogy, or initiating a discussion about participants’ previous experiences related to the instructional goal. For example, in a management lesson on using Excel for budgeting, an introductory discussion could focus on previous budgeting challenges either at home or in a work context.

Figure 14-1. An Introduction Page in an Excel Training Manual
Teach Supporting Topics Early in Lesson

One of the characteristic features of a directive design is the sequencing of the main supporting topics prior to the steps to perform a task. As you can see in the Order of Topics at the bottom of Figure 14-1, the Excel lesson sequenced two main supporting topics: cell references and formula formats prior to the steps of entering a formula and creating a chart. Most main supporting topics involve “What Is It?” content along with any facts related to that topic. “What Is It?” knowledge is best taught by tell (give an explanation usually in the form of a definition or description), show (give some examples), followed by practice (identify a valid example from samples or produce a valid example) and feedback. You can see the workbook pages for the “What Is It?” lesson topic formula formats in Figure 14-2. The explanation summarizes the key elements of a formula. The three examples incorporate all of the formula elements in various legal combinations. Remember from chapter 10, evidence recommends adding questions to examples to ensure their processing. I use that technique here in the section labeled Format Questions. The goal of these questions is to encourage learners to review the examples carefully and induce some critical rules, such as “all formulas begin with an equal sign.” Following the tell and show sections, I added some important rules (facts) about order of operations along with a mnemonic to help recall the correct sequence.
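To make the order-of-operations rule concrete, a brief worked example of the kind that could accompany the handout is shown below. The cell values here are hypothetical and are not drawn from the lesson figures.

Suppose A3 = 10, B4 = 20, and C6 = 4.
= A3 + B4/C6 returns 15, because the division (20/4 = 5) is performed before the addition.
= (A3 + B4)/C6 returns 7.5, because the parentheses force the addition (10 + 20 = 30) to occur first.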
Figure 14-2. Handout for Excel Supporting Topic
Figure 14-3. Practice for Excel Supporting Topic
Exercise 1-2 shown in Figure 14-3 requires learners to use spreadsheet cell values to manually calculate formula outcomes as well as to construct some formulas to achieve assigned calculation goals. This exercise requires learners to apply an understanding of how formulas work—not just parrot back the information given in the tell and show sections. In the classroom, the instructor allocates a few minutes to the exercise and when it is clear that most are finished, asks pairs of participants to compare their answers and resolve any discrepancies. The instructor then shows all correct responses and answers any questions. While participants are working on the exercise, the instructor moves
around the room checking for any misconceptions and helping individuals as needed. In the virtual classroom version of the lesson, the instructor adds engagement by summarizing the main components of a formula and asking learners to type into chat the common elements in the structure of all formulas based on the two examples displayed on the screen (Figure 14-4).

Figure 14-4. A Virtual Classroom Lesson on Excel Formulas
Evidence for Sequencing Supporting Topics First

What evidence do we have for the learning benefits of sequencing the supporting knowledge prior to the major lesson task? Mayer and his colleagues (2002) created two lesson versions on how brakes work. One version gave a multimedia explanation of how brakes work. A second version included the same explanation but preceded it by a short description of each part as illustrated in Figure 14-5. Learners who received the part explanations before the full description scored higher on a problem-solving test, with a high effect size of 0.9.
Figure 14-5. A Topics-First Lesson Begins With an Explanation of Each Part
This is the Piston in the Master Cylinder. It can either move back or forward.
From Mayer, Mathias, and Wetzell (2002).
Lessons that fold all of the knowledge topics along with the steps into one meaty explanation barrage the learner with a great deal of information all at once. By teaching key concepts first, the amount of new information the learner must acquire all at once is greatly reduced. This sequencing helps mitigate mental overload.
Teach Supporting Topics in Context

To help learners see the relationships between the parts and the whole, always teach supporting topics in the context of the whole task. For example, in Figure 14-5 each individual part is explained in a visual that shows the structure of the entire brake. Likewise, in Figure 14-3, the formula practice exercise makes use of a spreadsheet example. If you teach a series of supporting topics out of context, the result can be fragmented knowledge and confusion regarding the relations between task elements and the whole task.
Teach Procedural Steps

So far we have seen how to develop the lesson introduction followed by the key supporting topics. The next part of a procedural lesson uses tell, show, and practice with feedback to teach the steps of the procedure. Take a look at the example in Figure 14-6 showing part of an asynchronous demonstration of how to enter a formula in Excel. The main instructional method for teaching tasks is a follow-along demonstration given by the instructor in the classroom or shown on the screen in multimedia lessons.

Figure 14-6. An Animated Demonstration of Inputting a Cell Formula Described by Audio Narration
Demonstrate the Procedural Steps

In chapter 6, we reviewed evidence-based guidelines for optimal use of animations. Evidence shows that brief animations described with audio are the best way to demonstrate procedures in multimedia environments. The animations could be rendered as computer-generated
drawings, screen captures, video, or instructor-led demonstrations. Simpler drawings impose less mental load and might be preferable for complex procedures or novice learners. Whatever form of animation you use in multimedia, include pause and replay buttons. Because learners may not make use of these controls, consider pausing the video after several steps and requiring the learner to continue the demonstration with a play button. Be sure to orient the animation from the visual perspective of the performer—that is, illustrate with an over-the-shoulder depiction as shown in Figure 14-7. Note the inclusion of a series of still shots under the running video. This technique—useful for physical procedures—shows periodic still shots from the video capturing the main stages in the procedure. Each shot is described with audio narration. All of these techniques—inserted pauses, over-the-shoulder shots, and added still captures—should help manage potential overload from video demonstrations.

Figure 14-7. An Animation of Origami Folding
Used with permission of Chopeta Lyons.
Provide Job Aids

For reference purposes, the learner benefits from a documented summary of the steps. The best documentation during initial learning includes a visual of the work interface (screen or equipment) with text captions placed close to the relevant portion of the visual as shown in Figure 14-8. For tasks that will be repeated many times in training and on the job, a condensed working aid such as the example in Figure 14-9 may suffice.

Figure 14-8. Handout for Excel Lesson Procedure
Figure 14-9. A Working Aid for the Excel Lesson
FORMULAS
• Start with equal sign
• Operators: + - * /
• Order of operations: PDMAS
Examples:
= A3 + B4/C6
= A3 / (B4 - D8)

ENTERING FORMULAS
1. Enter data in cells
2. Click in cell where result should appear
3. Type in formula
4. Press Enter key
Assign Practice

Following an interactive demonstration, practice exercises require learners to apply the same steps to some new scenarios or data sets. The practice for the Excel lesson in a virtual classroom is shown in Figure 14-10. This practice illustrates a spiral technique in which the formulas constructed in the initial supporting topics part of the lesson are reused with a different data set. In part 2 of this practice, the benefit of formulas is reinforced by asking learners to change data values in the spreadsheet and document the results.

Figure 14-10. Practice Assignment in Virtual Classroom
Two important evidence-based techniques that we reviewed in chapter 11 are the spacing effect and interleaved sequencing of practice. Based on the spacing effect, you should intersperse practice opportunities throughout an instructional event by sequencing practice among your lessons or by incorporating practice throughout a learning process using different delivery media. Based on the interleaving effect, you should sequence exercises on different concepts together rather than blocking practice based on topics. For example, for Excel formula construction, include exercises that require multiplication, addition, division, and subtraction rather than exercises that require multiplication followed by exercises that require division, and so on.
When to Use Drill and Practice

In some situations, workers must be able to perform a procedure on the job quickly and accurately without the benefit of a working aid. These procedures may involve high-risk tasks, such as landing an airplane, or, alternatively, require a rapid response, such as when learning a new language. In other situations, the overall task is quite complex and only by automating lower-level subtasks can the entire task be performed. For example, driving a car involves a number of subskills, many of which must be learned to automaticity to permit the focus of attention onto the traffic and driving conditions. Use drill and practice exercises in these situations. Drill and practice requires learners to perform the procedure many times until it becomes automatic. Recall from chapters 3 and 11 that once automatic, the procedure is stored in memory in a way that can be accessed and executed with minimal load on working memory. For example, automated typing allows me to think about the composition
rather than the mechanics of typing. Computer simulations and games may be useful for drill and practice exercises as the computer program can measure both accuracy and speed of response and assign points accordingly. In an adaptive exercise or game, accuracy and response time can be measured and practice continued until automaticity is reached. Because drill and practice can become rather boring, some types of repetitive exercises can be embedded in a game format. We will discuss more about games in chapter 16.
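For readers who build their own e-learning, the adaptive logic described above can be quite simple. The sketch below is my own illustration, not code from any product mentioned in this book; the drill items, the five-answer streak criterion, and the three-second speed target are all assumptions chosen for the example.

# Minimal sketch of an adaptive drill loop: practice continues until the
# learner produces several consecutive responses that are both correct and
# fast enough to suggest automaticity.
import random
import time

ITEMS = [("7 x 8", "56"), ("6 x 9", "54"), ("12 x 4", "48")]  # illustrative drill items
STREAK_NEEDED = 5    # assumed mastery criterion: five fast, correct answers in a row
MAX_SECONDS = 3.0    # assumed speed target for an "automatic" response

def run_drill():
    streak = 0
    while streak < STREAK_NEEDED:
        prompt, answer = random.choice(ITEMS)
        start = time.time()
        response = input(prompt + " = ")
        elapsed = time.time() - start
        if response.strip() == answer and elapsed <= MAX_SECONDS:
            streak += 1    # fast and accurate: counts toward the automaticity criterion
        else:
            streak = 0     # slow or wrong: the streak starts over
    print("Automaticity criterion reached.")

# run_drill() would be called when the drill screen loads.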
Give Feedback on Practice Outcomes

Is your feedback effective? You may recall from chapter 12 that suboptimal feedback often actually impedes learning. To maximize the value of feedback in directive lessons, apply the following guidelines:
• Focus on progress of the individual over repetitions; avoid comparisons with other performers.
• Give immediate feedback when the consequence of an error on one step will affect the performance of future steps and the ultimate result.
• Provide explanatory feedback that tells why a given answer is correct or incorrect.
• Focus on specific techniques to improve performance—not just the final result or answer.
• Provide instructors access to learner feedback so they can adjust training as needed.
Performance Support for Procedures

Some tasks may not require training. Many procedural tasks have been simplified by technological advances in software. For example, e-learning authoring systems used to require considerable knowledge
of coding. Most contemporary authoring systems use click or drag-and-drop actions based on familiar software such as PowerPoint. In other situations, a particular task is performed only once or infrequently. Training to perform such tasks may be replaced by some form of performance support or job aid. Performance support is a set of directions or demonstrations that guide the worker through a procedure with no intention that the steps will be learned. Furniture assembly directions are good examples. Once the furniture is assembled, the task will hopefully not be repeated. Training would be a wasted effort. What is the best way to provide performance support for a procedural assembly task? In chapter 6, we reviewed research by Watson and others (2010) comparing time to assemble an artificial device using three forms of performance support: text directions, a series of still photos with no words, and an animation with no words. You can revisit the results in Figure 6-1. On the first build, the animated displays (with no words) led to 56 percent faster build times. By the second build, the animated and still photographs supported the same build times—both faster than text. By the third build, all three formats resulted in equivalent build times. It is likely that by the third build the procedure was learned and there was minimal reliance on the directions. We need more studies to confirm these results, but based on this data, visual representations were the most efficient format for performance support of a manual assembly task.
Reference Support for Procedural Tasks

Evidence is mixed about the learning value of taking notes. No doubt potential benefits depend on the rate of content delivery, the level
of learner control over the delivery medium, note-taking skills, and learner familiarity with the content. Marsh and Sink (2010) found that students preferred receiving handouts, and learning was better when handouts were provided. Rather than devoting attention to taking notes, learners can process the tell and show parts of the training and invest mental resources in practice exercises. Because training time in adult learning settings is generally limited, I recommend providing relatively detailed handouts or online references—at least when the content is stable and the quality and consistency of the training is important. Memory support aids can be reproduced on paper or in software for display on mobile devices or embedded in the application. Many software producers incorporate step-by-step help screens available adjacent to the application as performance support. Increasingly, documentation and training staff work together to ensure consistency and integration of documentation into hardware, software, and training.
Problem Solving or Explanations First?

Chapter 13 reviewed evidence from several studies that evaluated different sequences of instruction. These studies compared conceptual and procedural learning from groups that started with problem solving followed by explanations versus groups that started with an explanation followed by problem-solving practice. Weaver and others (2018) reported better conceptual learning and motivation among learners who initiated physics lessons with attempts at problem solving. They found no differences in learning of solution procedures. In contrast, Loibl and Rummel (2014) found that procedural skills were learned better when the lesson started with an explanation followed by practice. Starting with a demonstration was a more efficient use of time for
learning procedures. Rather than being spent on attempts to invent a procedure, that time was more effectively used applying demonstrated steps to practice problems. In summary, starting a lesson by asking learners to invent a procedure to solve a problem does not have consistent support. Based on evidence to date, I recommend following the sequence we have reviewed in this chapter: Provide an explanation and demonstration followed by practice and feedback.
The Bottom Line

Now that you have reviewed the evidence, check off each statement you believe is true:

☐ A. Learning is better when topics are presented in small chunks.
☐ B. Learning is better when facts or concepts related to a procedure are sequenced prior to task steps.
☐ C. Performance support (job aids) for procedures is most effective when steps are presented visually rather than in text.
☐ D. Learning is better when practice exercises are distributed throughout directive lessons.
☐ E. Asking learners to try to invent a procedure prior to demonstrating the procedure results in better delayed learning.

All of the statements except E are true. By chunking the content and sequencing key knowledge topics first you minimize mental overload. It’s important, however, that each topic reflects the work context so the learner can see its relationship to the lesson task.
We need more research on formats for performance support. However, one study shows that diagrams—animated or still—led to a more efficient first build compared to text descriptions. After three or more builds, though, the advantage of visuals disappeared as the procedure became familiar through repetition. The spacing effect is a classic principle of instructional psychology. Find ways to distribute practice opportunities among and within learning events. Remember that the benefits of spacing apply primarily to delayed learning and may not be seen during the instruction. Despite recent enthusiasm, starting a lesson with a problem-solving invention activity has not been shown to be especially effective for learning procedures. A more efficient use of instructional time is to assign practice exercises after reviewing a demonstration.
Applying Methods for Teaching Procedures in Your Training

Consider the following guidelines when faced with helping learners acquire procedural skills:

☐ Use a directive approach for procedural tasks or novice learners.
☐ Focus the lesson on a job task plus associated knowledge topics.
☐ Segment content into brief topics followed by practice with feedback.
☐ Sequence knowledge topics prior to task steps.
☐ Make the work context of the lesson salient in the introduction and throughout the lesson.
☐ Use a “tell-show-practice-feedback” pattern in each lesson.
☐ Use visuals described by audio or adjacent text for demonstrations.
☐ Plan application-level practice exercises distributed throughout the lesson.
☐ When practical, use practice outcomes from the knowledge topics in procedural practice assignments (spiral design).
☐ Assign mixed (interleaved) practice exercises that require learners to apply different solution steps to resolve problems.
☐ Provide prompt explanatory feedback that focuses learners on their progress and on how to improve outcomes.
☐ Include working aids to guide performance during and after training.
Coming Next

Now that we’ve looked at ways to teach procedural skills, we will turn to instructional methods best suited for strategic skills that involve problem solving or critical thinking. The next chapter will focus on scenario-based learning designs.

FOR MORE INFORMATION

Clark, R.C. 2008. Developing Technical Training. San Francisco: Pfeiffer. This is the first book I wrote based on some of the early work of Drs. David Merrill and Robert Horn. I believe the guidelines for teaching facts, concepts, procedures, processes, and principles are valid today.
Chapter 15
Teaching Critical Thinking With Problem Scenarios

Can Critical Thinking Be Taught?
Starting Instruction With a Problem
What Is Scenario-Based Learning?
When to Consider Scenario-Based Learning
The Anatomy of a Scenario-Based Lesson
Media in Scenario-Based Learning Environments
Challenges With Scenario-Based Lessons
Applying Scenario-Based Learning to Your Training
Confronting a realistic but unfamiliar problem or situation creates a moment of need for learning. Faced with a task or challenge we must resolve, we are most open to acquiring the knowledge and skills required to respond. And when that task or problem is clearly work related, we are engaged by the relevance of the exercise. This is the motivational power of scenario-based learning environments in which an authentic work problem initiates and drives learning. This chapter applies proven guidelines to design learning environments that use job-relevant scenarios as the engine for learning, specifically:
• evidence that critical thinking skills can be trained
• evidence for starting a lesson with a problem versus starting a lesson with an explanation
• the components of scenario-based lessons
• research on what media to use in scenario-based lessons.
Can Critical Thinking Be Taught?

Critical thinking (CT) requires judgment that involves any combination of the following skills: interpretation, analysis, evaluation, and inference. Critical thinking is an essential part of many diverse job tasks, such as sales, management, business analysis, diagnosis, and troubleshooting. Critical thinking can involve generic skills that apply across domains or can involve context-specific skills that will vary based on the content domain. Generic CT skills can be trained in specific courses targeted to those generic skills. Alternatively, context-specific skills are best trained in a specific subject domain course, such as medicine, engineering, or business management. Do we have any evidence regarding the success of CT instruction and what instructional approaches work best?
Abrami and others (2015) reported a meta-analysis of 341 effect sizes from experiments that evaluated CT courses. They reported an overall effect size of 0.30—a positive but low effect. There were no significant differences in educational levels, subject matter, or duration of the training among the studies they analyzed. In contrast, among 97 effect sizes from experiments that focused on content-specific CT skills, a moderate effect size of 0.57 was reported. Based on their analysis, better outcomes will be realized from content-specific CT skills training embedded in domain-specific training compared to generic CT skills trained independently of work domains. In comparing instructional methods used in CT courses, they found that inclusion of discussion, authentic problems and examples, and mentoring was associated with the best learning. The research team concluded that dialogue, authentic instruction, and mentorship are effective techniques for building CT skills. In summary, evidence supports the notion that training—especially context-specific critical thinking training that includes discussion and authentic problems—can successfully build CT skills.
Starting Instruction With a Problem

Chapters 13 and 14 reviewed research comparing conceptual and procedural learning from math or science lessons that either started with a problem assignment followed by an explanation or started with an explanation followed by a problem. Weaver and others (2018) reported that those in the problem-first group in an introductory physics course struggled and scored lower on the problem activity than those who received the explanation first. However, those who worked on the problem first scored higher on conceptual test questions and about the same on procedural questions as those who started with an explanation.
Loibl and others (2017) reviewed 20 separate research experiments that varied the sequence of problem solving and instruction and reported that starting with a problem fosters learning only under specific design conditions. They recommended that an initial problem-solving phase involve analyzing contrasting scenarios or a problem related to the lesson guidelines. After problem solving, the instruction should build on student solution attempts (which are likely flawed or incomplete). They suggested that starting with a problem can be effective for conceptual knowledge but not procedures.
What Is Scenario-Based Learning?

We learned from the Abrami meta-analysis that domain-specific critical thinking skills training that includes scenarios benefits critical thinking outcomes. Scenario-based learning, also called problem-based, exploratory, or immersive learning, is a popular approach. Scenario-based learning is a preplanned, guided inductive learning environment designed to accelerate expertise. For workforce learning purposes, the learner assumes the role of a staff member responding to a work-realistic assignment or challenge, and the scenario in turn responds to reflect the learner’s choices. Scenarios may be presented as a pre-lesson exercise to be followed by a traditional explanation. Alternatively, scenarios may serve as the framework for a lesson with explanations embedded to provide guidance. Scenario-based lesson designs can be delivered in instructor-led in-person or virtual classrooms as well as via asynchronous multimedia. Let’s review this definition in more detail using the multimedia scenario-based lesson illustrated in Figure 15-1.
Figure 15-1. A Simulated Repair Shop to Build Troubleshooting Skills
Used with permission of Raytheon Professional Services.
The Learner Responds to a Job-Realistic Situation

In the automotive troubleshooting lesson designed for apprentice technicians, the learner has access to testing devices in a virtual shop and is assigned a work order. As in real life, diagnostic tools are used to gather data, define the failure, and recommend a repair. In a classroom setting, a teaching shop would include the tools needed to identify and repair automotive failures. The focus of the online lesson is on the critical thinking elements of the job—that is, what tests to run to collect data and how to interpret that data. Targeted toward experienced technicians, the lesson assumes familiarity with the hands-on aspects of running test equipment.
The Scenarios Are Pre-Planned

As with all forms of effective instruction, each scenario is defined by learning objectives that summarize desired knowledge and skill outcomes derived from a job analysis. In the automotive troubleshooting lesson, there are two objectives. One goal is to accurately define the
failure. In addition, a process objective focuses on the efficiency of the diagnostic process. Each diagnostic testing tool is tagged with a realistic use-time and some tests are irrelevant to the symptoms described on the work order. The program tracks which testing tools the learner uses and in what sequence. Therefore, learners get feedback not only on the accuracy of their repair decision but also on the process they used to define the failure.
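If you are curious how such process tracking might be scored, here is a minimal sketch of my own; it is not the actual logic of the Raytheon lesson, and the tool names, time values, and relevance list are illustrative assumptions.

# Sketch of scoring diagnostic efficiency: each tool carries an assumed
# use-time, and any test not relevant to the work order counts as wasted effort.
TOOL_MINUTES = {"scan_tool": 10, "multimeter": 5, "pressure_gauge": 15, "compression_test": 30}
RELEVANT_TESTS = {"scan_tool", "multimeter"}   # would depend on the specific failure

def process_feedback(tests_run):
    """tests_run: tool names in the order the learner used them."""
    total_minutes = sum(TOOL_MINUTES[t] for t in tests_run)
    unnecessary = [t for t in tests_run if t not in RELEVANT_TESTS]
    return {"total_minutes": total_minutes, "unnecessary_tests": unnecessary, "efficient": not unnecessary}

# Example: the learner ran an irrelevant compression test before the scan tool.
print(process_feedback(["compression_test", "scan_tool", "multimeter"]))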
An Inductive Rather Than an Instructive Approach

Traditionally, training lessons have used an instructive design. That means the instruction provides explanations typically accompanied by examples and demonstrations, as well as practice with corrective feedback. Mistakes are generally flagged sooner rather than later and corrected promptly. In contrast, scenario-based designs rely on problem-solving activity as the main source of learning. Mistakes may or may not be corrected immediately and often the learner experiences the consequences of their mistakes rather than being told. For example, in the automotive lesson, if the learner selects an incorrect repair, the failure symptom persists and the customer is unhappy. From this result, the learner infers that their solution was not correct and the analysis must be revised.
The Instruction Is Guided

We have a great deal of evidence showing that pure discovery learning is both inefficient and ineffective. To minimize the “flounder factor” that can occur in discovery-based learning, provide guidance. Three recent research analyses have evaluated the role of guidance in problem-based learning. Lazonder and Harmsen (2016) reported a meta-analysis based on 72 studies that compared the effects of
different types of guidance on learning from problem-based lessons. Some different forms of guidance in the studies they reviewed included restricting the scope of the learning task, making task progress visible, providing reminders, incorporating examples or explanations, and offering a more structured linear interface, such as a branched scenario. Overall, adding guidance improved outcomes on the problem activity itself (effect size of 0.71), as well as on learning outcomes (effect size of 0.50). Zacharia and others (2015) evaluated the benefits of guidance tools during computer-supported inquiry learning. They identified 44 tools that had been tested. Among the most promising were tools that imposed process constraints, concept map templates to aid with organization and synthesis, and feedback that tracked progress throughout the inquiry process. Kim and others (2018) reported an effect size of 0.38 for computer-based guidance in science, technology, engineering, and mathematics courses. The research team found best results from guidance in the form of expert modeling and feedback. In summary, we have ample evidence for the importance of guidance as an element of scenario-based learning designs. As an example, in the automotive troubleshooting lesson, some on-screen testing options are grayed out—thus constraining the choices. If the learner selects an irrelevant test, an on-screen message states that this test is not needed for the particular failure. In addition, as shown in Figure 15-1, the on-screen telephone located on the work cart offers context-specific assistance through a text recommendation.
The Goal Is to Accelerate Expertise

How is expertise built? Research on a variety of high-level performers in sports, games such as chess, and music has pointed to the role of extensive and focused practice over an eight- to 10-year period
(Ericsson 2006). In other words, expertise grows from focused experience, often with the support of a coach. Many work domains require considerable time to build experience because some tasks occur rarely or are too risky to learn on the job. For example, in troubleshooting, some failures may be relatively infrequent. Having the opportunity to resolve them in a compressed time period accelerates expertise. Time compression is a feature of a multimedia delivery environment where actions or decisions are made with the click of a mouse.
When to Consider Scenario-Based Learning

As we saw previously, problem-focused lessons are an effective method to build job-specific critical thinking skills. In addition, scenario-based lessons can be motivational. By starting a lesson with a real-world task assignment or dilemma, the relevance of the training is immediately salient. Scenarios with an optimal level of challenge and guidance have the potential to increase learner engagement, leading to deeper processing and better learning. A third benefit is transfer of learning. Because in scenario-based designs most learning occurs in the context of real-world tasks and problems, the cues accumulated in memory will lead to better retrieval later. When facing a new problem on the job, most experts search their memory banks for a similar situation they faced in the past. Problem-based lessons give learners an opportunity to build those memory repositories for later use. In addition to a goal of building critical thinking skills, scenario-based designs are especially useful to teach skills that are difficult to acquire on the job because of safety concerns or scarcity of real-world opportunities. Teaching new military officers to make good combat decisions is one example. Decisions can have life-or-death
consequences, and a scenario-based approach to learning offers at least a partial substitute for real-world experience. In automotive troubleshooting, some failures may not occur regularly and technicians can benefit from virtual exposure to these problems.
What Do You Think?

Place a check next to each statement you believe is true:

☐ A. Multimedia scenario-based lessons require computer simulations.
☐ B. Scenario-based lessons are more expensive to develop than traditional directive lessons.
☐ C. Scenario-based lessons should include feedback.
☐ D. Learning from scenario-based lessons is better when realistic multimedia such as video is used to portray the scenarios.
The Anatomy of a Scenario-Based Lesson

There are four essential elements of an effective scenario-based lesson: an authentic scenario or task assignment that serves as a context for learning, learner guidance while responding to and resolving the problem, feedback on problem solutions or problem-solving processes, and explicit opportunities to reflect on problem solutions. Each of these is described in a bit more detail and illustrated with the automotive troubleshooting lesson introduced in Figure 15-1.
The Scenario

Scenarios are commonly found in procedural lessons as well as in explanations. As discussed in chapter 14, in procedural lessons a problem or case study best serves as an end-of-lesson (or unit) practice
opportunity. But in the scenario-based approach, the lesson starts with a problem or scenario that serves either as an introduction to the lesson or as an ongoing context for learning. Take a look at the example of an Excel classroom lesson with some business analysis goals for Pete’s Pet Emporium (Figure 15-2).

Figure 15-2. An Excel Assignment Initiates and Drives a Scenario-Based Lesson
Designing an effective problem or scenario can be challenging, depending on the complexity of the tasks and your delivery media. It may be a matter of moving an existing practice problem to precede an explanation as illustrated in the physics and math experiments. Or it may require considerable design work if the scenario will serve as the main context for learning, as in the automotive troubleshooting lesson shown earlier. First, your scenario must require the participant to apply the key concepts and skills associated with effective job performance. In the
Excel class, the business analyst will require the use of formulas to perform calculations and charts to display data. In the troubleshooting lesson, the technician must learn the mechanical and electrical components of the automotive systems involved, which diagnostic tests might be most appropriate at a given time, and how to interpret diagnostic data to identify a likely cause of failure. As you plan your scenario, define the desired outcome and the criteria for success. These elements correspond to the action and criterion of a traditional lesson objective. Your outcome may involve a decision, actions, rationale for actions, a problem-solving path, or a product. Your criteria may be a correct answer, an answer that matches logical rationale, a decision path that is efficient and effective, solution time, or specified features of a product deliverable. For example, the Excel scenario will initially require the construction and input of accurate formulas to achieve the assigned goals. The outcome will be a correct answer since the spreadsheet incorporates specific data values. In contrast, the automotive troubleshooting scenario will require both selection of a correct diagnosis as well as an efficient logical problem-solving process in which irrelevant tests are not used. Many scenarios will require the learner to access related problem data. For example, in Figure 15-1, the simulated automotive shop offers the technician access to a variety of common diagnostic tools and tests. This part of your design will correspond to the “Givens” in your learning objective. When you do your job analysis, note the common sources of data that experts use to solve problems and plan ways to incorporate these into your lesson. Typical sources of data include documents, technical diagrams, computer programs, client interviews, or test equipment—any resource that would be normally used on the job to define and analyze the problem.
The Guidance

As we saw in previous paragraphs, evidence points to guidance as a critical success factor in scenario-based learning. One of the potential minefields in scenario-based lessons is mental overload and learner confusion leading to frustration and dropout. In the experiments that compared problems first to instruction first, learners who were assigned to solve a problem first did not solve it as well as those who solved the same problem after the explanation. Devote careful thought to the placement and type of guidance in the lessons. Instructional psychologists call this type of guidance “scaffolding.” For the initial problems in your course, provide heavy doses of guidance and gradually remove support as learning progresses. The most common types of guidance involve problem sequencing, process constraints, worked examples, and knowledge resources, such as expert models, tutorials, or references. A brief description of each follows.
Sequence Problems From Simple to Complex

The initial problem or task assignment should be the simplest instance you can build of an authentic job problem appropriate for your target audience. Easy problems will have fewer variables, relaxed constraints, straightforward solutions, and limited amounts of data to consider. For automotive troubleshooting, initial cases could involve a single system with a straightforward failure.
Impose Process Constraints

Process constraints limit learner selection options to simplify the problem-solving process. For example, a branched scenario design imposes more process constraints than a more open-ended design. The screen shot in Figure 15-3 shows a customer service branched scenario
lesson. The learner hears the customer’s comments and has a choice of three response options. Upon clicking any of the options, the learner sees and hears the customer response and receives feedback from the virtual coach in the lower left corner. Branched scenarios are especially effective for problems in which one choice leads to another and then another in a linear sequence. Note that compared to the design of the automotive troubleshooting scenario, in the branched scenario the learner makes only one decision per screen and gets immediate feedback on her response.

Figure 15-3. A Branched Scenario Immersive Lesson on Customer Service
Used with permission from VCOM 3D.
Alternate Problem Demonstrations With Assignments

Start with a demonstration (also called a worked example or expert model) of how to solve a problem. Alternatively, start with a partial demonstration in which the instruction demonstrates the first solution steps and the learner finishes it. Follow with another scenario in which the learner does more or all of the work.
Build on Learner Solutions to Introductory Problems

If you decide to initiate your lesson with a problem to solve, use student solution attempts as the basis for explaining misconceptions and illustrating correct approaches.
Offer Knowledge Resources

Some scenarios can benefit from a variety of perspectives. For example, a medical ethics scenario on discontinuance of life support provides links to virtual experts, including a member of the clergy, lawyer, ethics expert, and physician colleague. A course for new supervisors offers links to a manager, experienced supervisor, legal staff, and human resources. As another resource, learners can work on problems collaboratively. In a comparison of problems assigned before an explanation by learners working either individually or with a team, collaborative work did not result in better learning. However, learners working collaboratively gave higher ratings of interest and engagement (Weaver et al. 2018). If your problem involves documentation, consider providing a curated resource. Nievelstein and others (2011) compared learning outcomes among novice law students who worked with either a full civil code or a reduced code that included only material relevant to the case provided. Individual learners researched and wrote argumentation for a civil law case using either the complete civil code or a condensed version. Test case performance was better among those who used the condensed reference. The research team recommends: “Rather than losing precious cognitive resources on searching through large amounts of information, students’ attention can be entirely devoted to making sense of the relevant information in the code in relation to the case.” In a comparison of student-selected versus instructor-selected literature
resources, Wijnia and others (2015) recommend allowing students to select their own resources from an instructor-provided list.
The Feedback

As chapter 12 showed, all learning benefits from feedback. In scenario-based learning environments you can use two types of feedback: intrinsic and instructive. Instructive feedback described in chapter 12 informs the learners that their responses are correct or incorrect and provides an explanation as well as suggestions for improvement. The virtual coach in the lower left of Figure 15-3 offers this type of feedback. Intrinsic feedback shows the outcomes of the learners’ scenario actions or decisions as they resolve the problem. In other words, the learner responds and sees how the situation plays out for better or for worse. For example, when the learner selects a rude comment in the customer service branched scenario, the customer responds with negative body language and words. Therefore, the customer service scenario lesson includes both instructive and intrinsic feedback. Intrinsic feedback can also reveal environmental responses that are normally hidden. For example, a food handlers’ lesson scenario incorporated a germ meter that reached the danger zone when food was improperly touched. A supervisory lesson on giving performance feedback included a motivation dial to reveal the feelings of the employee receiving feedback.
The Reflection

One of the big differences between scenario-based and directive lessons is the instructional response to learner errors. Based on behaviorist roots, directive lessons attempt to minimize learner errors. When a mistake is made, the learner usually gets immediate corrective
feedback. In contrast, scenario-based course designs view mistakes as an opportunity for learning. Feedback may not come until several actions have been taken or even until the end of the case. To learn from mistakes, it is important to prompt learner reflection on what they did and what they might do differently. One powerful form of feedback that encourages reflection is an expert comparison. For example, the automotive troubleshooting lesson compares the diagnostic actions taken and time consumed by the learner with those of an expert (Figure 15-4). Gadgil and others (2012) found better learning when students were able to compare side-by-side their incorrect explanations to expert solutions in contrast to viewing an expert solution alone.

Figure 15-4. A Comparison of Learner Solution Process With That of an Expert
Used with permission from Raytheon Professional Services.
Media in Scenario-Based Learning Environments

Is it better to present a problem scenario in text or with more visual forms of media such as video or computer animation? There are
insufficient comparison studies to make universal recommendations. And the best choice may depend on the importance of visual and auditory cues for learner decision making, prior knowledge of the learners, and the emotional elements of instructional outcomes. Lane and others (2013) compared learning of negotiation skills from lessons that presented practice with a realistic avatar dialogue using audio (Figure 15-5) with lessons that used a simpler 2-D interface and text statements (Figure 15-6). They did not find significant differences in learning from the two interfaces.

Figure 15-5. A Screenshot From a Communications Training Simulation Using High-Fidelity Interface
From Lane et al. (2013).
Figure 15-6. A Screenshot From a Communications Training Simulation Using Low-Fidelity Interface
From Lane et al. (2013).
Gartmeier and others (2015) compared learning of professional communication skills among students who:
1. studied with an interactive e-learning module that featured analysis of video communications
2. engaged in role play with video feedback
3. received both the e-learning and role play practice
4. received no training (a control group).
All training sessions were of the same length. Not surprisingly, those who received a combination of e-learning and role play learned the most. Those taking the e-learning lesson alone (group 1) scored better than those engaged in role play alone (group 2). The research team concludes that “a primarily inductive, experience-based way of learning (for example, the role-play group) potentially overburdens learners with this complexity.”
Moreno and Ortegano-Layne (2008) compared learner ratings and learning among student teachers who reviewed case examples presented either in text, computer animation, or video. Both ratings and learning were better from the computer-animated and video examples than from the text examples. The computer-animated and video cases resulted in equivalent learning outcomes.
Dankbaar and others (2016) gave fourth-year medical students an e-learning module on emergency room skills followed by either a text-based low-fidelity case or by a high-fidelity simulation game. A performance test taken four weeks after the training showed no significant difference in learning between the two groups and a control group that had no case practice after the e-learning module. Apparently in this experiment, the e-learning module itself was sufficient to build skills.
In summary, among the four studies reviewed, only one (Moreno and Ortegano-Layne) found an advantage to presenting cases with media of higher fidelity. The question of which media will best present your scenarios will be resolved in part by your development resources and delivery technologies. Realistic media is likely to be especially helpful for portraying case elements that involve sights and sounds that cannot be authentically represented in text, as well as for providing safe experiences in situations involving high emotions. We have previously reviewed evidence showing that a lesson on blood flow presented via immersive virtual reality did not result in better learning than the same content presented in a slideshow (Parong and Mayer 2018). However, for situations with greater affective elements such as firing a staff member, the immersive version may prove more effective. We will look to future research to guide decisions regarding best uses for immersive environments.
Challenges With Scenario-Based Lessons

There are a number of potential challenges to consider in scenario-based lessons.
Learner Overload

Perhaps the most common pitfall is mental overload. Asking a learner to solve a problem unfamiliar to them and to learn the knowledge and skills they need to resolve that problem at the same time can be overwhelming. One solution is to design scenario-based lessons for learners with some prior experience. Another solution is to incorporate guidance. Learners new to the domain benefit from higher levels of guidance such as branched scenarios or pre-scenario tutorials.
Balancing the Skill Set

I’ve focused primarily on the elements of a single scenario-based lesson. Imagine, however, in medical education that all your case problems focus on a broken leg. Clearly the range of knowledge and skills acquired would be very limited. To achieve balance, you need to identify problem classes based on the diversity of work role functions you identify during your job analysis. For example, you might have problem cases that focus on cardiac issues, oncology, orthopedic problems, and so forth. Within each problem class, you will need to identify a series of cases that incorporate the required knowledge and skills of that class and that progress from easy to complex. In some online situations, you can reuse the interface to accommodate different scenarios. For example, in the automotive troubleshooting example, new cases required less development work than the first case as the graphic interface and tool-specific programming were recycled.
Inefficiency of Learning

Inefficiency is an offspring of high “flounder factor” lessons. When learners start to take random actions to progress through a problem, the result can be both ineffective and inefficient learning. Consider minimizing learner control over elements of the lesson to make learning more efficient. For example, present more constrained interfaces (branched scenarios, limited active objects) to guide learners during early problem-solving stages. Also be more aggressive with imposing guidance and directions when learners get off track.
Instructor Roles

In the classroom it is up to the instructor to administer and facilitate the case problems. Specifically, the instructor can present the problem,
provide relevant clarifications, facilitate group discussions, help locate relevant resources, and facilitate problem debriefs. Note that this is quite different from a more traditional role. Instructors must be agile and flexible—able to provide just-in-time explanations but also willing to let learners make some errors and learn from them. The degree of ambiguity in the process and in the final results may be difficult for some instructors to implement—especially if they are used to working in highly directive learning environments.
The Bottom Line
Now that you have reviewed the evidence, here are my comments on the questions at the start of this chapter:
A. Multimedia scenario-based lessons require computer simulations. False. Scenario-based lessons do not require computer simulations. Some can be produced with simple branching. More complex forms, such as the automotive troubleshooting example, do involve some level of simulation.
B. Scenario-based lessons are more expensive to develop than explanatory lessons. True. Many factors affect development costs, including the media used, the incorporation of simulations, and the complexity of the case problems. Presentations—even when augmented with examples and questions—are generally much faster and easier to design and develop, hence their popularity. In contrast, scenario-based designs are very interactive and in general will require more development time and higher cost.
C. Scenario-based lessons should include feedback. True. Actually, this is a true statement for any lesson that involves learner practice. However, in scenario-based lessons you can decide whether to provide instructive or intrinsic feedback or both. You can also decide whether to provide immediate or delayed feedback or a combination.
D. Learning in scenario-based environments is better when realistic multimedia such as video is included. Unknown. We don't really have sufficient evidence to make blanket generalizations on this issue. It is likely that more visual media, such as still photos or video, will be more engaging and also may be essential when the sights and sounds of the workplace are critical data to consider. Comparing video to animation, keep in mind that video may be more difficult to update than an animated interface.
Applying Scenario-Based Learning to Your Training
Use the following checklist to guide your design and development of scenario-based learning environments.
□ Consider a scenario-based approach for tasks that involve decision making and critical thinking or for tasks that are challenging to learn in the work environment due to infrequency or safety concerns.
□ Provide a more constrained design, such as a branched scenario, and higher levels of guidance for novice learners.
□ Initiate the lesson with a work-authentic assignment or scenario.
□ Design a clean interface in which learner response options are clear.
□ Incorporate fewer variables and less data in initial scenarios.
□ Offer less learner control in initial scenarios.
□ Provide sufficient guidance to minimize learner frustration and ensure learning.
□ Fade guidance as learners gain more experience.
□ Provide both intrinsic and instructional feedback.
□ Use feedback to illustrate both visible and invisible consequences of actions.
□ Allow learners the opportunity to make mistakes, experience the results, and reflect in order to learn from their mistakes.
□ As an instructor, assume a facilitative role rather than a knowledge source.
□ Ensure the full range of knowledge and skills through a series of scenarios developed to fulfill the goals of the instructional program.
Coming Next
Do you include games in your instructional materials? Have you found that learners like them? Do you think they promote learning? There has been so much interest in the potential of games for learning that a number of recent research papers have evaluated them. Chapter 16 looks at what we know about serious games for learning.
FOR MORE INFORMATION
Abrami, P. C., R. M. Bernard, E. Borokhovski, D. I. Waddington, C. A. Wade, and T. Persson. 2015. "Strategies for Teaching Students to Think Critically: A Meta-Analysis." Review of Educational Research, 85(2), 275–314. A meta-analytic review including 341 effect sizes that evaluates whether critical thinking can be taught and the features of successful critical thinking courses.
Clark, R. C. 2013. Scenario-Based e-Learning. San Francisco: Pfeiffer. If you are interested in more detail and additional examples, take a look at my book that focuses on the use of scenarios for workforce learning.
Lazonder, A. W., and R. Harmsen. 2016. "Meta-Analysis of Inquiry-Based Learning: Effects of Guidance." Review of Educational Research, 86, 681–718. A meta-analysis that focuses specifically on the types of guidance that are most effective for problem-based learning lessons.
Chapter 16
Digital Games for Workforce Learning
What Is a Game?
Types of Games
Game Design and Learning
Do Games Promote Learning?
Are Games More Motivating Than Traditional Instruction?
How to Design More Effective Games
Design of Game-Based Learning for Workforce Learning
Applying Games to Your Training
Are you or your family members video gamers? From Mario Kart to Minecraft to Fortnite, video games have become a cultural obsession—at least among a substantial population of children and young adults. As of early 2019, Fortnite was the most popular video game ever, hosting more than 10 million players at one time and earning the developers $2.4 billion in 2018—the largest total for a game to date. Games like Fortnite have been reported to be so addictive as to be distracting—from schoolwork for younger players and from real work for young adults. Thirty-five percent of student players have admitted to skipping classes to play Fortnite. Trainers for professional sports teams have expressed frustration over addictive late-night Fortnite playing among their new recruits (Fortier 2018).
As of 2018, more than 211 million Americans play video games—probably more by the time you read this chapter. Games are played by young and old, men and women. While many adults believe that men dominate video game play, the split is fairly even: 54 percent of gamers are men and 46 percent are women (ESA 2019). Demographically, entertainment games cross categories of age and sex.
Given massive participation in entertainment online games, their instructional potential has not escaped the attention of learning professionals and researchers. Enough research has accumulated to support three meta-analyses and a 2019 Handbook of Game-Based Learning from MIT Press, which provide us with a number of useful guidelines for educational game design. This chapter reviews evidence on three critical questions: Can games lead to better or equal learning outcomes compared to traditional methods? Are learning games more motivating than traditional lessons? And what design features make games more effective for learning?
What Is a Game?
As we have seen in this book, many instructional methods feature variations that will affect their learning potential. The same is true for games. From casual solo games, such as Solitaire, to highly immersive team action games, such as Fortnite, the diversity of game formats and goals makes any universal generalizations suspect. Learning games are instructional environments that are entertaining enough to motivate play and educational enough to promote learning goals. Games are characterized by high interactivity and responsiveness, specific challenging goals, and rules and constraints aligned to the learning objective. Let's consider each of the core features individually.
Interactive and Responsive
Learners are highly engaged in a learning game environment when they're making physical responses that promote psychological engagement linked to learning objectives. When the learner interacts with the game interface, there is an immediate response (that is, feedback) in the form of points, scores, or a change in the game environment itself. For example, in arcade games, rewards, such as points, tokens, or prizes, are given for player actions that advance toward game goals. In strategy games, such as the zombie game shown in Figure 16-1, designed for retail sales associates, the goal is to minimize the number of customers turning into zombies due to inadequate sales staff response. More zombies in the store indicate poor or slow decision making. The physical and mental exchange between player and game is one of the core features that initiates and sustains continued play and supports learning.
Figure 16-1. The Zombie Game
From Clark and Nguyen (2019).
Specific Challenging Goals
Games are goal driven. The goal may be to win points, to beat competitors, or to create an environment with specific properties. Challenge requires a careful balance. Games that are too easy fail to hold interest. Games that are too challenging lead to discouragement and dropout. Many games use levels as a mechanism to gradually increase the challenge. As players move through levels, more difficult goals require improvements in speed, accuracy, agility of response, or strategic decisions. Games maintain challenge by adapting to the player; that is, they adjust difficulty based on the player's success at lower levels. For example, in the zombie game, the escalator located at the back of the shopping area leads to higher levels with more challenging situations.
A study by Sampayo-Vargas and others (2013) evaluated an adaptive "bubble" game in the domain of foreign language vocabulary, which adjusted the difficulty level based on player performance. They compared motivation and learning from an adaptive version, a non-adaptive version that increased difficulty automatically in one-minute play-time increments, and a paper worksheet (not a game) that asked students to perform vocabulary matches similar to those in the game. They found no differences in motivation among the three versions. However, the pre-test to post-test gains shown in Figure 16-2 reflect significant learning benefits of the adaptive game version.
Figure 16-2. Pre-Test to Post-Test Gains in Two Game Versions and a Paper-Based Worksheet
Based on data from Sampayo-Vargas et al. (2013).
Rules and Constraints
Game progress cannot be achieved arbitrarily. Game moves and responses are guided by rules and constraints. Sometimes these must be discovered inductively by game play; other times the rules are made explicit from the start. Game goals, rules, and constraints should align with the learning objective. When misaligned, learning will be depressed. For example, games that reward fast responses, such as shooter or other "twitch" games, may not be the best for learning cognitive skills that benefit from reflection.
Types of Games
From Scrabble to Fortnite, games that meet these three broad criteria (interactive, specific challenging goals, and rules and constraints) are highly diverse. There is no single agreed-upon categorization system. Casual games (71 percent), action games (53 percent), and social games (47 percent) are reported as the three most popular mega-categories of entertainment games (ESA 2019). The main features and examples of these mega-categories are summarized in Table 16-1. Many games incorporate multiple features that qualify them for more than one category, including action, shooting, and team-competitive or collaborative efforts.
Table 16-1. Three Mega-Categories of Entertainment Games
• Casual: Simple rules, short games such as puzzle, card, and quiz show games. Examples: Jeopardy, Angry Birds.
• Action: Rapid, accurate responses required to achieve game goals. Examples: shooter and racing games.
• Social: Players collaborate and/or compete to achieve game goals; engagement with others is a major game feature. Examples: MMORPGs (massively multiplayer online role-play games), Fortnite.
From Entertainment Software Association (2019).
Game Design and Learning
According to Mayer (2019b), educational games are especially effective in the domains of science and second language learning. These domains benefit from automatization of skills and understanding of cause-and-effect models (Young et al. 2012; Mayer 2014b, 2019b). Language learning relies on automaticity to formulate sentences and therefore benefits from drill and practice made more palatable by a gaming veneer.
But not all game designs lead to optimal learning. Imagine a computerized concentration-type game to learn second language vocabulary. The player faces a matrix of face-down cards, each with either a picture or a written vocabulary word on the back. The player can turn over two cards at a time with the goal of matching picture and word. If there is no match, the cards revert to face down. The player wins by matching all the cards in the least amount of time. Do you think this is a well-designed game for the goal of learning vocabulary? My answer is no. Success in the concentration game relies on recalling the spatial position of each card as well as the translation of the word in order to make successful matches. The need for spatial recall imposes extraneous cognitive load that detracts from learning vocabulary.
How could this game be designed more effectively? One approach would be to convert it to a drag-and-drop matching game in which all cards are face up and the player drags matching cards together. The score could be based on the speed at which cards are correctly paired. The game could adapt to learning progress by replacing pairs once they have been correctly identified three or more times. An alternative would be to present a vocabulary word (or sentence) in audio and ask the player to click on the correct picture. As learning progresses, sentence structures would become more complex and eventually include multiple sentences.
For instructional success, the game design must prompt activity that promotes germane mental load and minimizes extraneous load. The learning objectives must be drivers for the engagement and response, the challenge of the goals, and the rules and constraints.
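To make the adaptive redesign above more concrete, here is a minimal, text-only sketch of the pair-retirement and speed-scoring logic in Python. The rule of retiring a pair after three correct matches and the speed-based score follow the redesign suggested above; the vocabulary pairs and point values are illustrative assumptions only.

```python
import random
import time

# A minimal, text-only sketch of the adaptive matching drill described above.
# Vocabulary pairs and point values are illustrative assumptions.

VOCABULARY = {"la casa": "house", "el perro": "dog", "el libro": "book"}
MASTERY_THRESHOLD = 3  # retire a pair once it has been matched correctly three times

def run_drill(vocabulary):
    correct_counts = {word: 0 for word in vocabulary}
    score = 0
    # Keep drilling until every pair has reached the mastery threshold
    while any(count < MASTERY_THRESHOLD for count in correct_counts.values()):
        active = [w for w, c in correct_counts.items() if c < MASTERY_THRESHOLD]
        word = random.choice(active)
        print("Choices:", ", ".join(sorted(vocabulary.values())))
        start = time.time()
        answer = input(f"Match '{word}': ").strip().lower()
        elapsed = time.time() - start
        if answer == vocabulary[word]:
            correct_counts[word] += 1
            score += max(10 - int(elapsed), 1)  # faster correct matches earn more points
            print("Correct!")
        else:
            # Explanatory feedback rather than a bare right/wrong signal
            print(f"Not quite. '{word}' means '{vocabulary[word]}'.")
    return score

if __name__ == "__main__":
    print("Final score:", run_drill(VOCABULARY))
```

The same retirement-and-scoring rule could, of course, drive a graphical drag-and-drop interface; the sketch is only meant to show the adaptive logic, not the presentation.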
What Do You Think?
Based on your own experience with games, which of the following statements do you think are true?
□ A. Overall, there is substantial evidence for the effectiveness of learning games.
□ B. Games based on narratives are more effective for learning than nonnarrative games.
□ C. Social play involving competition and collaboration is better for learning than solo play.
□ D. Game design can benefit from many of the basic multimedia principles, such as modality and personalization, reviewed in previous chapters.
Do Games Promote Learning?
In spite of considerable enthusiasm at conferences, in books, and in social media, in reality very little is known about the effectiveness or design of games that support adult learning and job performance outcomes. The vast majority of research reported in three meta-analyses focused on game effects among school-aged learners. The Sitzmann (2011) meta-analysis analyzed 65 studies, of which only seven involved workforce learners. Meta-analyses by Wouters and others (2013; Wouters and van Oostendorp 2013) included 39 studies, of which only two involved adult learners. The analysis by Clark and others (2016) focused exclusively on games for K-16 students. In conclusion, there is little published academic data on the effects of games on learning in workforce domains such as management, compliance, or sales training. Keep in mind these limitations of audience and learning domains as you review conclusions from these meta-analyses.
The good news: all three meta-analyses found positive game learning effect sizes of around 0.3. In other words, compared to a traditional training lesson, a game resulted in three-tenths of a standard deviation better score on a post-test. An effect size of 0.3 is small. Nevertheless, a 0.3 effect size reported from three different analyses that together incorporated around 170 studies suggests that games can teach as well as, if not better than, traditional lessons, including PowerPoint presentations, e-learning tutorials, or readings.
Are all games equally effective? Consider an experiment that compared learning from a game called Cache 17, shown in Figure 16-3 and intended to teach electromagnetic principles, to a slide presentation of the same principles (Adams et al. 2012).
Figure 16-3. A Screen Shot From Cache 17 Game
In Cache 17, players are challenged to find lost WWII art in bunkers. To locate the art, players must move through bunkers performing sub-tasks, such as opening doors by constructing a wet-cell battery. The instructional explanations shown in the game were duplicated in a slide presentation. Learners were randomly assigned to either play the game or view the slide presentation. The research team reported better learning in less time from the slide presentation than from the game. What might be some explanations for the lack of results from this game? Let's look at some moderating factors reported by the meta-analyses that help define features or conditions that promote game effectiveness. The key features of effective games reported by two or more of the three meta-analyses are summarized in Table 16-2.
Table 16-2. Moderators of Game Effectiveness (Effect Sizes) Reported in Meta-Analyses
Multiple game plays
• Sitzmann (2011): Yes 0.68 / No 0.31
• Wouters et al. (2013): Yes 0.54 / No 0.10
• Clark et al. (2016): Yes 0.44 / No 0.08
Game as supplement to other instruction
• Sitzmann (2011): Yes 0.51 / No -0.12
• Wouters et al. (2013): Yes 0.42 / No 0.20
• Clark et al. (2016): Yes 0.36 / No 0.32
Visual realism (schematic / cartoon / realistic)
• Sitzmann (2011): Not analyzed
• Wouters et al. (2013): 0.46 / 0.20 / 0.14
• Clark et al. (2016): 0.48 / 0.32 / -0.01
Narrative
• Sitzmann (2011): Not analyzed
• Wouters et al. (2013): Yes 0.25 / No 0.45
• Clark et al. (2016): Thick 0.36 / None 0.44
Multiple Playing Sessions
All three of the reports found that multiple game-playing sessions are essential for success. When a game was played only one time, it was of minimal learning value compared to traditional instruction. "When only one training session is involved, serious games are not more effective than conventional instructional methods" (Wouters et al. 2013). For game success, make games sufficiently engaging and relevant to workforce learners to stimulate multiple plays. For example, the zombie game was highly engaging for sales staff, as shown by a high number of repeat game plays.
Simpler Interfaces
Two of the meta-analyses evaluated the effects of different levels of graphic fidelity in the game interface. The Wouters (2013) and Clark (2016) reports suggested that simpler visuals, such as schematic or cartoon representations, were more effective for learning than games with highly realistic visuals, such as photographs or high-fidelity computer-generated graphics. Mayer (2019a) concurred: "A straightforward conclusion is that adding realism for its own sake is not a promising game feature, when the goal is to improve learning outcomes." These results may reflect the coherence effect we discussed in Chapter 9, in which simpler visuals have generally been found more effective because they impose less irrelevant cognitive load on learners (Mayer 2017).
Narratives in Games
Two of the meta-analyses found that having little or no narrative was more effective than having a complex evolving story over the course of the game. "Results showed that games with no story or thin story depth both had significantly larger effects relative to those with medium story depth" (Clark and others 2016). However, for adult learners, games based on job-relevant scenarios, such as the zombie game, may be more effective than games lacking work context. The problem, of course, is that the research lacked studies involving adult learners playing job-relevant games. Mayer (2019b) classified narrative theme as a game feature that requires more research to determine its effectiveness.
Games As One Instructional Event
All three of the meta-analyses considered the effects of games as standalone instruction compared to games that were supplemented by other instruction. Two of the three (Sitzmann 2011; Wouters et al. 2013) report an advantage for games added to other instructional events. For example, after a tutorial, a game is used as a practice opportunity. Plass and others (2015) suggest that games may provide a pre-training function, a knowledge and skill generation function, or an after-training drill and practice function. For each of these functions, games serve as one part of a larger learning environment.
Why Cache 17 Failed
Here are some possible reasons why the slide show was more effective than the game in the Cache 17 experiment. First, learners played the game only one time. Second, the game visuals were of relatively high fidelity. Third, the game was based on a narrative involving a lost art discovery mission—a narrative unrelated to the instructional goal. Finally, the game was standalone with no additional instruction. All of these factors may have decreased the potential effectiveness of the game relative to a slide show. In fact, Pilegard and Mayer (2016) achieved better results from this game by adding supplemental instruction in the form of worksheets.
In summary, evidence supports the positive potential of well-designed games for learning. Even if a game leads to the same learning outcomes as a traditional instructional approach, motivational advantages may prompt learners to spend more time in a game environment than in a traditional instructional format. This leads us to our next topic—the motivational potential of games.
Are Games More Motivating Than Traditional Instruction?
If multiple game plays are essential to learning effectiveness, it is important that workforce learners find games sufficiently motivating to repeat. Unfortunately, we lack sufficient data about the motivational effects of games. Sitzmann (2013) did not find sufficient data to conclude anything about the effects of games on motivation. Wouters and others (2013) reported that, overall, games were not more motivating than the instructional methods used in the comparison groups. There were some exceptions. For example, they found that games for problem solving were more motivating than traditional instruction (with an average effect size of 0.88). Most research studies use a survey to measure learner motivation.
Have you used the Duolingo game to study a second language? James and Mayer (in press) found that while learning Italian from Duolingo was not much better than from slide show presentations, ratings of enjoyment were much higher, with an effect size of 0.77.
Motivation may depend on audience-specific factors. For example, Landers and Armstrong (2017) asked undergraduate learners to choose between an instructional scenario that involved a serious game and a scenario using traditional PowerPoint instruction. On average, participants anticipated greater value from the game. However, the selection of a game was largest among participants with video game experience and positive attitudes toward game-based learning. The researchers concluded that "individuals with less game experience and poorer attitudes towards games in general may benefit less from gamified instruction than others."
Among the different games produced by a large retail organization, the zombie game shown in Figure 16-1 was the most popular. Motivation to play was measured by the overall number of game plays plus the number of repeat plays by the same associate. Perhaps the fantasy element of the zombie interface, the scenarios based on actual customer data, as well as elements of competition served as motivating factors.
Social engagement has become a common feature of entertainment games. The meta-analyses summarized in this chapter reported mixed effectiveness of group versus solo play as well as competition. Wouters and others (2013) found that while both solo and group play resulted in better learning compared to traditional instruction, group play was more effective than solo play with an effect size of 0.66. In contrast, Clark and others (2016) reported best outcomes among individuals playing on their own in a noncompetitive manner. When comparing conditions involving competition, best results were realized among team competitions compared to individual competitive players. Plass and others (2013) reported greater interest and enjoyment from competition and collaboration among middle school students playing a math game. Collaboration in games was associated with stronger intentions to play the game again. Both collaboration and competition may increase play motivation for at least some players. For example, certain personnel, such as sales associates, may be naturally more competitive. Mayer (2019b) classified competition among instructional methods that need further research before making recommendations. Future research should help us define the types of learning goals and learners that benefit from collaborative and competitive game environments.
How to Design More Effective Games
We have sufficient data to conclude that games can result in learning equal to or slightly better than traditional instruction. An important follow-up question is: How can we design games to maximize learning and motivation? For example, consider the basic Circuit Game shown in Figure 16-4, where players learn about electrical flow in circuits with various resistances and power sources. In the base version of the game, players select which circuit has the greatest electrical flow. What could be added to this basic game version to enhance learning? Consider what methods have been found to improve learning in traditional e-learning tutorials. Feedback is one such method. Explanatory feedback not only informs the learner of the correctness of their response but also provides a short explanation. To determine the effectiveness of explanatory feedback in a game, Johnson and Mayer (2010) compared learning from the basic game with learning from the same game with explanatory feedback added (Figure 16-5).
Figure 16-4. The Circuit Game
From Johnson and Mayer (2010).
Figure 16-5. The Circuit Game With Feedback Added
Adapted from Johnson and Mayer (2010).
Several reviews have summarized instructional methods found to improve learning from games (Wouters and van Oostendorp 2013; Mayer 2014; Mayer 2019a, 2019b). The methods summarized in Table 16-3 have an effect size of 0.5 or more.
Table 16-3. Methods to Improve Learning From Computer Games
• Modality: Present words in brief audio rather than text. Positive outcomes: 9 of 9. Effect size: 1.4.
• Personalization: Use conversational language, including first and second person. Positive outcomes: 8 of 8. Effect size: 1.5.
• Pretraining: Provide pre-game information regarding content and/or mechanics of the game. Positive outcomes: 7 of 7. Effect size: 0.8.
• Coaching/Feedback: Provide in-game advice and feedback. Positive outcomes: 12 of 15. Effect size: 0.7.
• Self-explanation: Ask players to select reasons for their responses. Positive outcomes: 13 of 16. Effect size: 0.5.
From Mayer (2019a).
Provide Explanatory Feedback and Advice
Chapter 12 looked at lessons learned from research on feedback. Providing feedback that not only tells the learner the correctness of their response but also includes an explanation results in better learning (Moreno 2004). Similar effects have been found in 12 of 15 games that offered either advice or explanatory feedback, such as the elaborated feedback in the Circuit Game. In the basic game, feedback consisted only of audio signals and points. In the enhanced version, feedback included a verbal explanation.
Present Words in Audio Rather Than Text
As you saw in Chapter 7, evidence has shown that in e-learning tutorials with a graphic component, learning is best when words are presented in audio rather than text (Clark and Mayer 2016; Mayer 2009, 2014). In two versions of a botany game called Design-a-Plant, Herman, an on-screen agent, gave explanations in text in one version or presented the same words in audio in a second version. In nine of nine comparisons, the audio version resulted in better learning with a large effect size of 1.4. Using audio to describe a complex visual leverages the dual channels of our limited working memory. While the eyes view a visual, the words that enter the ears access the auditory centers of working memory and thus maximize its limited capacity.
Use Conversational Language
Chapter 8 showed that a personalized approach improves learning. In a series of experiments, games using conversational language were compared to games with a more formal tone. Conversational language used first- and second-person constructions such as I, you, and we and generally incorporated an informal vernacular. The advantage of conversational language was seen in eight of eight comparisons with a large effect size of 1.5 (Mayer 2019b). Conversational language may promote a social connection between the player and the game, encouraging the player to engage with the game as a social partner.
Provide Game Orientation
Pre-class or pre-lesson orientations are common in conventional training programs. These orientations may include pre-work to introduce basic lesson concepts or to initiate an on-the-job project related to lesson skills. A similar principle applies to games. Prior to starting a game, students may be introduced to learning content, such as names and descriptions of concepts, or a description or demonstration of game mechanics and rules. Mayer (2019b) reports that in seven out of seven experiments, providing pregame information resulted in better learning.
Ask Players to Select Explanations for Their Responses
Chapter 10 showed that adding a question improves learning from examples (Renkl 2017). Having to provide or select an explanation forces the learner to carefully review an example that otherwise might be ignored or perused in a cursory fashion. In 13 of 16 experiments, asking learners to select explanations for game plays resulted in better learning with an average effect size of 0.5. You can see an example of self-explanation selection options added to the Circuit Game in Figure 16-6. Mayer and Johnson (2010) found better learning from the game version with the self-explanation selections than from the version without them. Asking players to select from a menu rather than type in an explanation was more effective. Having to construct and type in an explanation may be too disruptive to the game flow compared to selecting an on-screen option.
Figure 16-6. The Circuit Game With Self-Explanation Questions Added
Adapted from Mayer and Johnson (2010).
Design of Game-Based Learning for Workforce Learning
Consider the following recommendations for game design and implementation based on the research reviewed in this chapter:
1. Plan a learning context that promotes repeated game play. Games played only once are often not more effective than a standard tutorial. Will multiple game plays be a cost-effective use of staff time? If a goal is to reach automaticity—that is, fast and accurate inputs or responses—staff might enjoy multiple practice sessions in a game setting more than traditional drill and practice. Alternatively, if the work environment has periods of down time, short games that are engaging and promote learning might offer a cost-effective instructional option.
2. Keep the graphic interface simple. It should be less expensive to construct game interfaces that are simple rather than use high-end photo-realistic or computer-generated graphics. Until we have additional evidence, stick with schematic or cartoon-type interfaces.
3. Avoid narratives other than those based on work-relevant scenarios. If designing a role-play game, create a narrative based on work scenarios. Ideally, you can collect real-world job scenarios that involve decisions shown to promote organizational goals.
4. Integrate games as one element of a larger learning or performance support initiative. Most game reviews reported better learning when the game was a supplement to other training events. For example, a game could offer a practice opportunity immediately after a tutorial. Alternatively, a game could serve as pre-work to introduce concepts that will be included in a follow-up tutorial. Bottom line: Avoid relying on a game as a sole learning resource.
5. Build proven techniques into your games. Games are more effective when words are presented with audio in a conversational tone using first and second person, learners are oriented to game content or mechanics before starting play, learners receive explanatory feedback or advice after responses, and learners select explanations that justify their responses. Additional research is needed to determine how both learning and motivation may be affected by solo versus group play and by solo or team competition. In organizational settings where competition is already instituted among staff, a competitive element in games may serve as a motivator.
6. Minimize features that impose extraneous mental load. Remember the concentration vocabulary game? Having to recall the spatial location of the cards imposes irrelevant mental load that impedes learning. Likewise, unrelated narratives, such as the WWII art discovery mission featured in Cache 17, may interfere with learning from games. Evidence also recommends using a simpler rather than more complex graphic interface. Many of these guidelines reinforce the need to consider and manage mental load in game design.
7. Align engagement to the learning objective. By definition, games involve active engagement to pursue a goal. But the goal and associated engagement must align with the learning objective. In the zombie game, the player has only a few minutes to respond to customer questions or requests or to correct problems, such as misplaced products. If the question or problem is not resolved effectively, the customer turns into a zombie. All the customer requests or problems were derived from actual customer feedback. The zombie game was the most popular among several games produced in this retail organization. Scores are based not only on accuracy of response but also on the priority given to various challenges offered. For example, resolving a customer issue will generate more points than replacing a fallen article of clothing. A leaderboard promoted a competitive spirit in playing this game.
The Bottom Line
Now that we have reviewed the evidence accumulated to date on the effects of games, let's revisit your original ideas about games. Based on my interpretation of the evidence in this chapter, here are my answers:
A. Overall, there is substantial evidence for the effectiveness of learning games. True. But first, it is really difficult to make generalizations about any method as broad as "games." As you know, there are many types of games, and their benefits will depend on game design, features of the players, and the instructional goals. Further, the meta-analyses of games to date focus primarily on younger learners and traditional educational topics that may have limited application to workforce learning. In the end, no doubt some games are useful for some individuals to achieve some learning goals. As you consider games in your learning suite, test some prototypes to gather data most relevant to your setting.
B. Narratives in games make them more effective for learning. Not Yet Known. We do not yet have sufficient evidence to claim narratives as either helpful or harmful to game-based learning. However, if the narratives are job relevant, such as the mini scenarios embedded in the zombie game, they may be both motivational and effective.
C. Social play involving competition or collaboration is better than solo play. Not Yet Known. There is some evidence for learning and motivational benefits of social play. The evolution of real-time online competition and collaboration in popular entertainment games suggests the potential of social play in learning games. Future research needs to define the types of learning goals and learners that benefit from elements of either collaboration or competition in games.
D. Game design can benefit from many of the basic multimedia principles, such as modality and personalization, that we have reviewed in previous chapters. True. We have considerable evidence that many of the principles we have reviewed throughout this book apply to game design. This is not surprising, as most of these methods reflect basic human cognitive processes and would apply to varied instructional contexts, including tutorials and games.
Applying Games to Your Training
While there appears to be solid opportunity in harnessing the motivational potential of games in the service of learning, we do not yet have sufficient guidelines for the types of games most useful for workforce learning. For now, we must extrapolate lessons learned in game research involving educational topics and younger learners. The most useful stream of research delineates the features we can embed in games that boost learning. Evidence to date recommends the following:
□ Design game engagement, rules, and progress to align with the learning objectives.
□ Provide explanatory feedback.
□ Ask learners to select reasons for their game moves (self-explanations).
□ Avoid extraneous narrative treatments and overly complex interfaces.
□ Create opportunities for multiple game plays.
□ Provide instructional support, such as explanations of game interface elements or principles.
□ Personalize games with first- and second-person informal language.
□ Consider games for skills that benefit from drill and practice, such as second language learning.
□ Integrate games as one element of a larger instructional solution.
Coming Next
The next and final chapter will classify many of the instructional strategies we have reviewed in the context of instructional design and development processes, emphasizing their reported effect sizes—often from meta-analyses. As you consider which guidelines to implement, evaluate both their impact in research studies as well as the cost-benefit of their implementation. In some cases, a method that has a smaller overall effect but is easy and inexpensive to implement will be preferable.
FOR MORE INFORMATION
Clark, R. C., and F. Nguyen. 2019. "Chapter 19: Digital Games for Workforce Learning and Performance." In J. L. Plass, R. E. Mayer, and B. D. Homer, Eds. Handbook of Game-Based Learning. Cambridge, MA: MIT Press. A chapter I wrote with my colleague Frank Nguyen that focuses on the benefits of games for workforce learning.
Mayer, R. E. 2014. Computer Games for Learning: An Evidence-Based Approach. Cambridge, MA: MIT Press. A short but comprehensive book that reviews evidence regarding the effectiveness of games compared to standard tutorials, whether games can improve aptitudes, and instructional methods shown to improve learning from games.
Mayer, R. E. 2019a. "Computer Games in Education." Annual Review of Psychology, 70, 531-549. An article that includes much of the same information as the Computer Games for Learning book but in a more concise format.
Mayer, R. E. 2019b. "Cognitive Foundations of Game-Based Learning." In J. L. Plass, R. E. Mayer, and B. D. Homer, Eds. Handbook of Game-Based Learning. Cambridge, MA: MIT Press. A chapter in a handbook that surveys evidence on game-based learning; it includes much of the same information as the two previous references.
Part 5
Evidence-Based Principles of Instruction
Chapter 17
Evidence-Based Methods Aligned to Training Development
Direct Instruction vs. Problem-Based Learning
Methods to Use in Direct Instructional Designs
Methods to Use in Problem-Based Learning Designs
Should You Incorporate Games?
Common Threads of Evidence-Based Instruction
Applying Evidence-Based Practice to Your Training
Most of the previous chapters describe evidence and guidelines related to a single instructional method or strategy, such as graphics, feedback, or problem-based learning. To put these methods into the context of your training decisions, this chapter revisits them as they apply to instructional design and development processes. I also want to highlight the relative effectiveness of the various methods. To do so, I have included effect sizes based primarily on meta-analytic reports. Recall that an effect size above 0.50 is worth implementing and any effect size around or above 1.0 is very impressive. The majority of these effect sizes are drawn from Mayer (2017, 2019a, 2019b) or from Hattie's 2009 synthesis of 800 meta-analyses. Keep in mind that this is a summary chapter; to get more detail and examples regarding any of the methods mentioned here, refer to the related chapter.
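As a reminder of what these numbers represent, an effect size expresses the difference between two group means in standard deviation units. The formula below is the standard definition of Cohen's d, included here only as a reference point; it is not tied to any particular study cited in this chapter.

\[
d = \frac{\bar{X}_{\text{treatment}} - \bar{X}_{\text{comparison}}}{SD_{\text{pooled}}}
\]

In other words, an effect size of 0.5 means the average learner in the treatment condition scored half a standard deviation higher on the outcome measure than the average learner in the comparison condition.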
Direct Instruction Versus Problem-Based Learning
After completing a job analysis, most training development processes focus on creating outlines and writing learning objectives. As you tackle these tasks, you will need to decide whether to use a direct instructional approach, a more inductive problem-based architecture, or some combination of the two. One of the main differences in these designs is the sequencing of content and methods—so you will need to make this decision early in the process. Both direct instruction and problem-based designs have reported effect sizes of around 0.59—either approach has an evidence base to support it. A class that focuses on Excel could be organized either using a traditional direct design or in a problem-based format, as illustrated in Figure 17-1.
Figure 17-1. A Directive and Problem-Based Outline for an Excel Class
Note that both Excel training outlines include basically the same concepts and procedures. The directive version teaches prerequisite knowledge first with explanations, examples, and practice exercises. The problem-based version starts each segment with a challenge—a scenario that will involve manipulating a preconstructed spreadsheet. The initial challenge will be highly guided and incorporate the concepts of cells and cell references. As challenges progress, the level of guidance will decrease.
Consider a problem-based design if your primary goal is to build critical thinking skills and your learners have some related job experience. Because knowledge and skills are gained in the context of solving a real-world problem, this approach can be motivational because the relevance of the training is apparent from the start. For example, if you are teaching business analysis to experienced staff, a problem-based approach might be more motivational. Alternatively, consider a traditional direct instructional approach if your target learners are new to the tasks and if many of the tasks are procedural in nature. If you are teaching an Excel class to staff new to spreadsheets, the more directive approach will be more efficient. In a hybrid approach, you may start with a problem—either as a pretraining assignment or as an introductory exercise—and then shift to a traditional instructional design, returning to apply new knowledge and skills to the problem solution as instruction progresses.
Methods to Use in Direct Instructional Designs
A typical direct instructional lesson will include five main components:
• pretraining assignments
• explanations
• worked examples (demonstrations)
• structured engagement opportunities
• feedback.
Mayer reports a high effect size of 0.8 for pretraining events. Some options for pretraining assignments include:
• problems related to the training goal
• work projects in which learners make observations or collect data on the job
• self-study tutorials that present prerequisite or introductory concepts
• games.
Pretraining can be assigned prior to a formal instructional event or as an introductory activity to an in-person class or lesson. If you want to assign pretraining prior to a formal event, determine whether your learners are likely to complete prework. If not, integrate pretraining activities as an introduction to a formal event. The main goals of pretraining assignments are to activate prior knowledge or build a knowledge base of foundational concepts, help learners determine their own knowledge gaps and the value of the knowledge and skills they will learn, and lay groundwork for transfer by initiating a work-related project that can be continued during and after formal training.
Explanations
Explanations are a major component of just about all training programs, whether presented by an instructor, included in online self-study tutorials or videos, incorporated into reading resources, or integrated as problem guidance. Explanations will include visuals and words presented in audio, text, or both. Relevant visuals have one of the highest effect sizes of 1.4 for novice learners. In addition, learners like them! Don't shortchange or abuse this important instructional method. Remember to keep visuals simple based on the goal of your explanation. Excluding extraneous or distracting visuals also has a very high effect size of 1.66, so maximize the value of your visuals by keeping them relevant to your learning objective. For more complex visuals, such as animations, direct learner attention to the most relevant aspects of the visual using cues such as color, highlighting, or arrows.
If your instruction includes complex visuals, explain them with brief audio narration. For e-learning, write scripts for the narrator; otherwise write notes for the instructor. Using audio rather than text explanations results in a high effect size of 1.4. If your delivery medium does not support audio, such as a book, present words in text closely aligned to relevant visuals. Integrating text with visuals leads to high effect sizes of 1.0. Write text on a screen or slide close to the relevant portion of the visual. In addition, ensure that visuals and text describing a visual can be viewed on the same page or page spread.
Personalize your explanations by using I, you, and we constructions and relatively informal language. Using personalized language has a high effect size of 0.89. In e-learning, consider adding an on-screen character or agent who will serve some instructional purpose, such as pointing to relevant portions of a graphic or providing hints or feedback. Note, however, the use of on-screen agents has a lower reported effect size of 0.36, so evidence does not indicate as high a priority for agents compared to the other methods just described.
As you develop your explanations, keep them short and to the point. You can implement the segmentation principle (high effect size of 1.0) by using fewer words per section of instruction, breaking content into small chunks, and inserting frequent examples and engagement opportunities within your explanations.
Worked Examples—Demonstrations
Worked examples are illustrations of how to complete a task—either a step-by-step procedural task or a more strategic task that involves critical thinking or problem solving. Worked examples allow your learners to borrow knowledge. By studying worked examples, learners can emulate how others perform a task. When constructing worked examples, apply many of the guidelines listed under explanations. Specifically, use relevant visuals and explain them with brief audio narration or with text integrated close to the visual. Use conversational language and manage the length of the example by breaking it into two or more shorter examples or by separating the steps with pauses or line spaces.
To ensure that your learners review the worked examples, include questions with them. The inclusion of self-explanation questions has a reported effect size of 0.55. Questions may ask learners about underlying principles in an example or may require learners to compare two or more examples for similarities and differences. For example, the e-learning Excel lesson shows a worked example with a self-explanation question added to the example (Figure 17-2).
Figure 17-2. A Worked Example in Excel Includes a Self-Explanation Question
Structured Engagement Methods
Learning relies on engagement with instructional content. Some learners are skilled at self-engagement. Most learners, however, benefit from structured engagement opportunities. The outcome of the activity can then be assessed and improved with feedback. Feedback not only guides learners but also helps instructors identify where they need to adapt their training. Some structured engagement options that we have reviewed in previous chapters include:
• questions—clicker questions or self-explanation questions
• drill and practice to build automaticity
• application practice exercises aligned to learning objectives
• collaborative assignments
• teach-backs
• games.
An application practice exercise from the Excel class is shown in Figure 17-3.
Figure 17-3. An Application Practice From an Excel Class
Practice assignments will take time to create and time to execute in the instructional environment. We know that the majority of learning benefits accrue from the initial practice sessions with diminishing returns over time. The amount of practice you include will depend primarily on the criticality of the task. For tasks involving high risk as a consequence of failure, such as piloting an airplane, high amounts of practice are needed. In other situations, where additional practice can occur on the job without undue risk, lower amounts of practice will suffice. Pilot test your training to determine an optimal amount of practice to reach acceptable job competence.
As you plan your practice activities, find ways to spread them over a lesson or course. Spaced practice results in better long-term learning with a high effect size of 0.71. In addition, if learners need to adapt steps or guidelines to different categories of problems or situations, mixing problem categories (interleaving) will yield better long-term performance. For example, if you are training technicians to detect product quality flaws, after initially working on different categories of flaws individually, a practice that combines the categories will be most beneficial.
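As an illustration only, here is a minimal sketch of how a blocked-then-interleaved practice schedule could be assembled for the flaw-detection example, written in Python. The category names and case counts are hypothetical placeholders; only the ordering idea (work each category separately first, then mix the categories) comes from the guideline above.

```python
import random

# Sketch of a practice schedule that starts with blocked practice (one flaw
# category at a time) and ends with interleaved practice (categories mixed).
# The categories and cases below are illustrative placeholders.

FLAW_CATEGORIES = {
    "scratches": ["scratch case 1", "scratch case 2", "scratch case 3"],
    "dents": ["dent case 1", "dent case 2", "dent case 3"],
    "discoloration": ["color case 1", "color case 2", "color case 3"],
}

def build_schedule(categories):
    schedule = []
    # Phase 1: blocked practice, working through each category on its own
    for name, cases in categories.items():
        schedule.extend((name, case) for case in cases)
    # Phase 2: interleaved practice, a shuffled mix of all categories
    mixed = [(name, case) for name, cases in categories.items() for case in cases]
    random.shuffle(mixed)
    schedule.extend(mixed)
    return schedule

for category, case in build_schedule(FLAW_CATEGORIES):
    print(f"{category}: {case}")
```

In a classroom or e-learning course, the same idea translates directly to sequencing exercises: blocked sets early in the lesson, mixed sets toward the end and in later spaced reviews.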
Feedback
Feedback on structured engagement opportunities is essential to maximize value from the activity, with a reported effect size of 0.73. However, in many instances, feedback can actually depress learning! Feedback comes from many sources, including instructors, computer-generated responses, and peers. To maximize the benefits of feedback:
• Direct feedback to the task criterion or task performance process rather than to the learner's ego.
• Go beyond correct or incorrect feedback responses by including explanations for a correct or incorrect answer.
• Comment on effective features of an assignment product as well as offering suggestions for improvement.
• Avoid feedback that draws attention to the learner's performance relative to others. This type of normative feedback, especially when negative, has been shown to reduce motivation for learning.
Some feedback options for the Excel practice in Figure 17-3 are:
"Yes, Angela earned $140 in the last pay period."
"Something went wrong. Check that you started the formula with an equal sign, used the operator *, and that your cell references are B4 and B5. Revise your formula and click enter."
Methods to Use in Problem-Based Learning Designs
When your goal is to build critical thinking skills and your learners have some basic work experience, you may opt for a problem-based design. Your instruction will begin with a work-related problem scenario and incorporate sufficient guidance to enable learners working solo or in collaborative groups to make progress toward scenario solutions. Training that focuses on job-specific critical thinking skills has been shown to promote critical thinking with an effect size of 0.57. First, identify critical thinking skills linked to the job roles of the target audience. Then you can embed these skills into a series of job-authentic scenarios. In a problem-based design, you will incorporate the following key elements.
A Series of Scenarios
Design a series of authentic job situations or problems to be resolved. Start with simpler scenarios that include fewer variables and straightforward solutions. Gradually increase scenario scope or difficulty.
Guidance
A risk of problem-based learning is confusion and demotivation of learners. Minimize the flounder factor by providing guidance—what instructional psychologists call scaffolding. Meta-analyses show that the inclusion of guidance improves both the problem-solving effort during training as well as learning outcomes. Some useful forms of guidance include:
• explanations—to develop effective explanations, follow the guidelines in this chapter regarding explanations for direct instruction
• process constraints—find ways to limit and guide learners' actions and decisions; for example, use a more constrained e-learning design, such as a branched scenario
• expert models that provide worked examples of task performance
• feedback, which can be instructional as described above and can also be intrinsic, in which the system responds to actions that are or are not effective.
Should You Incorporate Games?
When compared to direct instruction, the learning effect sizes of games are positive but small at about 0.30. The learning benefits of games are not as impressive as other methods we have reviewed. However, if games are sufficiently motivational that your learners will engage with them repeatedly, consider including games as one element of your instructional suite. Motivational effects of games compared to traditional tutorials are high, with reported effect sizes around 0.80. When planning games, evidence shows that their learning value is maximized when:
• Game moves, responses, and feedback are aligned to job tasks.
• Extraneous load is minimized by simpler interfaces, avoidance of irrelevant narratives, and use of audio rather than text.
• Engagement options incorporate explanatory feedback and self-explanation selection options.
• Pregame orientations are included that explain game rules and mechanics.
Common Threads of Evidence-Based Instruction
Looking back at the evidence summarized in this book, what are some themes that subsume individual chapters? The following are some of my lessons learned:
No Yellow Brick Road
We would all love to have some universal keys that will open the doors to effective learning environments for most situations. Remember that even instructional methods such as graphics, which have yielded very high positive effects, have conditions required to get those benefits. For example, graphics will give the best return when:
• Learners are relatively novice.
• The graphic is relevant to the learning objective.
• The graphic is generated in the simplest form that supports the learning objective.
Every method and strategy we have reviewed has what instructional psychologists call "boundary conditions"—specific guidelines regarding the learners, the instructional goal, and the implementation of that method that optimize its benefits.
Inconsistent or Imprecise Terminology
When you hear or read claims about methods or technology such as games or immersive virtual reality, take a minute to clarify how that particular method is implemented. The profession of instructional psychology lacks consistent definitions, and there are some overgeneralized claims for methods that actually are very broad. For example, there are many types of games that potentially can serve various learning purposes. Ask yourself how the author has implemented the method and the extent to which it will apply to your context.
Lack of Valid Evidence
Often claims are made based on incomplete or invalid evidence. When you read or hear generalizations about the effectiveness of any instructional approach, such as stories or games, take time to investigate the basis for those claims. Are they based on intuition, individual experience, community consensus, or data? If there is data, is it derived from valid experiments that include random assignment of learners as well as control groups?
Less Is Often More
Keep in mind that while humans have great intellectual capabilities, in some ways our abilities to process information are quite limited. Quite a few of the guidelines in this book are offered in the service of minimizing extraneous mental load so that limited mental capacity can be devoted to learning. A recurrent theme through many chapters is: Keep it simple. Whether you are working with visuals, stories, or technology, remember that often less is more.
High Tech Does Not Necessarily Translate Into Better Learning
New technologies are a constant in this information age, and those of us with instructional responsibilities naturally want to leverage them.
For example, one of the hot new technologies of 2019 is immersive virtual reality. However, more than one research evaluation has found that a lower-tech delivery vehicle, such as a slideshow, led to better learning than the latest technology. Parong and Mayer (2018) reported better learning of biology concepts presented via a slideshow than via an immersive virtual reality presentation. No doubt immersive virtual reality will have some impressive educational applications. However, be leery of blanket overstatements of the benefits of new technologies. Ask yourself what advantages a new technology has compared to older technologies.
Balance Motivation and Learning
Most of the evidence I have reviewed in this book focuses on learning. Measures of motivation have, until recently, been ignored. The good news is that most current research studies now include measures of both learning and motivation. Recent research measures motivation with survey rating questions, such as “How much did you enjoy this lesson?” “To what extent did you find the lesson boring?” and “Would you like to take more lessons like this?” Responses to questions like these can be compared between two or more lesson versions. For example, while Parong and Mayer found better learning outcomes from a slide presentation, overall, learners liked the immersive virtual reality version better. Keep in mind that liking and learning are often not correlated. For example, students rated all lessons with visuals higher than lessons without visuals, including lessons whose ineffective visuals depressed learning. A reasonable goal is to create learning environments that are both motivating and promote learning. If two approaches have the same learning effects but one is more motivating than the other, the more motivational lesson wins. A convincing
argument for the use of games is their potential for motivation leading to more time invested in learning. I believe that in the next five years, we will have a much broader base of evidence regarding both learning and motivation to guide our decisions. Meanwhile, you can incorporate measures of motivation into your evaluation metrics along with measures of learning.
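One way to act on this suggestion is to collect both a post-test score and a short motivation rating for each learner in each lesson version and compare them side by side. The sketch below uses only the Python standard library; the scores, ratings, and group labels are hypothetical, not results from any study cited here.

from statistics import mean, stdev

# Hypothetical post-test scores (percent correct) and motivation ratings (1-5 survey scale)
slides = {"learning": [82, 75, 90, 68, 88, 79], "motivation": [3.2, 3.5, 2.9, 3.8, 3.1, 3.4]}
vr = {"learning": [74, 70, 85, 65, 80, 77], "motivation": [4.4, 4.1, 4.6, 3.9, 4.5, 4.2]}

def effect_size(treatment, comparison):
    """Standardized mean difference, using a simple pooled SD (equal group sizes assumed)."""
    pooled_sd = ((stdev(treatment) ** 2 + stdev(comparison) ** 2) / 2) ** 0.5
    return (mean(treatment) - mean(comparison)) / pooled_sd

for measure in ("learning", "motivation"):
    d = effect_size(vr[measure], slides[measure])
    print(f"{measure}: slides = {mean(slides[measure]):.1f}, VR = {mean(vr[measure]):.1f}, d = {d:+.2f}")

A pattern of negative d for learning but positive d for motivation would mirror the slideshow-versus-immersive-virtual-reality comparison described above, and would signal that the more appealing version is not yet earning its keep on learning outcomes.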
Applying Evidence-Based Practice to Your Training
As you have read in the previous chapters, instructional researchers have built an impressive base of evidence-based methods over the past 25 years. It is my hope that you will apply this evidence as one factor to consider in your instructional decisions. Especially give weight to methods that are relatively inexpensive to implement and are based on valid data of learning benefits. Consider a few simple changes to your instruction, such as:
• Write with first- and second-person language.
• Delete extraneous stories, visuals, and verbose text descriptions.
• Break instruction into bite-sized pieces punctuated by structured engagement opportunities.
• Place engagement opportunities throughout the learning environment rather than lumped into a few places.
• Incorporate feedback that focuses the learner’s attention on the task or the task process.
• Add pretraining experiences that help learners to define their knowledge gaps and provide some basic concepts prior to the main learning events.
I am grateful to the many research scientists who have dedicated years to conducting and reporting research on instructional strategies. I hope that my attempts to translate and illustrate their guidelines will be helpful to all instructional professionals. We can all look forward to additional and modified guidelines as evidence continues to evolve.
Appendix
A Synopsis of Evidence-Based Instructional Methods
Communication Modes (Text, Audio, Graphics)
Examples
Engagement
Explanations
Teaching Procedures
Scenario-Based Learning
Games
We’ve looked at quite a bit of instructional research in the past 17 chapters. In this appendix I will summarize many of the most important guidelines. If you are interested in a specific method, review the headings below to get a summary and then refer to the indicated chapters to get the details. Remember that the effectiveness of many instructional methods has qualifiers, such as the background knowledge of the learners or the instructional goal. There are few universal guidelines.
Communication Modes (Text, Audio, Graphics)
All instructional environments from books to simulations rely on some combination of text, audio, and graphics to communicate content and instructional methods. We have quite a bit of research to guide the best use of these modes.

Graphics (chapters 2, 5, 9): Relevant graphics will improve learning of novices compared to text alone. Avoid seductive or unnecessarily complex visuals that distract from the instructional content. Learners prefer materials with graphics of any kind—even graphics that distract. Line graphics and animations are more effective than text for performance support of spatial tasks. Use of color and shapes in graphics may improve motivation without depressing learning.

Stills Versus Animations (chapter 6): Still visuals generally impose less mental load than animated visuals. Still visuals have been shown more effective to teach mechanical and scientific processes; animations have been shown more effective to illustrate procedures and processes that involve dynamic changes in movement.

Animated Visuals (chapter 6): Apply methods to manage cognitive load, including cues to direct attention, segmentation, controls to stop and replay, and explanations given in audio narration.

Text (chapters 7, 9): Place text near a visual when the visual requires an explanation; otherwise omit words in any form. Write lean sentences. Chunk text into small segments, allowing learners to access each chunk at their own pace in asynchronous e-learning.

Audio (chapter 7): Explain complex graphics with audio rather than text to avoid split attention. Keep audio narration brief. See chapter 7 for exceptions.

Audio and Text (chapter 7): Avoid identical narration of written text. Narration that expands on short bulleted text is OK.

Personalize Communication (chapter 8): Use first and second person in your text or audio narration; use on-screen learning agents that serve a relevant purpose, such as directing attention or giving feedback in asynchronous e-learning.
Examples
Humans are uniquely wired to learn by observation. You can save instructional time by providing examples of task completion. Called worked examples by instructional researchers, these demonstrations have been shown to benefit learning of structured tasks such as solving algebraic problems, interpersonal skills such as customer service, and critical thinking skills used in problem solving. Leverage your worked examples by applying the following guidelines:

Use high-fidelity examples to illustrate routine tasks (chapter 10): Worked examples for procedural tasks should mirror the sights and sounds of the workplace and should illustrate tasks from the learner’s visual perspective, that is, an over-the-shoulder view.

Use varied-context examples for strategic tasks (chapter 10): Provide two or more worked examples of tasks that involve problem solving or critical thinking that vary the cover story while illustrating the core guidelines.

Engage learners in your examples (chapter 10): Promote processing of your examples by:
• adding self-explanation questions to your examples
• giving example comparison assignments.

Apply principles on text, audio, and graphics to examples (chapter 10): When designing worked examples, use relevant visuals, and explain them with audio or with text placed close to the visuals.
Engagement
Contrary to popular belief, not all engagement leads to learning. At the same time, productive engagement is essential to learning. Distinguish between behavioral engagement and psychological engagement. In some cases, behavioral engagement leads to mental overload and depresses learning. In other cases, behavioral engagement does not align with the learning goal and thus does not promote the intended learning. However, effective behavioral engagement can maximize learning through feedback.

Some behavioral engagement depresses learning (chapter 4): When behavioral engagement is unrelated to the learning objective, imposes extraneous mental load, or promotes incorrect or incomplete responses, learning is depressed. For example, filling in blank organizers led to less learning than reviewing a completed organizer.

Psychological engagement is essential with or without behavioral engagement (chapter 4): Learning occurs when learners process and practice content and skills relevant to the learning objective. Relevant psychological processing can occur in the absence or presence of behavioral engagement.

Add techniques shown to promote psychological engagement (chapters 2, 4, 5, 6, 7, 8): Learning from explanations is best when you include:
• relevant visuals
• worked examples or demonstrations
• personalization techniques.

Best learning comes from relevant behavioral engagement with feedback (chapters 4, 10, 11, 12): Some proven behavioral engagement techniques include adding questions to explanations and examples, using clickers during lectures, and assigning relevant collaborative exercises.

Optimize behavioral engagement in the form of practice exercises (chapter 11):
• Be sure the practice aligns to the learning objective.
• Assign sufficient practice based on criticality of the task.
• Space practice over and among learning events.
• Mix practice categories when it’s important to distinguish when to apply problem-solving strategies.

Include productive feedback to behavioral responses (chapter 12): Some forms of feedback actually depress learning. Effective feedback:
• provides knowledge of results and explains the answer
• focuses on ways to improve outcomes
• avoids comparing learner outcomes with other learners
• is used by instructors in order to adapt teaching to individual or group needs.
Explanations
Almost all learning environments include explanations. They may be in the form of an instructor lecture, online tutorials, workbooks, or hints provided during a game or simulation.

Add behavioral engagement to explanations (chapter 13): Build in frequent activities during explanations, including questions (consider clicker questions), questions linked to examples (self-explanation questions), collaborative exercises, and drawing assignments.

Promote psychological engagement during explanations (chapter 13): Incorporate relevant graphics and examples, and leverage social presence.

Avoid extraneous cognitive load (chapter 13): Keep explanations concise and focused.
Teaching Procedures
A procedure is a task that is performed more or less the same way each time. Some examples of routine tasks include logging onto a computer, responding to routine customer transactions, or operating equipment in a consistent manner. Infrequent procedures can be guided by performance support to economize time spent training.

Consider performance support to replace or supplement training (chapter 14): For assembly tasks, visual performance aids (still graphics or animations) were more efficient for initial performance than text. After several task iterations, text and graphics were equally effective.

Break large tasks into subtasks, but teach in context of the whole (chapter 14): To manage cognitive load, teach procedures in small chunks of 7-12 steps each. Be sure to illustrate subtasks in context of the whole task.

Teach important concepts prior to the procedure (chapter 14): Prior to demonstrating or practicing steps, teach critical concepts needed for understanding. Show concepts in context of the task.

Provide guided practice with feedback (chapter 14): Depending on the complexity of the procedure, impose guidance in the form of demonstrations, guided practice, and feedback.
Scenario-Based Learning
For tasks that involve critical thinking skills, or that are challenging to build expertise on the job due to safety or other constraints, consider scenario-based learning. Scenario-based learning, also called problem-based or immersive learning, is a preplanned, guided, inductive learning environment designed to accelerate expertise. The learner assumes the role of a worker responding to an authentic job assignment or challenge, and the environment in turn responds to reflect the learner’s choices.
Use experts to identify realistic scenarios and the thinking processes to resolve them (chapter 15): Build real-world scenarios that will be the driver of the learning. Ensure authentic scenarios by deriving them from experts or from archives of work samples such as customer service recordings.

Embed guidelines into domain-specific training (chapter 15): Although critical thinking skills can be supported by generic thinking guidelines, best results were seen when domain-specific guidelines were embedded in courses focusing on job-relevant tasks.

Ensure sufficient guidance (chapter 15): Some guidance techniques include:
• Transition from simple to complex scenarios.
• Constrain learner control through scenario designs with greater structure such as branched scenarios.
• Provide instructional support such as worked examples and knowledge resources.
• Provide consequential (intrinsic) as well as instructional feedback during and at the end of a scenario.
• Provide opportunity for explicit reflection.

To maximize lessons learned, include techniques that encourage learners to review their decisions and actions and identify lessons learned (chapter 15). Some techniques include:
• collaborative debrief
• comparison of learner solution with expert solution
• replay of the scenario
• a learner statement of lessons learned.
Games
Due to the popularity of entertainment games, there has been quite a bit of research on how to leverage games in educational settings. What evidence do we have for the benefits of serious games, that is, games designed for both fun and learning? What techniques can be added to make games more effective as learning devices?
Align game progress to desired learning outcomes (chapter 16): Design a game so that game actions and progress link to the learning goal.

Minimize complexity (chapter 16): Avoid extraneous cognitive load in games by:
• using a simple interface
• avoiding narrative themes unrelated to learning
• providing pretraining on principles underlying the game
• avoiding mental work not directly related to the learning objective.

Promote reflection on lessons learned (chapter 16): Learning from a game may result in tacit knowledge not readily articulated. Encourage conscious awareness of lessons learned through:
• collaborative play
• comparison of responses or results with expert responses or results
• a learner statement of lessons learned.

Add known instructional methods to promote learning from games (chapter 16): Some techniques shown to improve learning from a game include:
• adding job aids such as an explanation of principles
• using audio for explanations and feedback
• including a pregame exercise to familiarize players with game concepts and interface
• giving explanatory feedback
• adding self-explanation questions.
References Abercrombie, S. 2013. “Transfer effects of adding seductive details to case-based instruction.” Contemporary Educational Psychology, 38, 149-157. Abrami, P. C., R. M. Bernard, E. Borokhovski, D. I. Waddington, C. A. Wade, and T. Persson. 2015. “Strategies for Teaching Students to Think Critically: A Meta-Analysis.” Review of Educational Research, 85(2), 275–314. Adams, D.M., R. E. Mayer, A. MacNamara, A. Koenig, and R. Wainess. 2012. “Narrative games for learning: Testing the discovery and narrative hypotheses.” Journal of Educational Psychology, 104, 235-249. Adesope, O.O., and J. C. Nesbit. 2012. “Verbal redundancy in multimedia learning environments: A Meta-Analysis.” Journal of Educational Psychology, 104, 250-263. Adesope, O. O., D. A. Trevisan, and N. Sundararajan. 2017. Rethinking the use of tests: A meta-analysis of practice testing. Review of Educational Research, 87(3), 659-701. Ainsworth, S., and A. T. Loizou. 2003. “The effect of self-explaining when learning with text or diagrams.” Cognitive Science, 27, 669-681. Ainsworth, S., and S. Burcham. 2007. “The impact of text coherence on learning by self-explanation.” Learning and Instruction, 17, 286-303. Alfieri, L., P. J. Brooks, N. J. Aldrich, and H. R. Tenenbaum. 2011. “Does discovery-based instruction enhance learning?” Journal of Educational Psychology, 103, 1-18. Amadieu, F., C. Marine, C. Laimay. 2011. “The attention-guiding effect and cognitive load in the comprehension of animations.” Computers in Human Behavior, 27: 36-40. Anderson, L.S., A. F. Healy, J. A. Kole, and L. E. Bourne. 2013. “The clicker technique: Cultivating efficient teaching and successful learning.” Applied Cognitive Psychology, 27, 222-234. ATD. 2017. 2017 State of the Industry. Alexandria, VA: ATD Press. ASTD. 2014. Playing to Win: Gamification and Serious Games in Organizational Learning. Whitepaper. Alexandria, VA: ATD Press. Atkinson, R.K., A. Renkl, and M. M. Merrill. 2003. “Transitioning from studying examples to solving problems: Effects of self-explanation prompts and fading worked out steps.” Journal of Educational Psychology, 95(4), 774-783. Aydin, S. 2012. A review of research on Facebook as an educational environment. Education Technology Research & Development DOI10,1007/s 11423-012-9260-7. Ayres, P., N. Marcus, C. Chan, and N. Qian. 2009. “Learning hand manipulative tasks: When instructional animations are superior to equivalent static representations.” Computers in Human Behavior, 25, 348-353. Baddeley, A.D. and D. J. A. Longman. 1978. “The influence of length and frequency of training session on the rate of learning to type.” Ergonomics, 21, 627-635. Bernard, R.M., P. C. Abrami, Y. Lou, E. Borokhovski, A. Wade, L. Wozney, P. A. Wallet, M. Fisher, and B. Huang. 2004. “How does distance education compare with classroom instruction? A meta-analysis of the empirical literature.” Review of Educational Research, 74, 379-439. Bird, S. 2010. “Effects of distributed practice on the acquisition of second language English syntax.” Applied Psycholinguistics, 31, 635-650.
Bisra, K., Q. Liu, J. C. Nesbit, F. Salimi, and P. H. Winne. 2018. “Inducting self-explanation: a meta-analysis.” Educational Psychology Review, 30, 703-725. Bligh, D.A. 2000. What’s the Use of Lectures? San Francisco: Jossey-Bass. Boucheix, J.M., R. K. Lowe, D. K. Putri, and J. Groff. 2013. “Cueing animations: Dynamic signaling aids information extraction and comprehension.” Learning and Instruction, 25, 71-84. Brewer, N., S. Harey, and C. Semmler. 2004. “Improving comprehension of jury instructions with audio-visual presentation.” Applied Cognitive Psychology, 18, 765-776. Butcher, K.R. 2006. “Learning from text with diagrams. Promoting mental model development and inference generation.” Journal of Educational Psychology, 98, 182-197. Canham, M.S., J. Wiley, R. E. Mayer. 2012. “When diversity in training improves dyadic problem solving.” Applied Cognitive Psychology, 26, 421-430. Carpenter, S.K., N. J. Cepeda, D. Rohrer, S. H. K. Kang, and H. Pashler. 2012. “Using spacing to enhance diverse forms of learning: Review of recent research and implications for instruction.” Educational Psychology Review, 24, 369-378. Chen, J., M. Wang, P. A. Kirschner, and C. C. Tsai. 2018. „The role of collaboration, computer use, learning environments and supporting strategies in CSCL: A meta-analysis.” Review of Educational Research, 88, 799-843. Chi, M. T. H., M. Bassok, M. W. Lewis, P. Reimann, and R. Glaser. 1989. “Self-explanation: How students study and use examples in learning to solve problems.” Cognitive Science, 5, 145-182. Chi, M.T.H. 2000. “Self-explaining expository texts: The dual processes of generating inferences and repairing mental models.” In R. Glaser (Ed.), Advances in instructional psychology: Educational design and cognitive science. Mahwah, NJ: Lawrence Erlbaum Associates. Cho, K., and C. MacArthur. 2011. Learning by Reviewing. Journal of Educational Psychology, 103,73-84. Clark, D.B., E. E. Tanner-Smith, and S. S. Killingsworth. 2016. “Digital games, design, and learning: A systematic review and meta-analysis.” Review of Educational Research, 86, 79-122. Clark, R.C. 2011. Scenario-Based E-Learning. San Francisco: Pfeiffer. Clark, R.C. 2008. Developing Technical Training. San Francisco: Pfeiffer. Clark, R.C. and A. Kwinn. 2007. The New Virtual Classroom. San Francisco: Pfeiffer. Clark, R.C. and C. Lyons. 2011. Graphics for Learning 2nd Edition. San Francisco: Pfeiffer. Clark, R.C. and R. E. Mayer. 2008. “Learning by viewing versus learning by doing: Evidence-based guidelines for principled learning environments.” Performance Improvement, 47, 5-13. Clark, R. and R. E. Mayer. 2016. E-Learning and the Science of Instruction 4th Ed. San Francisco: CA: Pfeiffer. Clark, R.C. and F. Nugyen. 2019. “Chapter 19. Digital games for workforce learning and performance.” In J.L. Plass, Homer, B. and Mayer, R (Eds). Handbook of Game-based Learning. MIT Press. Clark, R.E. and D. F. Feldon. 2014. “Six common but mistaken principles of multimedia learning.” In R.E. Mayer (Ed.). Cambridge Handbook of Multimedia Learning, Second Edition. Boston, MA: Cambridge Press Connolly, T.M., E. A. Boyle, E. MacArthus, T. Hainey, J. M. Boyle. 2012. “A systematic literature review of empirical evidence on computer games and serious games.” Computers & Education, 59,661-686. Cook, D.A., W. G. Thompson, K. G. Thomas, M. R. Thomas. 2009. 
“Lack of interaction between sensing-intuitive learning styles and problem-first versus information-first instruction: a randomized crossover trial.” Advances in Health Science Education, 14, 70-90.
Corbalan, G., F. Paas, and H. Cuypers. 2010. “Computer-based feedback in linear algebra: Effects on transfer performance and motivation.” Computers & Education, 95, 692-703. Cowan, N. 2014. “Working memory underpins cognitive development, learning, and education.” Educational Psychological Review, 26, 197-223. Cromley, J.G., B. W. Bergey, S. Fitzhugh, N. Newcombe, T. W. Wills, T. F. Shipley, J. C. Tanaka. 2013. “Effects of three diagram instruction methods on transfer of diagram comprehension skills: The critical role of inference while learning.” Learning & Instruction, 26, 45-58. Dankbaar, M.E.W., J. Alsma, E. E. H. Jansen, J. J. G. van Merrienboer, J. L. C. M. van Saase, S. C. E. Schult. 2016. “An experimental study on the effects of a simulation game on students’ clinical cognitive skills and motivation.” Advances in Health Science Education, 21, 505-521. De Koning, B.B., H. K. Tabbers, R. M. J. P. Rikers, and F. Paas. 2007. “Attention cueing as a means to enhance learning from an animation.” Applied Cognitive Psychology, 21, 731-746. De Koning, B.B., H. K. Tabbers, R. M. J. P. Rikers, and F. Paas. 2011. “Improved effectiveness of cueing by self-explanations when learning from a complex animation.” Applied Cognitive Psychology, 25, 183-194. De La Paz, S. and M. K. Felton. 2010. “Reading and writing from multiple source documents in history: Effects of strategy instruction with low to average high school writers.” Contemporary Educational Psychology, 35, 174-192. DeLeeuw, K.E. and R. E. Mayer. 2011. “Cognitive consequences of making computer-based learning activities more game-like.” Computers in Human Behavior, 27,2011-2016. DeLozier, S.J. and M. G. Rhodes. 2017. “Flipped Classrooms: A Review of Key Ideas and recommendations for practice.” Educational Psychology Review, 29, 141-151. Dunlosky, J., K. A. Rawon, E. J. Marsh, M. J. Nathan, and D. T. Willingham. 2013. “Improving students’ learning with effective learning techniques: Promising directions from cognitive and educational psychology.” Psychological Science in the Public Interest, 14, 4-58. Dunlosky, J., and K. A. Rawson. 2012. “Overconfidence produces underachievement: Inaccurate self evaluations undermine students’ learning and retention.” Learning and Instruction, 22, 271-280. Endres, T., and A. Renkl. 2015. “Mechanisms behind the testing effect: An empirical investigation of retrieval practice in meaningful learning.” Frontiers in Psychology, 6, 1054. Ericcson, K.A. 2006. “The influence of experience and deliberate practice on the development of superior expert performance.” In K.A. Ericsson, N. Charness, P.J. Feltovich, & R.R. Hoffman (Eds.) The Cambridge Handbook of Expertise and Expert Performance. New York: Cambridge University Press. ESA, 2019, “Essential facts about computer and video game industry.” https://www.theesa.com/ wp-content/uploads/2019/05/ESA_Essential_facts_2019_final.pdf Evans, C. 2013. “Making sense of assessment feedback in higher education.” Review of Educational Research, 83, 70-120. Eysink. T.H.S., T. de Jong, K. Berthold, B. Kolloffel, M. Opfermann, and P. Wouters. 2009. “Learner performance in multimedia learning arrangements: An analysis across instructional approaches.” American Educational Research Journal, 46, 1107-1149 Feldon, D. F., G. Callan, S. Juth, and S. Jeong. 2019. “Cognitive load as motivational cost.” Educational Psychology Review, 1-19. Ferdinand, A. O., A. Aftab, and M. A. Akinlotan. 2019. 
“Texting-While-Driving Bans and Motor Vehicle Crash–Related Emergency Department Visits in 16 US States: 2007–2014.” American Journal of Public Health, 109(5), 748-754.
Fiorella, L., and S. Kuhlmann. 2019. “Creating Drawings Enhances Learning by Teaching.” Journal of Educational Psychology, August 15. Advance online publication. http://dx.doi.org/10.1037/ edu0000392 Fiorella, L., and R. E. Mayer. 2012. “Paper-based aids for learning with a computer game.” Journal of Educational Psychology, 104, 1074-1082. Fiorella, L., & Mayer, R.E. (2015). Eight ways to promote generative learning. Educational Psychology Review, 28, 717-741. Fiorella, L. and R. E. Mayer. 2014. “Role of expectations and explanations in learning by teaching.” Contemporary Educational Psychology, 39, 75-85. Fiorella, L., T. van Gog, V. Hoogerheide, and R. E. Mayer. 2017. “It’s all a matter of perspective: Viewing first-person video modeling examples promotes learning of an assembly task.” Journal of Educational Psychology, 109(5), 653. Fiorella, L. and Q. Zhang. 2018. “Drawing boundary conditions for learning by drawing.” Educational Psychology Review, 30: 1115-1137. Fisher, P., J. Kubitzki, S. Guter, and D. Frey. 2007. „Virtual driving and risk taking: Do racing games increase risk-taking cognitions, affect, and behaviors?” Journal of Experimental Psychology, Applied. 13, 22-31. Fong, C.J., E. A. Patall, A. C. Vasquez, and S. Stautberg. 2019. “A meta-analysis of negative feedback on intrinsic motivation.” Educational Psychology Review 31: 121-162. Fortier, S. 2018. “Are pro athletes playing too much Fortnite? Some teams are worried.” Washington Post, July 20. Fulton, L.V., L. V. Ivanitskaya, N. D. Bastian, D. A. Erofeev, and F. A. Mendez. 2013. “Frequent deadlines: Evaluating the effect of learner control on healthcare executives’ performance in online learning.” Learning and Instruction, 23, 24-32. Gadgil, S., T. J.Nokes-Malach, and M. T. H. Chi. 2012. “Effectiveness of holistic mental model confrontation in driving conceptual change.” Learning and Instruction, 22,47-61. Garland, T.B and C. A. Sanchez. 2013. “Rotational perspective and learning procedural tasks from dynamic media.” Computers and Education, 31-37. Gartmeier, M., J. Bauer, M. R. Fischer, T. Hoppe-Seyle, G. Karsten, C. Kiessling, G. E. Moller, A. Wiesbeck, and M. Prenzel. 2015. “Fostering professional communication skills of future physicians and teachers: Effects of e-learning and video cases and role play.” Instructional Science, 43, 443-462. Gentner, D., J. Lowewenstein, and L. Thompson. 2003. “Learning and transfer: A general role for analogical encoding.” Journal of Educational Psychology, 95, 393-408. Ginns, P., A. J. Martin, H. W. Marsh. 2013. “Designing instructional text in a conversational style: a meta-analysis.” Educational Psychology Review, 25: 445-472. Ginns, P. 2005. “Integrating information: A Meta-analysis of spatial contiguity and temporal contiguity effects.” Learning and Instruction, 16, 511-525. Glogger-Frey, I., C. Fleischer, L. Gruny, J. Kappich, and A. Renkl. 2015. “Inventing a solution and studying a worked solution prepare differently for learning from direct instruction.” Learning and Instruction, 310, 72-82. Golke, S., R. Hagen, and J. Wittwer. 2019. „Lost in narrative? The effect of informative narratives on text comprehension and metacomprehension accuracy.” Learning and Instruction, 69, 1-19. Haidet, P., R. O. Morgan, K. O’Malley, B. J. Moran, and B. F. Richards. 2004. “A controlled trial of active versus passive learning strategies in a large group setting.” Advances in Health Sciences Education, 9, 15-27.
Harp, S.F. and R. E. Mayer. 1998. “How seductive details do their damage: A theory of cognitive interest in science learning.” Journal of Educational Psychology, 90,414-434. Hatala, R.M., L. R. Brooks, and G. R. Normal. 2003. “Practice makes perfect: The critical role of mixed practice in the acquisition of ECG interpretation skills.” Advances in Health Sciences Education, 8, 17-26. Hattie, J.A.C. 2009. Visible learning: A synthesis of over 800 meta-analyses relating to achievement. London: Routledge. Hattie, J. and M. Gan. 2011. “Instruction based on feedback.” In R.E. Mayer and P.A. Alexander (Eds.) Handbook of research on learning and instruction. New York: Rutledge. Hattie, J., J. Gan, and C. Brooks. 2017. “Instruction based on Feedback.” In Mayer, R.E. & Alexander, P.A. (Eds). Handbook of Research on Learning and Instruction New York: Routledge. Hattie. J. and G. Yates. 2014. Visible Learning and the Science of How we Learn. London: Routledge. Hayes, R.E. 2005. “The effectiveness of instructional games: A literature review and Discussion.” Technical Report 2005-004. Naval Air Warfare Center Training Systems Division, Orlando, FL Hegarty, M., H. S. Smallman, and A. T. Stull. 2012. “Choosing and Using Geospatial Displays. Effects of Design on Performance and Metacognition.” Journal of Experimental Psychology: Applied, 18, 1-17. Heidig, S. and G. Clarebout. 2011. “Do pedagogical agents make a difference to student motivation and learning?” Educational Research Review, 6, 27-54. Hoogerheide, V., S. M. M. Loyens, and T. van Gog. 2014. “Effects of creating video-based modeling examples on learning and transfer.” Learning and Instruction, 33, 108-119. Holsanova, J., N. Holmberg, Holmqvist. 2009. “Reading information graphics: The role of spatial contiguity and dual attentional guidance.” Applied Cognitive Psychology, 23, 1215-1226. Hoogerheide, V., L. Deijkers, S. M. M. Loyens, A. Heijltjes, and T. van Gog. 2016. “Gaining from explaining: Learning improves from explaining to fictitious others on video, not from writing to them.” Contemporary Educational Psychology, 44-45, 95-106. Hoogerheide, V., L. Fiorella, A. Renkl, F. Paas, and T. van Gog. 2018. “Enhancing example-based Learning: Teaching on video increases arousal and improves problem-solving performance.” Journal of Educational Psychology, 111, 45-56. Hew, K.F. and W. S. Cheung. 2013. “Use of Web 2.0 technologies in K-12 and higher education: The search for evidence-based practice.” Educational Research Review, 9, 47-64. Ingraham, C. 2019. “Dogs VS Cats.” Article reported in Washington Post. Jaeger, A.J., M. N. Velazquez, A. Dawdanow, and T. F. Shiple. 2018. “Sketching and Summarizing to Reduce Memory for Seductive Details in Science Text.” Journal of Educational Psychology, 110, 899-916. Jairam, D., K. A. Kiewra, D. G. Kauffman, and R. Zhao. 2012. “How to study a matrix.” Contemporary Educational Psychology, 37, 128-135. Jones, R., M. Panda, and N. Desbiens. 2008. “Internal medicine residents do not accurately assess their medical knowledge.” Advances in health sciences education, 13(4), 463-468. Johnson, C. I. and R. E. Mayer. 2010. “Adding the self-explanation principle to multimedia learning in a computer-based game-like environment.” Computers in Human Behavior, 26, 12461252. Johnson, C.I. and R. E. Mayer. 2012. “An eye movement analysis of the spatial contiguity effect in multimedia learning.” Journal of Experimental Psychology: Applied, 18, 178-191. Jones, R., M. Panda, and N. Desbiens. 2008. 
“Internal medicine residents do not accurately assess their medical knowledge.” Advances in Health Science Education, 13, 463-468,
Kamin, C.S., P.S. O’Sullivan, R. Deterding, and M. Younger. 2003. “A comparison of critical thinking in groups of third-year medical students in text, video, and virtual PBL case modalities.” Academic Medicine, 78(2): 204-11. Kellogg, R.E. and A. P. Whiteford, A.P. 2009. “Training advanced writing skills: The case for deliberate practice.” Educational Psychologist, 44, 250-266. Kalyuga, S. 2012. “Instructional benefits of spoken words: A review of cognitive load factors.” Educational Research Review, 7, 145-159. Kalyuga, S., P. Chandler, J. Tuovinen, and J. Sweller. 2001. “When problem solving is superior to studying worked examples.” Journal of Educational Psychology, 93, 579-588. Kant, J.M., K. Scheiter, K. Oschatz. 2017. „How to sequence video modeling examples and inquiry tasks to foster scientific reasoning.” Learning and Instruction, 52, 46-58. Kauffman, D. F., R. Zhao, Y. Yang. 2011. “Effects of online note taking formats and self-monitoring prompts on learning from online text: Using technology to enhance self-regulated learning.” Contemporary Educational Psychology, 36, 313-322. Kim, N.J., B. R. Belland, and A. E. Walker. 2018. “Effectiveness of computer-based scaffolding in the context of problem-based learning for STEM education: Bayesian meta-analysis.” Educational Psychology Review, 30, 397-429. Kirschner, P.A. 2017. “Stop propagating the learning styles myth.” Computers & Education, 106, 166-171. Kirschner, F., F. Paas, P. A. Kirschner, and J. Janssen. 2011a. „Differential effects of problem-solving demands on individual and collaborative learning outcomes.” Learning and Instruction, 21, 587-599. Kirschner, F., F. Paas, and P. A. Kirschner. 2011b. “Task complexity as a driver for collaboratie learning efficiency: The collective working memory effect.” Applied Cognitive Psychology, 25, 615-624. Kornell, N., and R. A. Bjork. 2008. “Learning Concepts and Categories Is Spacing the ‘Enemy of Induction’?” Psychological Science, 19(6), 585-592. Kluger, A. N., and A. DeNisi. 1996. “The effects of feedback interventions on performance: a historical review, a meta-analysis, and a preliminary feedback intervention theory.” Psychological bulletin, 119(2), 254. Kuhl, T., S. D. Navratil, and S. Munzer. 2018. “Animations and static pictures: The influence of prompting and time of testing.” Learning and Instruction, 58, 201-209. Kratzig, G.P. and K. D. Arbuthnott. 2006. “Perceptual learning style and learning proficiency: A test of the hypothesis.” Journal of Educational Psychology, 98, 238-246. Landers, R.N., and M. B. Armstrong. 2017. “Enhancing instructional outcomes with gamification: An empirical test of the Technology-Enhanced Training Effectiveness Model.” Computers in Human Behavior, 71, 499-507. Lane, H. C., M. J. Hays, M. G. Core, and D. Auerbach. 2013. “Learning intercultural communication skills with virtual humans: Feedback and fidelity.” Journal of Educational Psychology, 105, 1026-1035. Lazonder, A.W. and R. Harmsen. 2016. “Meta-analysis of inquiry-based learning: Effects of guidance.” Review of Educational Research, 86, 681-718. Lazonder, A.W., M. G. Hagemans, and T. de Jong. 2010. ”Offering and discovering domain information in simulation-based inquiry learning.” Learning and Instruction, 20, 511-520. Lee, D.Y and D. H. Shin. 2012. “An empirical evaluation of multi-media based learning of a procedural task.” Computers in Human Behavior, 28, 1073-1081.
LeFevre, J. A. and P. Dixon. 1986. “Do written instructions need examples?” Cognition and Instruction, 3, 1-30. Lehman, S., G. Schraw, M. T. McCruddent, and K. Kartler. 2007. “Processing and recall of seductive details in scientific text.” Contemporary Educational Psychology, 32, 569-587. Leopold, C., E. Sumfleth, and D. Leutner, D. 2013. “Learning with summaries: Effects of representation mode and type of learning activity on comprehension and transfer.” Learning and Instruction, 27, 40-49. Li, W., Wang, F., Liu, H. & Mayer, R.E. (2019). Getting the point: Which kinds of gestures by pedagogical agents improve multimedia learning? Journal of Educational Psychology, 111, 1382-1395. Likourezos, V., S. Kalyuga. 2017. “Instruction-first and problem-solving first approaches: Alternative pathways to learning complex tasks. Instructional Science.” Instructional Science, 45, 195-219. Likourezos, V., S. Kalyuga, and J. Sweller. 2019. “The variability effect: When instructional variability is advantageous.” Educational Psychology Review. 31, 479-497. Linek, S.B., P. Gerjets, and K. Scheiter. 2010. “The speaker/gender effect: Does the speaker’s gender matter when presenting auditory text in multimedia messages?” Instructional Science, 38: 503-521. Liu, H., M. Lai, and H. Chuang. 2011. “Using eye-tracking technology to investigate the redundant effect of multimedia web pages on viewers’ cognitive processes.” Computers in Human Behavior, 27, Loibl, K. and N. Rummel. 2014. “Knowing what you don’t know makes failure productive.” Learning and Instruction, 34, 74-85 Loibl, K., I. Roll, and H. Rummel. 2017. “Towards a theory of when and how problem solving followed by instruction supports learning.” Educational Psychology Review, 29, 693-715. Lowe, R., W. Schnotz, and T. Rasch. 2011. „Aligning affordances of graphics with learning task requirements.” Applied Cognitive Psychology, 25, 452-459. Lundeberg, M.A., H. Kang, B. Wolter, R. delMas, N. Armstrong, B. Borsari, N. Boury, P. Brickman, K. Hannam, C. Heinz, T. Horvath, M. Knabb, T. Platt, N. Rice, B. Rogers, J. Sharp, E. Ribbens, K. S. Maier, M. Dechryver, R. Hagley, T. Goulet, and C. F. Herreid. 2011. “Context matters: Increasing understanding with interactive Clicker Case studies.” Education Technology Research and Development, 59, 645-671. Makransky, G., T. S. Terkildsen, and R. E. Mayer. 2019. “Role of subjective and objective measures of cognitive processing during learning in explaining the spatial contiguity effect.” Learning and Instruction, 51, 23-34. Marsh, E. J., and H. E. Sink. 2010. “Access to handouts of presentation slides during lecture: Consequences for learning.” Applied Cognitive Psychology 24. 691-706. Mason, L., R. Lowe, and M. C. Tornatora. 2013. “Self-generated drawings for supporting comprehension of a complex animation.” Contemporary Educational Psychology, 38, 211-224. McCrudden, M.T., and D. N. Rapp. 2017. “How visual displays affect cognitive processing.” Educational Psychology Review, 29, 623-639. Marcus, N., B. Cleary, A. Wong, and P. Ayres. 2013. “Should hand actions be observed when learning hand motor skills from instructional animations?” Computers in Human Behavior, 2172-2178. Mayer, R. E. 2019a. “Computer Games in Education.” Annual Review of Psychology, 70, 531-549. Mayer, R. E. 2019b. “Cognitive foundations of game-based learning.” In Plass, J.L., Mayer, R.E. & Homer, B.D., Eds. Handbook of Game-Based Learning, MIT Press, Cambridge MA
Mayer, R. E. 2017. “Instructional based on Visualizations.” In Mayer, R.E. and Alexander P.A. (2017). Handbook of Research on Learning and Instruction, New York, Routledge. Mayer, R. E. 2014a. The Cambridge Handbook of Multimedia Learning, 2nd Edition. New York: NY, Cambridge University Press. Mayer, R. E. 2014b. Computer Games for Learning. MIT Press Cambridge, MA Mayer, R. E. 2014a. “Principles based on social cues in multimedia earning: Personalization, Voice, Embodiment, and Image Principles.” In The Cambridge Handbook of Multimedia Learning – 2nd Edition. R.E. Mayer,(Ed) New York: Cambridge University Press. Mayer. R. E. 2014a. “Cognitive theory of multimedia learning.” In Mayer, R.E. (Ed). The Cambridge Handbook of Multimedia Learning: 2nd Edition. New York: Cambridge University Press Mayer, R. E. 2014c. “Incorporating motivation into multimedia learning.” Learning & Instruction, 29, 171-173. Mayer, R. E. 2011. “Multimedia learning and Games.” In S. Tobias & D. Fletcher (Eds.). Can Computer Games be Used for Instruction? Greenwich, CT: Information Age Publisher Mayer, R. E. 2009. Multimedia Learning (2nd Ed.). New York: Cambridge University Press. Mayer, R. E. 2001. Multimedia Learning. New York: Cambridge University Press. Mayer, R. E., A. Bove, A. Bryman, R. Mars, and L. Tapangco. 1996. “When less is more: Meaningful learning from visual and verbal summaries of science textbook lessons.” Journal of Educational Psychology, 88, 64-73. Mayer, R. E. and P. Chandler. 2001. “When learning is just a click away: Does simple user interaction foster deeper understanding of multimedia messages?” Journal of Educational Psychology, 93, 390-397. Mayer, R. E. and C. S. DaPra. 2012. “An embodiment effect in computer-based learning with animated pedagogical agents.” Journal of Experimental Psychology: Applied, 18, 239-252 Mayer, R. E. and L. Fiorella. 2014. “Principles for Reducing Extraneous Processing in Multimedia Learning: Coherence, Signaling, Redundancy, Spatial Contiguity, and Temporal Contiguity Principles.” In R. E. Mayer, Ed. Cambridge Handbook of Multimedia Learning, 2nd Edition, New York: Cambridge University Press. Mayer, R. E. and J. K. Gallini. 1990. “When is an illustration worth ten thousand words?” Journal of Educational Psychology, 88, 715. . Mayer, R. E., M. Hegarty, S. Mayer, and J. Campbell. 2005. “When static media promote active learning: Annotated illustrations versus narrated animations in multimedia learning.” Journal of Experimental Psychology: Applied, 11, 256-265. Mayer, R. E. and J. Jackson. 2005. “The case for coherence in scientific explanations: Quantitative details can hurt qualitative understanding.” Journal of Experimental Psychology – Applied,11, 13-18. Mayer, R. E. and C. I. Johnson. 2010. “Adding instructional features that promote learning in a game-like environment.” Journal of Educational Computing Research, 42, 241-265. Mayer, R. E. and C. I. Johnson. 2008. “Revising the redundancy principle in multimedia learning.” Journal of Educational Psychology, 100, 380-386. Mayer, R. E., A. Mathias, and K. Wetzell. 2002. “Fostering understanding of multimedia messages through pretraining: Evidence for a two-stage theory of mental model construction.” Journal of Experimental Psychology: Applied, 8, 147-154. Mayer, R. E. and R. Moreno. 1998. “A split-attention effect in multimedia learning: Evidence for dual processing systems in working memory.” Journal of Educational Psychology, 90, 312-320.
Mayer, R. E., V. Sims, H. Tajika. 1995. “A comparison of how textbooks teach mathematical problem solving in Japan and the United States.” American Educational Research Journal, 32, 443-460. Mayer, R. E., A. Stull, K. DeLeeuw, K. Almeroth, B. Bimber, D. Chun, M. Bulger, J. Campbell, A. Knight, H. Zhang. 2009. “Clickers in college classrooms: Fostering learning with questioning methods in large lecture classes.” Contemporary Educational Psychology, 34, 51-57 McCrudden, M.T., G. Schraw, S. Lehran, and A. Poliquin. 2007. “The effect of causal diagrams on text learning.” Contemporary Educational Psychology, 32, 367-388. McDaniel, M.A., K. M. Wildman, and J. L. Anderson. 2012. “Using quizzes to enhance summative assessment performance in a web-based class: An experimental study.” Journal of Applied Research in Memory and Cognition, I, 18-26. Neivelstein, F., T. van Gog, G. V. Dijck, and H. P. A. Boshuizen. 2013. “The worked example and expertise reversal effect in less structured tasks: Learning to reason about legal cases.” Contemporary Educational Psychology, 118-125. Nesbit, J.C. and O. O. Adesope. 2006. “Learning with concept and knowledge maps: A meta-analysis.” Review of Educational Research, 76, 413-448. Moos, D.C., and R. Azevedo. 2008. “Self-regulated learning with hypermedia: The role of prior domain knowledge.” Contemporary Educational Psychology, 33, 270-298. Moreno, R. 2004. “Decreasing cognitive load for novice students: Effects of explanatory versus corrective feedback in discovery-based multimedia.” Instructional Science, 32, 99-113. Moreno, R. 2006. “Does the modality principle hold for different media? A test of the methods-affects-learning hypothesis.” Journal of Computer Assisted Learning, 33, 149-158. Moreno, R. 2007. “Optimizing learning from animations by minimizing cognitive load: Cognitive and affective consequences of signaling and segmentation methods.” Applied Cognitive Psychology, 21, 765-781. Moreno, R. and R. E. Mayer. 2000. “A coherence effect in multimedia learning: The case for minimizing irrelevant sounds in the design of multimedia messages.” Journal of Educational Psychology, 92, 117-125. Moreno, R. and R. E. Mayer. 2002. “Learning science in virtual reality multimedia environments: Role of methods and media.” Journal of Educational Psychology, 94, 598-610. Moreno, R. and R. E. Mayer. 2004. “Personalized messages that promote science learning in virtual environments.” Journal of Educational Psychology, 96, 165-173. Moreno, R. and R. E. Mayer. 2005. “Role of guidance, relfection, and interactivity in an agentbased multimedia game.” Jounral of Educational Psychology, 97, 117-128. Moreno, R., and L. Ortegano-Layne. 2008. “Do classroom exemplars promote the application of principles in teach education? A comparison of videos, animations, and narratives.” Educational Technology Research & Development, 56, 449-465. Ncube, L.B. 2010. “A simulation of learning manufacturing: The Lean Lemonade Tycoon 2.” Simulation & Gaming, 41, 568-586. Nievelstein, F., T. Van Gog, G. Van Dijck, and H. P. A. Boshuizen. 2013. “Instructional support for novice law students; Reducing search processes and explaining concepts in cases.” Applied Cognitive Psychology, 25, 408-413. Nokes-Malach, T.J., J. E. Richey, and S. Gadgil. 2015. “When is it better to learn together? Insights from research on collaborative learning.” Educational Psychology Review, 28, 645-656/ Noroozi, O., A. Weinberger, H. J. A. Biemans, M. Mulder, M. Chizari. 2012. 
“Argumentation-based computer supported collaborative learning (ABCSCL): A synthesis of 15 years of research.” Educational Research Review, 7, 79-106.
Nussbaum, E.M. 2008. “Using argumentation Vee Diagrmas (AVDs) for Promoting Argument-Counterargument Integration in Reflective Writing.” Journal of Educational Psychology, 100, 549-565. Nuthall, G.A. 2007. The Hidden Iives of Learners. Wellington, New Zealand: New Zealand Council for Educational Research. Pai, H.H., D. A. Sears, and Y. Maeda. 2014. “Effects of small-group learning on transfer: A Meta-analysis.” Educational Psychology Review 27, 79-102. Paik, E.S. and G. Schraw. 2013. “Learning with animation and illusions of understanding.” Journal of Educational Psychology, 105:278-289. Parong, J. and R. E. Mayer. 2018. “Learning science in immersive virtual reality.” Journal of Educational Psychology, 110, 785-797. Pashler, H., M. McDaniel, D. Rohrer, and R. Bjork. 2008. „Learning styles concepts and evidence.” Psychological Science in the Public Interest, 9, 105-119. Patchan, M.M., C. D. Schunn, and R. J. Correnti. 2016. “The nature of feedback: How peer feedback features affect students’ implementation rate and quality of revisions.” Journal of Educational Psychology, 108, 1098-1120. Peterson, S.E. 1992. “The cognitive functions of underlining as a study technique.” Reading Research and Instruction, 31, 49-56. Pilegard, C., and R. E. Mayer. 2016. “Improving academic learning from computer-based narrative games.” Contemporary Educational Psychology, 44-45. 12-20. Plass, J. L., B. D. Homer, and C. K. Kinzer. 2015. “Foundations of Game-Based Learning.” Educational Psychologist, 50:4, 258-28. Plass, J.L., P. A. O’Keefe, B. C. Homer, J. Case, E. O. Hayward, M. Stein, and K. Perline. 2013. “The Impact of individual, competitive, and collaborative mathematics game play on learning, performance, and motivation.” Journal of Educational Psychology, 105, 1050-1066. Primack, B.A., M. V. Carroll, M. McNamare, M. L. Klem, B. King, M. Rich, C. W. Chan, C.W. and S. Nayak. 2012. “Role of video games in improving health-related outcomes: A systematic review.” American Journal of Preventive Medicine, 42, 630-638. Quilici, J.L., and R. E. Mayer. 1996. “Role of examples in how students learn to categorize statistics word problems.” Journal of Educational Psychology, 88, 144-161. Renkl, A. 2017. “Instruction based on examples.” In R.E. Mayer and P.A. Alexander (Eds.) Handbook of Research on Learning and Instruction New York: Routledge. Renkl, A. 2014. “The Worked Out Examples Principles in Multimedia Learning.” In R.E. Mayer (Eds.) The Cambridge Handbook of Multimedia Learning. New York: Cambridge University Press. Renkl, A., and K. Scheiter. 2017. “Studying visual displays: How to instructionally support learning.” Educational Psychology Review, 29(3), 599-621. Rey, G.D. and A. Fischer. 2013. “The expertise reversal effect concerning instructional explanations.” Instructional Science, 41, 407-429. Rey, G.D. and N. Steib. 2013. “The personalization effect in multimedia learning: The influence of dialect.” Computers in Human Behavior, 29, 2022-2028. Richey, J.E., and T. J. Nokes-Malach. 2013. “How much is too much? Learning and motivation effects of adding instructional explanations to worked examples.” Learning and Instruction, 25, 104-124,. Riener, C. and D. Willingham. 2010. “The myth of learning styles.” Change,42, 33-35. Roelle, J., K. Berthold, and A. Renkl. Online 2013. “Two instructional aids to optimize processing and learning from instructional explanations.” Instructional Science, p 1-22
Roelle, J., and K. Berthold. 2013. “The expertise reversal effect in prompting focused processing of instructional explanations.” Instructional Science, 41, 635-656. Roelle, J., and M. Nückles. 2019. “Generative learning versus retrieval practice in learning from text: The cohesion and elaboration of the text matters.” Journal of Educational Psychology. Rohrer, D., R. F. Dedrick, S. Stershic. 2015. “Interleaved practice improves mathematics learning.” Journal of Educational Psychology, 107, 900-908. Rohrer, D. 2015. “Student instruction should be distributed over long time periods.” Educational Psychology Review, 27:635-643. Rohrer, D. 2012. “Interleaving helps students distinguish among similar concepts.” Educational Psychology Review, 24, 355-367. Rohrer, D., R. F. Dedrick, M. K. Hartwig, and C. N. Cheung. 2019. “A randomized controlled trial of interleaved mathematics practice.: Journal of Educational Psychology. Advance online publication. https://doi.org/10.1037/edu0000367 Rohrer, E. and K. Taylor. 2007. “The shuffling of mathematics problems improves learning.” Instructional Science, 35, 481-498. Rohrer, D. and K. Taylor. 2006. “The effects of overlearning and distributed practice on the retention of mathematics knowledge.” Applied Cognitive Psychology, 20, 1209-1224. Rop, G., M. van Wermeskerken, J. A. de Nooijer, P. P. J. L. Verkoeijen, T. van Gog. 2018. “Task experience as a boundary condition for the negative effects of irrelevant information on learning.” Educational Psychology Review, 30, 229-253. Roseth, C.J., A. J. Saltarelli, and C. R. Glass. 2011. ”Effects of face-to-face and computer-mediated constructive controversy on social interdependence, motivation, and achievement.” Journal of Educational Psychology, 103, 804-820. Sackett, D. L., W. M. Rosenberg, J. A. Gray, R. B. Haynes, and W. S. Richardson. 1996. “Evidence based medicine: What it is and what is isn’t.” British Medical Journal, 312, 71-72. Sampayo-Vargas, S., et al. 2013. “The effectiveness of adaptive difficulty adjustments on students’ motivation and learning in an educational computer game.” Computers & Education 69: 452-462. Schalk, L., R. Schumacher, A. Barth, and E. Stern. 2018. „When problem-solving followed by instruction is superior to the traditional tell-and-practice sequence.” Journal of Educational Psychology, 110, 596-610. Scheiter, K., P. Gerjets, T. Huk, B. Imhof, and Y. Kammerer. 2009. „The effects of realism in learning with dynamic visualizations.” Learning and Instruction, 19(6), 481-494. Schneider, S., J. Dyrna, L. Meier, M. Beege, and G. D. Rey. 2018. “How affective charge and text-picture connectedness moderate the impact of decorative pictures on multimedia learning.” Journal of Educational Psychology, 110, 233-249. Schmidgall, S.P., A. Eitel, K. Scheiter. 2019. „Why do learners who draw perform well? Investigating the role of visualization, generation and externalization in learner-generated drawing.” Learning and Instruction, 60, 138-153. Schnackenberg, H.L. and H. J. Sullivan. 2000. “Learner control over full and learn computer based instruction under differing ability levels.” Educational Technology Research and Development, 48, 19-35. Schroeder, N.L. and A. T. Cenkci. 2018. “Spatial contiguity and spatial split-attention effects in multimedia learning environments: A Meta-analysis.” Educational Psychology Review, 30: 679-701.
Schuler, A., K. Scheiter, and P. Gerjets. 2013. „Is spoken test always better? Investigating the modality and redundancy effect with longer text presentation.” Computers in Human Behavior, 1500-1601. Schwartz, D.L. and J. D. Bransford. 1998. “A time for telling.” Cognition and Instruction, 16, 475-522. Schweppe, J., A. Eitel, and R. Rummer. 2015. “The multimedia effect and its stability over time.” Learning and Instruction, 38, 24-33. Schworm, S. and A. Renkl. 2007. “Learning argumentation skills through the use of prompts for self-explaining examples.” Journal of Educational Psychology, 99, 285-296. Schwamborn, A., R. E. Mayer, H. Thillmann, C. Leopold, and D. Leutner. 2010. “Drawing as a generative activity and drawing as a prognostic activity.” Journal of Educational Psychology, 102(4), 872- 879.. Sears, D.A. and J. M. Reagin. 2013. “Individual versus collaborative problem solving: Divergent outcomes depending on task complexity.” Instructional Science, 41 (6), 1153-1172. Shapiro, A.M. and L. T. Gordon. 2012. “A controlled study of clicker-assisted memory enhancement in college classrooms.” Applied Cognitive Psychology, 26, 635-643. Sitzmann, T. 2011. “A meta-analytic examination of the instructional effectiveness of computer-based simulation games.” Personnel Psychology, 64, 489-528. Sitzmann, T., K. G. Brown, W. J. Casper, K. Ely, R. D. Zimmerman. 2008. “A review and meta-analysis of the nomological network of trainee reactions.” Journal of Applied Psychology, 93, 280-295. Spanjers, I.A.E, T. van Gog, P. Wouters, J. J. G. van Merrienboer. 2013. “Explaining the segmentation effect in learning from animations: The role of pausing and temporal cueing.” Computers and Education, 59, 274-280. Spanjers, I.A.E., P. Wouters, T. van Gog, and J. J. G. van Merrienboer. 2011. “An expertise reversal effect of segmentation in learning from animated worked-out examples.” Computers in Human Behavior, 27, 46-52. Son, L.K. and D. A. Simon. 2012. “Distributed learning: Data, metacognition, and educational implications.” Educational Psychology Review, 24, 379-399. Strayer, D.L., D. Crouch, and F. A. Drews. 2006. “A comparison of the cell-phone driver and the drunk driver.” Human Factors, 46, 640-649. Stull, A., and R. E. Mayer. 2007. “Learning by doing versus learning by viewing: Three experimental comparison fo learner-generated versus author-generated graphic organizers.” Journal of Educational Psychology, 99, 808-820. Sung, Y.T., J. M. Yang, H. Y. Lee. 2018. “The effects of mobile-computer supported collaborative learning: Meta-analysis and critical synthesis.” Review of Educational Research, 88, 868-805. Sung, E., and R. E. Mayer. 2012a. “Five facets of social presence in online distance education.” Computers in Human Behavior, 28, 1738-1747. Sung, E., and R. E. Mayer. 2012b. “When graphics improve liking but not learning from online lessons.” Computers in Human Behavior, 28, 1618-1625. Sweller, J. 2005. “Implications of cognitive load theory for multimedia learning.” In R.E. Mayer (Ed.), The Cambridge Handbook of Multimedia Learning. New York: Cambridge University Press. Sweller, J., P. Chandler, P. Tierney, and M. Cooper. 1990. “Cognitive load as a factor in the structuring of technical material.” Journal of Experimental Psychology: General, 119(2), 176–192. Sweller, J. and G. A. Cooper. 1985. “The use of worked examples as a substitute for problem solving in learning algebra.” Cognition and Instruction, 2, 59-89.
Sweller, J., J. J. van Merriënboer, and F. Paas. 2019. "Cognitive architecture and instructional design: 20 years later." Educational Psychology Review, 1-32.
Taylor, K., and D. Rohrer. 2010. "The effects of interleaved practice." Applied Cognitive Psychology, 24, 837-848.
Um, E. R., J. L. Plass, E. O. Hayward, and B. D. Homer. 2012. "Emotional design in multimedia learning." Journal of Educational Psychology, 104, 485-498.
U.S. Department of Education. 2002. Strategic plan for 2002-2007. Cited in Burkhardt & Schoenfeld (2003).
U.S. Department of Education. 2010. Office of Planning, Evaluation, and Policy Development, Evaluation of Evidence-based Practices in Online Learning: A Meta-analysis and Review of Online Learning Studies. Washington, D.C.
Van der Kleij, F. M., R. C. Feskens, and T. J. Eggen. 2015. "Effects of feedback in a computer-based learning environment on students' learning outcomes: A meta-analysis." Review of Educational Research, 85(4), 475-511.
Van der Land, S., A. P. Schouten, F. Feldberg, B. van den Hooff, and M. Huysman. 2013. "Lost in space? Cognitive fit and cognitive load in 3D virtual environments." Computers in Human Behavior, 29, 1054-1064.
Van Gog, T., and K. Scheiter. 2010. "Eye tracking as a tool to study and enhance multimedia learning." Learning and Instruction, 20, 95-99.
Van Gog, T., and J. Sweller. 2015. "Not new, but nearly forgotten: The testing effect decreases or even disappears as the complexity of learning materials increases." Educational Psychology Review, 27, 247-264.
Van Harsel, M., V. Hoogerheide, P. Verkoeijen, and T. van Gog. 2019. "Effects of different sequences of examples and problems on motivation and learning." Contemporary Educational Psychology, 58, 260-275.
Van Meter, P., and J. Garner. 2005. "The promise and practice of learner-generated drawing: Literature review and synthesis." Educational Psychology Review, 17, 285-325.
Wang, F., W. Li, R. E. Mayer, and H. Liu. 2018. "Animated pedagogical agents as aids in multimedia learning: Effects on eye-fixations during learning and learning outcomes." Journal of Educational Psychology, 110, 250-268.
Wang, N., W. L. Johnson, R. E. Mayer, P. Rizzo, E. Shaw, and H. Collins. 2008. "The politeness effect: Pedagogical agents and learning outcomes." International Journal of Human Computer Studies, 66, 98-112.
Watson, G., J. Butterfield, R. Curran, and C. Craig. 2010. "Do dynamic work instructions provide an advantage over static instructions in a small scale assembly task?" Learning and Instruction, 20, 84-93.
Wijnia, L., S. M. M. Loyens, E. Derous, and H. G. Schmidt. 2015. "How important are student-selected versus instructor-selected literature resources for students' learning and motivation in problem-based learning?" Instructional Science, 43, 39-58.
Wittwer, J., and A. Renkl. 2008. "Why instructional explanations often do not work: A framework for understanding the effectiveness of instructional explanations." Educational Psychologist, 43, 49-64.
Weaver, J. P., R. Y. Chastain, D. A. DeCaro, and M. S. DeCaro. 2018. "Reverse the routine: Problem solving before instruction improves conceptual knowledge in undergraduate physics." Contemporary Educational Psychology, 52, 36-47.
Wenz, H. J., M. Zupanic, K. Klosa, B. Schneider, and G. Karsten. 2014. "Using an audience response system to improve learning success in practical skills training courses in dental studies—a randomised, controlled cross-over study." European Journal of Dental Education, 18, 147.
Wittwer, J., and A. Renkl. 2010. "How effective are instructional explanations in example-based learning? A meta-analytic review." Educational Psychology Review, 22, 393-409.
Wouters, P., C. van Nimwegen, H. van Oostendorp, and E. D. van der Spek. 2013. "A meta-analysis of the cognitive and motivational effects of serious games." Journal of Educational Psychology, 105, 249-265.
Wouters, P., and H. van Oostendorp. 2013. "A meta-analytic review of the role of instructional support in game-based learning." Computers & Education, 60, 412-425.
Xie, H., R. E. Mayer, F. Wang, and Z. Zhou. 2019. "Coordinating visual and auditory cueing in multimedia learning." Journal of Educational Psychology, 111, 235-255.
Young, M., S. Slota, A. B. Cutter, G. Jalette, G. Mullin, B. Lai, Z. Simeoni, M. Tran, and M. Yukhymenko. 2012. "Our princess is in another castle: A review of trends in serious gaming for education." Review of Educational Research, 82, 61-89.
Yue, C. L., E. L. Bjork, and R. A. Bjork. 2013. "Reducing verbal redundancy in multimedia learning: An undesired desirable difficulty?" Journal of Educational Psychology, 105, 266-277.
Yoon, T. E., and J. R. George. 2013. "Why aren't organizations adopting virtual worlds?" Computers in Human Behavior, 29, 772-790.
Zacharia, Z. C., C. Manoli, N. Xenofontos, et al. 2015. "Identifying potential types of guidance for supporting student inquiry when using virtual and remote labs in science: A literature review." Educational Technology Research and Development, 63, 257.
Zhu, C., and D. Urhahne. 2018. "The use of learner response systems in the classroom enhances teachers' judgment accuracy." Learning and Instruction, 58, 255-262.
About the Author Ruth Colvin Clark has focused her career on bridging the gap between academic research and practitioner application in instructional methods. A specialist in instructional design and workforce learning, she holds a doctorate in instructional psychology and served as training manager for Southern California Edison before founding her own company, Clark Training & Consulting. Clark was president of the International Society for Performance Improvement and received the Thomas Gilbert Distinguished Professional Achievement Award in 2006. She was selected as an ASTD Legend Speaker at the 2007 International Conference & Exposition. Her recent books include Scenario-Based E-Learning and E-Learning and the Science of Instruction co-authored with Richard Mayer. She resides in southwest Colorado and Phoenix, Arizona, and divides her time among speaking, teaching, and writing.
Index
A
Abercrombie, S., 187 Abrami, P. C., 319, 320 Abridged text, audio narration with, 147–148 Academic evidence, 26–28 Accents, 173 Accuracy, 109, 124, 263 Achievement, feedback and, 249 Action games, 346 Active learning, 70–87. See also Engagement drawing activities for, 83–84 effectiveness of, 71–73 and engagement grid, 73–84 questioning strategies for, 76–79 self-explanation strategy for, 79–81 spacing effect in, 240 teach-back strategy for, 81–83 training myths about, 16–17 underlining and highlighting as strategies for, 75–76 Active processing, germane cognitive load and, 65 Active processor, working memory as, 56–58 Adams, D. M., 349–350 Adaptive games, 344–345 Adesope, O. O., 84, 145, 235 Adjuncts, 182 Ainsworth, A. T., 111 Ainsworth, S., 80, 111 Amadieu, F., 130 Anecdotes, 103, 184, 185, 188. See also Stories
Animated online agents, 169–171 Animations, 118–134 cues for, 96 guidelines on, 384 illustrating dynamic change with, 120 in immersive virtual reality, 125–128 managing cognitive load with, 128–133 for novice vs. experienced learners, 99–100 as performance support, 121–122 of procedural steps, 305–306 scenario-based, 334 and spatial ability of learners, 125 still visuals vs., 118–122, 124, 125, 197–198, 384 teaching procedures with, 119–120, 123–125 topic- and goal-specific, 119–122 Apparent learning, 236 Application practice assignments, 233–235 Approachability, 286 Arbuthnott, K. D., 7 Armstrong, M. B., 353 Assembly tasks, 311 Assignment deadlines, 243 Attention cueing to draw, 95–96, 129–130 online agents to direct, 170–171 and personalization, 162–163 to words and visuals, 53, 55 Audio and text explanations, 139, 141, 154
audio-only narration vs., 145 cognitive load with, 148–150 conditions for using, 146–148 guidelines on, 385 Audio control, 146 Audio narration cognitive load with, 148–150 conditions for using, 143–144 in direct instructional designs, 371 for examples, 385 for explanations, 140, 141, 143–145, 154 in games, 356–357 guidelines on, 385 instructions via, 99–100 maximizing benefits of, 144–145 by online agents, 169, 172 Audio-visual instructions, 99–100 Auditory center, of brain, 148, 191, 357 Auditory cues, for graphics, 95–96 Auditory learners, 7, 142, 150 Authentic instruction, 319 Automated skills, 237, 239 Automaticity, 237 drill and practice to build, 309–310 in learning, 61–62 over-learning for, 239 Availability, instructor, 161–162 Awareness of comprehension, 187–188 Ayres, P. N., 120
B
Behavioral engagement in engagement grid, 73–74 and explanations, 387 guidelines on, 386–387 underlining and highlighting for, 75–76 Behavioral processes, psychological and, 17 Behavioral response, in practice, 232 Behaviorism, 296 Berthold, K., 78
Bird, S., 241 Bisra, K., 279 Bjork, R. A., 245–246 Blocked practice, 244–246, 309 Body humors, 5 Boucheix, J. M., 129–130 Boundary conditions, 378 Brain activity measures, 40–41 Brain processes animations and, 119 decorative visuals and, 103–104 examples and, 209–210 explanations and, 148–150 integrated text/visuals and, 152–153 personalization and, 162–163 practice and, 235–237 stories and, 188–189 visuals and, 95–98 Branched scenarios, 328–329 Bransford, J. D., 246–247 Brevity of audio narration, 145 of explanations, 192–194, 287, 291 Brewer, N. S., 99–100 Build time performance, with animations vs. still visuals, 121–122 Burcham, S., 80 Butcher, K. R., 96–98
C
Cache 17 (game), 70–74, 349–350, 352, 360 Capacity limits, working memory, 56–58, 119 Captions, 153 Casual games, 346 Categorization, in tables, 112–113 CBT (computer-based training), 9, 20. See also E-learning Ceiling effects, 42–43 Cell phones, 4 Cenkci, A. T., 152
Chandler, P., 194 Checklists, for peer feedback, 263 Chi, M. T. H., 80, 216 Chunking. See also Segmentation for explanations, 150, 194, 287, 291 in teaching procedures, 313 Circuit Game, 354–356, 358 Clarebout, G., 168–169 Clark, R. C., 348, 349, 351, 354 Classroom(s) audio narration in, 144 clickers in, 277–278 flipped, 288, 290 lectures in, 78–79 social presence in, 161, 286 virtual, 277–278, 303 Clicker questions, 78–79, 277–278, 290 Closed captioning, 147 Cognitive load with animations, 128–133 with collaboration, 174 content-specific graphics and, 109 examples and, 208 with explanations, 148–150, 387 extraneous, 63–64 with games, 347, 360 germane, 65, 235, 236, 347 with immersive virtual reality, 128, 197 with integrated text and visuals, 152–153 intrinsic, 62–63 and learning, 62–65 optimizing, 65 in scenario-based learning, 328, 335 and segmentation, 223 and self-explanatory visuals, 142 Cognitive load theory, 62 Cognitive overload, 58, 335 Coherence effect, 188, 234–235, 351 Collaboration in game-playing, 354, 362
in scenario-based learning, 330 and social presence, 173–175 Color cueing, 129–130 Communication, 158, 384–385. See also specific modes Comparisons among examples, 219–221, 223–224, 283–284 practice exercises using, 246–249 of solution approaches, 284, 332 Competition, 354, 362 Completion examples, 283 Complexity of games, 390 of problems, 328 of tasks, 124, 175, 239, 240 of visuals, 98, 106–107, 145, 195–196, 351 Comprehension, 186–188 Computer-based training (CBT), 9, 20. See also E-learning Concentration games, 347 Concept maps, 84, 94, 323 Conceptual understanding, 109, 119, 312 Constraints in games, 345–346 process, 323, 328–329, 377 Constructive criticism, 260–261 Content coverage, 182, 183 Content-general graphics, 112–113 Content-specific critical thinking skills, 319 Content-specific graphics, 107–111 Context-specific critical thinking skills, 318 Contiguity principle, 221–224 Contrast highlighting, 130, 145 Control, learner, 127, 171–172, 274, 336 Conversational language, 163–165, 173, 286, 356, 357 Cooper, G. A., 208 Cooperation, 158
Coping models and strategies, 223–224, 261–262 Correct, as feedback response, 258–259, 375 Corrective feedback, 255 Correlational studies, 35–36 “Couch potato” mindset, 119 Counter examples, 284–285 Course ratings, 12–13, 20 and motivation, 54–55 and social presence, 36, 161–162 and use of visuals, 36 Criterion-referenced feedback, 260, 263 Critical thinking. See also Scenario-based learning and problem-first approach to instruction, 319–320 in strategic tasks, 212–213 training to improve, 318–319 Criticism, constructive, 260–261 Cues for animations, 129–130, 132 audio narration with, 145 auditory, 95–96 for graphics, 95–96 social, 158–163 visual, 95–96, 129–130, 132 Culture, social presence and, 165, 168 Curated resources, 330–331
D
DaPra, C. S., 169–171 Data-based guidelines, 25–26 Deadlines, assignment, 243 Debriefing activities, 248, 337 Decision making, learner, 14–16, 20–21 Decision-making tasks, IVR for, 128 Decorative graphics, 100–106, 113 conditions for using, 104–106 described, 100–101 harming learning with, 102–104 De Koning, B. B., 130, 132 De La Paz, S., 282
Delayed feedback, 255, 256, 262–263 Delayed learning duration of practice and, 238 interleaved vs. blocked practice and, 244–245 problem-first sequences and, 289–290 spacing effect and, 241–242 Deliberate practice, 232–233 DeLozier, S. J., 277, 288 Demonstrations. See also Worked examples of procedural steps, 305–306 in scenario-based learning, 329 DeNisi, A., 254 Descriptions, explanations for, 141 Design-a-Plant (game), 259, 357 Desirable difficulties, 289–290 Desired outcome, of scenario, 327 Diagrams, 98, 111, 122, 314 Dialogue, 319 Digital games. See Games Digital media, 11 Direct instructional designs, 368–376 behaviorism and, 296 explanations in, 274, 275 games vs., 377 methods, 370–376 problem-based designs vs., 368–370 teaching procedures with, 297–310 Directive lessons drill and practice in, 309–310 feedback with, 310 job aids for, 307–308 lesson introduction, 298–299 practice in, 308–310 procedural steps, 305–306 response to errors in, 331–332 supporting topics, 300–304 Disabilities, learners with, 146 Discrimination response, 246 Discussion boards, social presence on, 167
Distance learning, social presence in, 167–168 Distraction, 104, 188 Distributed (spaced) practice, 240–242, 250 Domain-specific training, 119–122, 389 Drawing assignments for active learning, 83–84 with animations, 131–132 learning from, 107–110, 114 Drill and practice exercises, 309–310, 347 Dual channels, of working memory, 56–58 Dunlosky, J., 15, 75–76 Duolingo, 353 Dynamic change, illustrating, 120
E
Ebbinghaus, Hermann, 240 EEG (electroencephalography), 40 Effect sizes, 37–38, 368 E-learning. See also Computer-based training (CBT); Virtual classrooms audio and text explanations in, 146–147 audio narration in, 144 improving outcomes from, 15–16 learner decisions in, 14 length of explanations in, 287 online agents in, 168–173 personalization in, 163 questions in examples for, 218 scenario-based, 334 separated text and visuals in, 153 social cues in, 158–160 visuals in, 92 Electroencephalography (EEG), 40 Emotional context, goals with, 128 Emotional design, 189–190 Engagement, 21. See also Active learning; Psychological engagement with animations, 119, 131–133
behavioral, 73–76, 386–387 and cognitive load, 223 with content-general graphics, 112–113 with examples, 215–216, 219, 385 examples to improve, 215–216 with explanations, 274–275, 283–285 in games, 360–361 guidelines on improving, 386–387 with immersive virtual reality, 132 learning affected by, 72 and less is more approach, 198 necessity of, 85 and practice, 230 social, 354, 362 training myths about, 16–17 Engagement grid, 73–84 English, personalized materials in, 164 Errors in scenario-based learning, 322, 331–332 with serious consequences, 239, 240, 324–325 Evidence invalid, 379 practitioner, 26–28 Evidence-based practice, 24–48 academic vs. practitioner evidence in, 26–28 common threads in, 378–381 graphics in, 28–41 for instructional professionals, 25–26 limitations in, 41–43 measuring learning, 30–31 in medicine, 4–5 recent research on, 43–46 Examples, 206–227 and brain processes, 209–210 completion, 283 counter, 284–285 engagement with, 215–216, 283–285
in explanations, 282–283 formatting, 221–224 guidelines on, 385 high- vs. low-variable, 213–215 incorrect, 224, 290 invention tasks vs., 224–225 power of, 206 for routine tasks, 210, 211 and self-explanations, 217–221 for strategic tasks, 210, 212–213 of supporting topics for procedures, 300 Exercises, separating text and visuals for, 153 Exercise-tracking devices, 254 Expectance-Value, 54 Experienced learners collaboration for, 174 graphics for, 32–33, 99–100 high- vs. low-variable examples for, 214–215 learner decision making by, 15 and long-term memory, 58–60 memory capacity of, 66 questions in explanations for, 279–280 Experimental research, 29 Expert comparison, 284, 332 Expertise, 60–61, 323–324 Expert models, 377 Explanations (generally), 274–292 brevity of, 287, 291 in direct instructional designs, 371–372 engagement with, 283–285 examples and models in, 282–285 guidance from, 377 guidelines on, 387 learning from, 76–77 leveraging social presence in, 285–286 principled, 141, 275–276 questions in, 276–281
sequencing of problem solving and, 289–290, 312–314, 319–320 of supporting topics for procedures, 300 timing of, 288–290 visuals in, 281 Explanations of visuals, 138–156 audio narration for, 143–145 brain processes and, 148–150 formats for, 141 importance of, 274–275 in less is more approach, 192–194 placement of, 64, 150–153 power of, 138 and self-explanatory visuals, 141–143 still visuals vs. animations in, 119 text-only, 150 text with audio for, 146–148 worked examples vs., 212–213 Explanatory feedback, 255, 266 benefits of providing, 258–259 in games, 355–356 on procedures, 310 Exploratory learning. See Scenario-based learning Extraneous cognitive load, 63–64 Eye-tracking data, 39–40
F
Facebook, 158, 176 Face-to-face collaboration, 175 Feedback, 254–268. See also Explanatory feedback best types of, 257–260 in coping models, 224 corrective, 255 criterion-referenced, 260, 263 defined, 256–257 delayed, 255, 256, 262–263 in direct instructional designs, 375–376
in games, 343, 355–356 guidance from, 377 in guided instruction, 323 guidelines on, 386, 387 immediate, 255, 256, 262–263, 310 in-person, 255, 261 for instructional professionals, 265 instructive, 331 intrinsic, 331 mediated, 255, 256, 261 negative, 255, 260–262, 266 peer, 263–265 positive, 255, 256, 261 during practice, 249 on procedures, 310, 388 in scenario-based learning, 331, 332, 338 on self-explanation, 218 timing of, 262–263 “Feed forward,” 258 Feldon, M. K., 282 Fiorella, L., 82, 109–110 Fiorella, L. T., 84, 123–124 First-person language, 164, 357 First-person perspective, in animations, 123–124 Fitbit, 254, 256 Flemish, personalized materials in, 164 Flipped classrooms, 288, 291 Flounder factor, 322, 336 Fong, C. J., 261, 262 Formal language, 161–165 Formatting, for examples, 221–224 Fortnite, 342 Fulton, L. V., 243
G
Gadgil, S., 247, 284, 332 Game-playing sessions, multiple, 350, 359 Games, 342–364 in active learning, 70–72 described, 343–346
designing, for learning, 346–347 drill and practice exercises in, 310 effective, 354–358 engagement and learning promoted by, 85 guidelines on, 389–390 learning, 342, 361–362 maximizing learning value of, 377–378 motivating learners with, 54, 352–354 personalization with, 164, 165 promoting learning with, 348–352 training myths about, 17–19, 21 types of, 346 for workforce learning, 359–361 Garner, J., 109 Gartmeier, M., 334 Gender, of online agents, 171–172 Generative learning activities, 84, 234–235 Generative processing, 65 Generic critical thinking skills, 318–319 Gentner, D., 219, 223 George, J. R., 126 German, personalized materials in, 164 Germane cognitive load, 65, 235, 236, 347 Gestures, by online agents, 170–171 Ginns, P., 143, 164 Glogger-Frey, I., 225 Goals with emotional context, 128 in games, 344–345 instructional, 107, 119–122, 141 Golke, S., 187–188 Graphic fidelity of media in games, 350, 351, 359 in scenario-based learning, 332–335 Graphics. See also Visuals content-general, 112–113 content-specific, 107–111
decorative, 100–106 effectiveness of, 92–93, 113 in evidence-based practice, 28–41 in examples, 385 and expertise, 60 guidelines on, 384 higher course rating scores with, 13 maximizing learning with, 378 and transformation of content, 83–84 types of, 33–34, 93–95 Group play, in games, 354 Guided instruction (guidance) for drawing assignments, 109–110 negative feedback with, 261 in problem-based learning designs, 376–377 on procedures, 388 in scenario-based learning, 322–323, 328–331, 389
H
Hand-held mobile devices, 11 Handouts, 312 Hands, animations including, 124–125 Harmsen, R., 322–323 Harvey, William, 4–5 Hattie, J., 39, 257 Hattie, J. A. C., 39, 243, 249, 254, 265, 368 Hegarty, M., 14 Heidig, S., 168–169 Higher-level cognitive tests, 44 Higher level learning, timing of feedback for, 262 Highlighting, 75–76, 130, 145 High-spatial ability learners, 125 High-variable examples, 213–215 Hoogerheide, V., 82 Hudson River, emergency landing on, 230 Humors, of body, 5 Hybrid learning designs, 370
I
Identity, social, 167 Immediate feedback, 255, 256, 262–263, 310 Immediate learning, 42 and duration of practice, 238 interleaved vs. blocked practice and, 244–245 visuals influence on, 98–99 Immersive learning. See Scenario-based learning Immersive virtual reality (IVR), 380 animations in, 125–128 course rating scores for lessons with, 13 engagement with, 132 in less is more approach, 197 over-inflated training using, 182 Incorrect, as feedback response, 258–259, 375 Incorrect examples, 224, 290 Inductive approach in scenario-based learning, 322 Inferential comments, 96–98 In-person feedback, 255, 261 Instructional context, as research limitation, 42 Instructional goals, 107, 119–122, 141 Instructional professionals evidence-based practice for, 25–26 feedback for, 265 Instructional psychology, 9, 378–379 Instructional techniques and cognitive load, 63–64, 66 and engagement grid, 74–84 learner feedback/modification of, 265 and learning domains, 43 learning process and, 55–56 Instructive design, 322 Instructive feedback, 331 Instructor-led training, 20 Instructor role, in scenario-based learning, 336–337
Integration with prior knowledge, 53 of text and visuals, 53, 55–56, 151–153, 221–224 visuals to aid, 96–98 Interaction, with game, 343 Interest adding, in less is more approach, 183, 189–190 and learning, 186, 199 Interface, game, 350, 351, 359 Interleaving effect, 244–246, 309, 375 Intimacy, 167 Intrinsic cognitive load, 62–63 Intrinsic feedback, 331 Invention learning, 289–290 Invention tasks, 224–225, 313, 314 IVR. See Immersive virtual reality
J
Jackson, J., 193 Jaeger, A. J., 187 Job aids, 121, 313. See also Performance supports over-learning and acceptability of, 239, 240 for teaching procedures, 307–308 Job context, 236, 298 Johnson, C. I., 355–356, 358
K
Kauffman, D. F., 112–113 Kim, N. J., 323 Kirschner, F., 174 Kirschner, P. A., 8 Kluger, A. N., 254 Knowledge resources, 330–331 Kornell, N., 245–246 Kratzig, G. P., 7 Kuhl, T., 120 Kuhlmann, S., 82
L
Landers, R. N., 353 Lane, H. C., 333 Language learning games, 346–347 Layout, for text and visuals, 153, 154 Lazonder, A. W., 322–323 Learner-generated graphics, 114, 131–132. See also Drawing assignments Learners characteristics of, 42 control for, 127, 171–172, 274, 336 decision making by, 14–16, 20–21 individual differences among, 45, 47 Learner satisfaction, instructor availability and, 161–162 Learning, 52–67. See also Active learning; E-learning; Scenario-based learning apparent, 236 automaticity in, 61–62 from classroom lectures, 78–79 and cognitive load, 62–65 collaborative vs. solo, 173–175 decorative visuals and, 102–104 delayed, 238, 241–242, 244–245, 289–290 designing games for, 346–347 distance, 167–168 from drawing assignments, 107–110 efficiency of, in scenario-based environments, 336 and expertise, 60–61 feedback’s effects on, 254–255 games to promote, 348–352 with graphics, 28–30, 46–47 immediate, 42, 98–99, 238, 244–245 in immersive virtual reality, 125–128
with integrated text and visuals, 151–152 interest and, 199 invention, 289–290 investments in, 6 liking and, 12–14, 380–381 long-term, 98–99 long-term memory and, 58–60 measuring, 30–31, 42 motivation and, 45, 54–55, 380–381 music’s effects on, 191–192 online agents for, 168–173 over-learning, 238–240 practice and, 76–77, 249 with self-explaining visuals, 110–111 social cues’ effect on, 162–163 still visuals vs. animations for, 118–119 stories’ effect on, 185–189 from textual explanations, 77–78 working memory and, 56–58 Learning domains, instructional methods and, 43 Learning games, 342, 361–362 “Learning host” mindset, 165–166, 286 Learning objectives, 298, 299, 321–322, 360–361 Learning process, 53–56, 170–171 Learning styles, 6–8, 20, 142, 150 Lee, D. Y., 124 Lego, 142 Lehman, S., 186–188 Leopold, C., 17 Less is more approach, 182–200, 379 adding interest in, 189–190 explanations in, 192–194 identifying “too much,” 184 music and, 190–192 and reasons for over-inflating training, 182–183 stories in, 185–189
visuals in, 195–198 Lesson introduction, 298–299 Lesson overview, 298, 299 Lesson practice, 247–248 Lessons, length of, 41 Li, W., 171 Liking, learning and, 12–14, 380–381 Likourezos, V., 214–215 Line drawings, 94, 98, 106–107, 114 Linek, S. B., 171–172 LinkedIn, 158 Literacy, verbal and visual, 92 Liu, H., 148 Localization, of feedback, 264 Loibl, K., 290, 312–313, 320 Long-term learning, visuals for, 98–99 Long-term memory, 53, 58–60. See also Integration Lower level learning, feedback for, 262 Low-spatial ability learners, 125 Low-variable examples, 213–215
M
Ma, Yo-Yo, 231 Makransky, G., 40, 152, 197 Manual procedures, animations of, 124–125 Marcus, N., 124 Marsh, E. J., 312 Mason, L., 131–132 Massed practice, 240–242 Mayer, R. E., 13, 24, 36, 38, 54, 72, 78, 84, 101–103, 126–129, 132, 144, 146, 151–153, 165, 166, 169–171, 173, 185–186, 191–194, 198, 213–214, 259, 277, 287, 303, 346, 351, 352, 354–358, 368, 370, 380 McDaniel, M. A., 76 Media graphic fidelity of, 332–335, 350, 351, 359 for scenario-based learning, 332–335
spaced practice using, 242–243 training myths about use of, 9–11 Mediated feedback, 255, 256, 261 Medicine, evidence-based practice in, 4–5, 25 Memory retrieval hooks, 236–237 Memory support aids, 312 Memory tests, 44 Mensa, 231 Mentorship, 319 Meta-analysis, as synthetic research, 37–39 Mistakes. See Errors MIT Handbook for Game-Based Learning, 342 Mitosis, 33–34 Mixed (interleaved) practice, 244–246, 309, 375 Mnemonics, 300 Modality principle, 221 Models, in explanations, 282–283 Moreno, R., 143, 259, 334, 335 Motivation with adaptive games, 344–345 feedback and, 256, 257, 260–262 with games, 344–345, 352–354 with immersive virtual reality, 126–127 learning and, 45, 54–55, 380–381 and online agents, 169–170 with problem-based learning designs, 369 with scenario-based learning, 318, 324 Multimedia lessons explanations in, 194 learner feedback on, 265 scenario-based, 333–335, 337 time compression in, 324 Music, 184, 190–192
N
Narratives, in games, 350, 351, 359, 362. See also Stories
National Opinion Research Center, 35 Native language, audio narration in, 145 Negative feedback, 255, 260–262, 266 Negative feelings, visuals to elicit, 104–106 Nesbit, J. C., 84, 145 Neutral feedback, 255, 256 Newspaper layouts, 39–40 Nievelstein, F. T., 212, 330 Normative feedback, 255–257, 259–261, 375 Norway, social presence in, 168 Note taking, 311–312 Novice learners audio narration for, 145 collaboration for, 174 decorative visuals for, 104 graphics for, 32, 95, 99–100, 107 high- vs. low-variable examples for, 214–215 questions in explanations for, 279–280 segmentation of animations for, 129 No Yellow Brick Road, 18–19, 113, 378 Nuckles, M., 234–235 Nuthall, G. A., 263
O
Oculus Rift, 126 Online agents, 168–173, 286 learner selection of, 171–172 realism of, 169–170 support for learning processes by, 170–171 voice quality of, 172–173 Online collaboration, 175 Online games, 342 Online support tools, 175 On screen instruction, 95, 147–148 Openmindedness, 167 Organization, semantic visuals and, 96 Orientation, for games, 356–358
Ortegano-Lane, L., 334, 335 Outlines, 112–113 Overconfidence, 15, 52, 66 Over-inflated training, 182–184 Over-learning, 238–240 Overload, cognitive, 58, 335 Over-the-shoulder perspective, in animations, 124, 306
P
Pacing, as learner decision, 15 Paik, E. S., 119 Parong, J., 13, 54, 126–128, 132, 380 Partial demonstrations, 329 Pashler, H. M., 8 Patchan, M. M., 263–264 Peer feedback, 263–265 Performance supports. See also Job aids animations vs. still visuals as, 121–122 for procedures, 298, 310–311, 313, 314, 388 Personable instructors, course ratings for, 13 Personable training, 160–161 Personalization. See also Social presence and brain processes, 162–163 conversational language for, 164–165 in direct instructional designs, 371–372 in games, 356, 357, 362 in mediated settings, 286 in narration, 385 Pew Research Center, 286 Photographs, 94, 106–107, 114 Pictorial graphics, 94 Pilegard, C., 352 Pilot testing, 240 Plass, J. L., 354 Politeness, 165 Position papers, 112 Positive feedback, 255, 256, 261
Positive feelings, 104–106, 189 Power law of practice and skill, 237, 238 PowerPoint presentations, 126–127, 197, 353 Practice, 230–251 and brain processes, 235–237 with comparison exercises, 246–249 defined, 232–233 feedback during, 249 with games, 352, 359 power of, 231 for procedures, 308–310, 313, 388 quantity of, 237–240 recall vs. application, 233–235 in scenario-based learning, 323–324 sequencing of, 244–246 spacing of, 240–244 with supporting topics, 300, 302–303 using examples vs., 207–209 Practice assignments in direct instructional designs, 374–375 learning from, 76–77 Practice sessions, duration of, 237–238 Practitioner evidence, 26–28 Praise, 257, 264 Presentations explanations in, 147–148, 274, 275 games vs., 349–350, 353 PowerPoint, 126–127, 197, 353 principled, 276 psychological activity during, 280–281 visual organizers for, 112 Pretraining assignments in direct instructional designs, 370–371 for games, 356–358 games in, 352
Prework comparison practice exercises as, 246–247 examples vs. invention tasks in, 224–225 in flipped classrooms, 288 for learning procedures, 388 Principled explanations, 141, 275–276 Prior knowledge, 18, 32, 53, 78, 299 Problem-based learning designs, 368–370, 376–377. See also Scenario-based learning Problem data, for scenario-based learning, 327 Problem-first approach to instruction critical thinking and, 319–320 learning outcomes from, 289–290 for procedures, 312–314 in scenario-based learning, 325–326 Problem solving. See also Practice; Strategic tasks collaborative, 174 content-specific graphics for, 109 examples to build, 282 games for, 353 prework activities requiring, 224–225 in scenario-based learning, 322 sequencing of explanations and, 289–290, 312–314, 319–320 skill modeling for, 282–283 and working memory, 209–210 Procedural steps, 305–306 Procedures, 296–315 animations of, 119–120, 123–125, 198, 199 defined, 296–297 directive lessons on, 298–299 explanation format for, 141 feedback on, 310 guidelines on teaching, 388 job aids for, 307–308
performance support for, 310–311 practice for, 308–310 reference support for, 311–312 scenarios for teaching, 325 sequencing of problem solving and explanations for, 312–313 supporting topics for, 300–304 Process constraints, 323, 328–329, 377 Processes, teaching, 141 Productive failure, 289–290 Progress comments, 261, 310 Psychological engagement with clicker questions, 277 in engagement grid, 73–74 with examples, 216 and explanations, 387 with games, 343 guidelines on, 386 in presentations, 280–281 in teach-back assignments, 82 with underlining and highlighting, 75–76 Psychological processes, 17, 46
Q
Questioning strategies, 76–79 Questions clicker, 78–79, 277–278, 290 in direct instructional designs, 372–373 examples with, 217–218 in explanations, 276–281 self-explanation, 217–218, 372–373 Quilici, J. L., 213–214
R
Ratings for anecdotes and graphics, 103 of animations vs. still visuals, 119 of courses (See Course ratings) Rawson, K. A., 15
Realism of online agents, 169–170 in scenario-based learning, 321, 338 Recall-level performance, graphics and, 109 Recall practice assignments, 233–235 Reference support for procedures, 311–312 Reflection, 79–80, 331–332, 389, 390 Rehearsal, 53, 56 Relational visuals, 94 Relevance of material, discussing, 298, 299 Renkl, A., 217, 218, 224 Repeated game play, 350, 359 Repetition, 234, 239 Re-reading text, drawing assignments vs., 109–110 Research academic vs. practitioner, 26–28 experimental, 29 recent, on evidence-based practice, 43–46 synthetic, 37–39 Respect, social, 167 Response systems, 78 Response technology, 277–278 Responsiveness, of game, 343 Retention, over-learning and, 238 Retrieval, of new knowledge, 53 Rey, G. D., 173 Rhodes, M. G., 277, 288 Roelle, J., 78, 234–235 Rohrer, D., 237–238, 244, 245 Role play, 334, 359 Rop, G., 104 Routine tasks animations and visuals for, 122 examples for, 210, 211, 226, 385 procedures as, 296–297 Rules, game, 345–346 Rummel, N., 290, 312–313
S
Sackett, D. L., 25 Sampayo-Vargas, S., 344–345 Scaffolding, 328, 376 Scenario-based learning, 320–339 appropriate conditions for, 324–325 building expertise with, 323–324 challenges with, 335–337 comparisons in, 248 costs of developing, 337 described, 320 feedback in, 331 guided instruction in, 322–323, 328–331 guidelines on, 388–389 inductive approach in, 322 job-realistic situation in, 321 lesson in, 325–332 media for, 332–335 motivation with, 318 reflection in, 331–332 scenarios in, 321–322, 325–327 Scenarios, 320 adding, to build additional skills, 336 branched, 328–329 designing, 325–327 in games, 359 guidelines on, 389 pre-planning of, 321–322 in problem-based learning designs, 376 Schmidgall, S. P., 44, 46, 83 Schneider, S., 104–106 Schraw, G., 119 Schroeder, N. L., 152 Schuler, A., 145 Schwamborn, A., 108 Schwartz, D. L., 246–247 Schweppe, J., 98–99, 104 Schworm, S., 218 Secondary language, explanations in, 145, 146
Second-person language, 164, 357 Segmentation, 129, 222–223, 372. See also Chunking Self, feedback targeting, 255 Self-confidence, 66 Self-explanation for active learning, 79–81 with animations, 132–133 benefits of inducing, 278–279 creating visuals for, 110–111 examples to promote, 217–221 Self-explanation questions in direct instructional designs, 372–373 examples with, 217–218, 283 explanations with, 278–279 games with, 356, 358 on supporting topics, 300 Self-explanatory visuals, 141–144 Self-revelations, 165–166 Semantic visuals, 93–94, 96 Separated text and visuals, 151–153, 221 Sequencing of practice, 244–246 of problems, 328 of problem solving and explanations, 289–290, 312–314, 319–320 of supporting topics, 300–304 in teaching procedures, 312–314 Sharing, social, 167 Shin, D. H., 124 Simms, V., 101 Simple problems, 328 Simple visuals, 98, 106–107, 195–196 Simulations, 17–19, 310, 333, 337 Sink, H. E., 312 Sitzmann, T., 12, 36, 348, 349, 351–352, 353 Skill building, 206, 237, 238 Skill gaps, practice focusing on, 232–233 Skill modeling, 282–283 Skinner, B. F., 296
Sociability, 158 Social behaviors, online agents with, 169–170 Social cues, 158–163 Social engagement, in games, 354, 362 Social games, 346 Social identity, 167 Social media, 158, 176 Social presence, 158–178 collaboration and, 173–175 conversational language and, 163–165 course ratings and, 36, 161–162 in distance learning, 167–168 and impact of personalization on brain processes, 162–163 increasing, 163–175 learning host mindset and, 165–166 leveraging, 158, 285–286 for online agents, 168–173 personable training to improve, 160–161 politeness and, 165 Social respect, 167 Social sharing, 167 Solo learning, collaborative vs., 173–175 Solo play, in games, 354 Solution approach comparison of, 284, 332 in scenario-based learning, 330 Spacing effect, 240–244, 250, 309, 314 Spanjers, I. A. E., 129, 222–223 Spatial ability, animations and, 125 Spatial procedural tasks, 122 Speech synthesizers, 172 Spiral technique, 307 Standalone instruction, games for, 350–352 Static online agents, 169–170 Steib, N., 173 Still captures, of animations, 306 Still visuals
animations vs., 118–122, 124, 125, 197–198, 384 illustrating dynamic change with, 120 as performance support, 121–122 of procedures, 120, 124 simple vs. complex, 195–196 and spatial ability of learners, 125 Stories, 185–189 beneficial effects of, 190 brain processes and, 188–189 learning and, 185–188 and learning host mindset, 165–166 over-inflated training with, 184 training myths about, 17–19 Strategic tasks comparing examples to learn, 219–221 examples for, 210, 212–213, 226, 385 high- vs. low-variable examples for, 213–215 procedures vs., 296, 297 Strayer, D. L., 52 Structured engagement opportunities, 373–375 Stull, A., 72, 78 Subtasks, procedure, 388 Success criteria, for scenario, 327 Sullenberger, Chesley “Sully,” 230, 231, 236 Summarizing, as practice exercise, 250 Sung, Y. T., 13, 36, 167 Supplementary instruction, with games, 350–352, 359–360 Supporting topics, for procedures, 300–304, 313 Sweller, J., 208 Sweller, John, 207–208, 235 Synchronization, of audio and text explanations, 149–150 Synchronous collaboration, 173 Synthetic research, 37–39
T
Tables, 94, 96, 112–113 Tajika, H., 101 Task, feedback targeting, 255, 257–258 Task complexity animations and, 124 collaborative learning and, 175 overlearning and, 239, 240 Task context, supporting topics in, 304 Task criterion, feedback focusing on, 255, 256, 259–260 Task process, feedback targeting, 255, 258 Taylor, K., 237–238 Teach-back strategy, 56, 81–83, 85 Technical detail, in explanations, 193 Technology, 9–11, 379–380. See also specific types Technology of Teaching (Skinner), 296 Temporal visuals, 94 Testing effect, 235 Tests and testing, 30–31, 44–45 Text for examples, 385 guidelines on, 385 placement of, 138, 150–153, 221 Text-based scenarios, 333–335 Text + graphic lessons, 29–30 Text instructions, 122 Text-only explanations, 138–141, 150, 154 cognitive load with, 148–150 in games, 356–357 learning from, 77–78 placement of, 151–153 Text-only lesson, text + graphic lesson vs., 29–30 Thematic visuals, 113 Third-person perspective, in animations, 123–124 3-D representations, 128 Time compression, 324 Timing of explanations, 288–290
of feedback, 255, 262–263 of testing, 44–45 Topic-specific training, animations for, 119–122 Tracing activities, 131–132 Training computer-based, 9, 20 domain-specific, 119–122, 389 games vs. traditional methods, 349–350 to improve critical thinking, 318–319 instructor-led, 20 over-inflated, 182–184 personable, 160–161 spreading practice over, 242 Training myths, 4–22 about active engagement, 16–17 about games, stories, and simulations, 17–19 about learner decision making, 14–16 about learning styles, 6–8 about media use, 9–11 and evidence-based practice in medicine, 4–5 and investments in learning, 6 and liking and learning, 12–14 Transformation of content, 72–73, 83, 84 Tree diagrams, 84, 94, 96 Troubleshooting, 324, 325 Turkish, personalized materials in, 164 Twitch games, 346 2-D representations, 128, 333
U
Um, E. R., 189–190, 199 Underlining, 75–76, 85 United States, social presence in, 168 Urhahne, D., 79 U.S. Army, 10 U.S. Department of Education, 11
V
Van der Kleij, F. M., 259, 262 Van der Land, S., 128 Van Gog, T., 235 Van Harsel, M., 209 Van Meter, P., 109 Variables, in synthetic research, 38–39 Verbal literacy, 92 Verbal self-explanations, self-explaining visuals vs., 110–111 Video games, 342 Videos, 81, 92, 141, 209 Virtual classrooms, 277–278, 303. See also E-learning Virtual Factory game, 164 Virtual worlds (VW), 125–126 Visible author technique, 166 Visible Learning and the Science of How We Learn (Hattie and Yates), 39 Visual agendas, 281 Visual center, of brain, 148 Visual cues, 95–96, 129–130, 132 Visual learners, 7, 142, 150 Visual literacy, 92 Visual organizers, 112 Visual perspective, in animations, 123–124, 306 Visuals, 92–115. See also Animations; Graphics and brain processes, 95–98 complex vs. simple, 106–107 content-general graphics, 112–113 content-specific graphics, 107–111 decorative graphics as, 100–106 in direct instructional designs, 371 effectiveness of graphics, 92–93, 113 explaining (See Explanations) in explanations, 281, 291 and learning styles, 7 in less is more approach, 195–198 for long-term learning, 98–99 for novice vs. experienced students, 99–100
over-inflated training with, 184 types of graphics, 93–95 Voice quality, of online agents, 172–173 VW (virtual worlds), 125–126
W
Wang, F., 165 Wang, N., 170–171 Water-pitcher role, 192 Watson, G. J., 121, 311 Weaver, J. P., 289, 312, 319 “What Is It?” content, 300, 301 Wijnia, L., 331 Worked examples, 207–213. See also Demonstrations defined, 207–209 in direct instructional designs, 372–373 formatting, 221–224 high- vs. low-variable, 213–215 invention tasks vs., 224–225 for problem solving, 210 for routine tasks, 210–212, 385 for strategic tasks, 210, 212–213, 385 in teach-back lesson, 82 using practice vs., 208–209 Working memory animations and, 128 and automaticity, 309 capacity of, 66 explanations and, 194 and extraneous cognitive load, 64 learning and, 56–58 music and, 191 and placement of explanations, 151 and problem solving, 209–210 processing centers of, 148 Wouters, P., 72, 348, 349, 351–354
Y
Yates, G., 39, 257 Yoon, T. E., 126 YouTube, 210, 274, 286 Yue, C. L., 147–148
Z
Zhang, Q., 109–110 Zhu, C., 79 Zoom effect, 130