Applied Behavior Analysis Advanced Guidebook: A Manual for Professional Practice [2 ed.] 0323995942, 9780323995948

This second edition of Applied Behavior Analysis Advanced Guidebook: A Manual for Professional Practice gives behavior a


English Pages 485 [488] Year 2023


Table of contents :
Front Cover
Applied Behavior Analysis Advanced Guidebook: A Manual for Professional Practice
Copyright
Contents
Contributors
Preface
References
Section 1 Practice competencies
Chapter 1 Preference assessment and reinforcer evaluation
Introduction
Preference assessments for identifying reinforcers
Major types of preference assessment
Choosing a type of preference assessment
Deciding how often to conduct preference assessments
Conducting preference assessments efficiently
Preference assessments for nontangible items
Accounting for cultural differences
Preference as social validity
Assessing preference for interventions
Indices of happiness
Preference assessment in transition services
Additional applications of preference assessments
Training people to conduct preference assessments
Chapter summary
References
Chapter 2 Treatment integrity and procedural fidelity
Researching treatment integrity
Descriptive assessments of treatment-integrity errors
Measuring treatment integrity
Direct assessment
Indirect assessment
Interpreting treatment integrity
Reporting treatment integrity
Responding to integrity errors in practice
Conclusions
References
Further reading
Chapter 3 Functional analysis: Contemporary methods and applications
Functional analysis
Challenges, safeguards, and modifications
Inconclusive outcomes
Strategies for achieving differentiated results
Summary
References
Chapter 4 Video modeling
Video modeling
Applications in staff training
Training at various stages of employment
Standalone training or packaged intervention
Benefits of VM
Variations of VM
Selection of the performer
Point-of-view/perspective
Number of exemplars
Use of nonexamples
Number of video viewings
On-screen text
Voiceover instruction
Other training considerations
Instructional materials
Training activities that require active responding
Summary
References
Chapter 5 Creating graphs and visual data displays
Creating graphs and visual data displays
Common graph types for SCEDs
Line graphs
Bar graphs
Cumulative records
Combining graph types
Essential features of SCED graphs
Accurate data entry and sourcing
Axes and labels
Data representation
Legend
Phase-change lines and labels
Figure caption
Quality features of SCED graphs
Maximizing data-ink ratio and reducing chartjunk
Formatting considerations
Aspect ratio
Graphing software
Microsoft Excel
Google Sheets
GraphPad Prism and Systat SigmaPlot
Alternative programs for graphing
Graphing training
Task analyses
Formative graphing templates
Video models
Behavioral skills training
Chapter summary
References
Chapter 6 Supervising ABA trainees and service providers
Supervising ABA trainees and service providers
Research basis and evidence support
Critical practice considerations
The supervisor must consider
Present recommendations for service delivery
Recommendations for research inquiry
Summary
Appendix A
Appendix B
Appendix C
Appendix D
Appendix E
References
Further reading
Chapter 7 Applied behavior analysis and college teaching
What is teaching?
Higher education
Instructional systems and innovations
Programmed Instruction
Personalized System of Instruction
Total Performance System
Interteaching
Behavioral analytic training systems and instructional technology labs
Themes & tactics in contemporary behavioral approaches to college teaching
Procrastination & motivation
Participation, engagement, & responding
Teaching more in less time
Other applications of behavior analysis to college teaching
Instructional design & practical considerations
Context, macrosystems, & metacontingencies
Syllabi, instructional design, & contingencies
Future directions
References
Section 2 Technology, telehealth, and remote service delivery
Chapter 8 Technology guidelines and applications
Telebehavioral health evidence base
Practical considerations when preparing for TBH services
Practice guidelines when implementing TBH services
Chapter summary
References
Chapter 9 Data recording and analysis
Data recording and analysis
Contextualizing data and visual displays of information
Data recording and analysis
Common data types in ABA
Common recording methods in ABA
Common data analyses in ABA
Alternative data-related behaviors
Additional data types
Nonnumeric data types & their uses
Technology for collecting data on behavior
Technology for collecting data on the environment
Chapter summary
Conflict of interest
Financial support
References
Chapter 10 Behavior analytic supervision conducted remotely
Systems to promote best practices
Technology
Supervision relationship
Scope and sequence
Delivery of content/ensuring competency
Evaluating supervision effectiveness
Future research
Summary
References
Chapter 11 Teleconsultation to service settings
Review of the literature
Teleconsultation using the problem-solving model
Rapport building in teleconsultation
Confidentiality and privacy in teleconsultation
Recommendations to protect consultee privacy and confidentiality
Recommendations for teleconsultation service delivery
Equipment and software
Scheduling and planning sessions
Communicating with consultees
Establishing roles and responsibilities
Conclusion
References
Chapter 12 Telehealth-delivered family support
Introduction
Research evidence supporting the use of telehealth in behavior analysis
Development of the Iowa telehealth model for challenging behavior
Critical practice considerations for developing and providing services via telehealth
Preservice considerations
Client suitability: Behavior analysts should evaluate the suitability of a potential client prior to initiating telehealth ...
Caregiver characteristics and preferences: Behavior analysts should consider caregiver characteristics and preferences prio ...
Technology: Behavior analysts should consider whether their clients have sufficient connectivity and hardware required for ...
Behavior analyst competence and practice setting: Behavior analysts should consider whether their own training, comfortabil ...
Service delivery models: Behavior analysts should consider the telehealth model that best supports the client and the clien ...
Legal and professional boundaries: Behavior analysts should consider feasibility of telehealth services for each client bas ...
Clinical service considerations
Client suitability: Behavior analysts should monitor for client behaviors that may diminish their appropriateness for tel ...
Caregiver characteristics and preferences: Behavior analysts should monitor caregiver preference for telehealth and a careg ...
Other challenges during telehealth: Behavior analysts should monitor for other challenges that may make telehealth services ...
Using a hybrid service model
International and cultural considerations
Preservice considerations: Behavior analysts should become aware of the client and family’s access, cultural history, live ...
Clinical service considerations: Behavior analysts should stay abreast and practice cultural responsiveness
Conclusion
References
Section 3 Professional development
Chapter 13 Diversity and multiculturalism
Current literature and clinical practice considerations
Culturally and linguistically diverse learners
Assessment
Families and caregiver training
Ethics and DEI
Recommendations for service delivery and research
ABA agencies and organizations
Continuing education for professionals
University training and preparation
Supervisory practices
Leadership development
Mentoring
Research
Chapter summary
References
Chapter 14 Ethics and ethical problem solving
Behavioral systems analysis
Behavioral systems and behavioral systems analysis
Using behavioral systems analysis to improve ethical behavior
Example 1 Incorporating consumer choice into treatment decisions
Examples of behavioral systems to incorporate consumer choice into treatment
Rewards and incentives
Form of treatment
Summary
Example 2 Monitoring adverse events of behavioral treatment
Coercion
Science is not inherently coercive
Examples of identifying, monitoring, and reducing adverse events and effects
Identifying and monitoring adverse events and effects
Reducing adverse events and effects
Summary
Example 3 Using collaborative consumer feedback loops to improve practice and standards
Feedback loops
Practical social validity assessments
Collaborative social validity assessments
Example of how providers can engage with the neurodiversity movement
Summary
Conclusion
Conflict of interest
References
Chapter 15 Organizational behavior management in human services
Organizational behavior management in human services
OBM consultation model
OBM-human services domains
Additional practice considerations
References
Chapter 16 Practice and consultation in health, sport, and fitness
Introduction to health, fitness, and sport and relevance for behavioral practitioners
Health
Weight management
Assessment
Intervention
Healthy eating
Assessment
Interventions
Rigid and picky eating
Fitness
Physical activity
Assessment and measurement
Interventions
Token economy systems
Group contingencies
Self-management
Sport
What is sport?
Assessment
Interventions
Package interventions
Antecedent interventions
Consequence interventions
Feedback interventions
Adaptive sports
Additional considerations
Ethical practice and consultation
Training in health, sport, and fitness
Summary
References
Chapter 17 Conducting and disseminating research
The behavior analyst as researcher
Implementation strategies
Leadership direction and support
Merging research and practice
Research teams
Research ethics
Writing for publication and public speaking
Incentives-positive reinforcement
Summary and conclusions
References
Index
Back Cover

APPLIED BEHAVIOR ANALYSIS ADVANCED GUIDEBOOK


APPLIED BEHAVIOR ANALYSIS ADVANCED GUIDEBOOK
A Manual for Professional Practice

Second Edition

Edited by
JAMES K. LUISELLI
Melmark New England, Andover, MA, United States

Academic Press is an imprint of Elsevier
125 London Wall, London EC2Y 5AS, United Kingdom
525 B Street, Suite 1650, San Diego, CA 92101, United States
50 Hampshire Street, 5th Floor, Cambridge, MA 02139, United States
The Boulevard, Langford Lane, Kidlington, Oxford OX5 1GB, United Kingdom

Copyright © 2023 Elsevier Inc. All rights reserved.

No part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying, recording, or any information storage and retrieval system, without permission in writing from the publisher. Details on how to seek permission, further information about the Publisher's permissions policies and our arrangements with organizations such as the Copyright Clearance Center and the Copyright Licensing Agency, can be found at our website: www.elsevier.com/permissions.

This book and the individual contributions contained in it are protected under copyright by the Publisher (other than as may be noted herein).

Notices

Knowledge and best practice in this field are constantly changing. As new research and experience broaden our understanding, changes in research methods, professional practices, or medical treatment may become necessary. Practitioners and researchers must always rely on their own experience and knowledge in evaluating and using any information, methods, compounds, or experiments described herein. In using such information or methods they should be mindful of their own safety and the safety of others, including parties for whom they have a professional responsibility.

To the fullest extent of the law, neither the Publisher nor the authors, contributors, or editors, assume any liability for any injury and/or damage to persons or property as a matter of products liability, negligence or otherwise, or from any use or operation of any methods, products, instructions, or ideas contained in the material herein.
ISBN 978-0-323-99594-8 For information on all Academic Press publications visit our website at https://www.elsevier.com/books-and-journals

Publisher: Nikki P. Levy
Acquisitions Editor: Joslyn T. Chaiprasert-Paguio
Editorial Project Manager: Barbara L. Makinster
Production Project Manager: Sajana Devasi P K
Cover Designer: Matthew Limbert
Typeset by STRAIVE, India

Contents

Contributors xi
Preface xv

Section 1 Practice competencies

1. Preference assessment and reinforcer evaluation 3
Judah B. Axe, Christopher A. Tullis, Caleb R. Davis, and Mei-Hua Li
  Introduction 3
  Preference assessments for identifying reinforcers 5
  Preference as social validity 19
  Additional applications of preference assessments 23
  Chapter summary 24
  References 25

2. Treatment integrity and procedural fidelity 33
Tiffany Kodak, Samantha Bergmann, and Mindy Waite
  Researching treatment integrity 34
  Measuring treatment integrity 39
  Reporting treatment integrity 55
  Responding to integrity errors in practice 56
  Conclusions 59
  References 59
  Further reading 62

3. Functional analysis: Contemporary methods and applications 63
John Michael Falligant, Brianna Laureano, Emily Chesbrough, and Samantha Hardesty
  Functional analysis 65
  Summary 76
  References 77

4. Video modeling 83
Florence D. DiGennaro Reed, Sandra A. Ruby, Matthew M. Laske, and Jason C. Vladescu
  Video modeling 83
  Applications in staff training 84
  Standalone training or packaged intervention 86
  Variations of VM 89
  Summary 102
  References 103

5. Creating graphs and visual data displays 107
Daniel R. Mitteer, Michael P. Kranak, Ashley M. Fuhrman, and Brian D. Greer
  Creating graphs and visual data displays 107
  Common graph types for SCEDs 108
  Essential features of SCED graphs 112
  Quality features of SCED graphs 119
  Graphing software 121
  Graphing training 124
  Chapter summary 128
  References 129

6. Supervising ABA trainees and service providers 133
Amber L. Valentino and Mia N. Broker
  Supervising ABA trainees and service providers 133
  Research basis and evidence support 134
  Critical practice considerations 138
  Summary 147
  Appendix A 148
  Appendix B 148
  Appendix C 149
  Appendix D 150
  Appendix E 151
  References 151
  Further reading 153

7. Applied behavior analysis and college teaching 155
Traci M. Cihon, Bokyeong Amy Kim, John Eshleman, and Brennan Armshaw
  What is teaching? 156
  Higher education 159
  Instructional systems and innovations 160
  Themes & tactics in contemporary behavioral approaches to college teaching 166
  Instructional design & practical considerations 175
  Future directions 179
  References 180

Section 2 Technology, telehealth, and remote service delivery

8. Technology guidelines and applications 191
Brittany J. Bice-Urbach
  Telebehavioral health evidence base 192
  Practical considerations when preparing for TBH services 194
  Practice guidelines when implementing TBH services 200
  Chapter summary 210
  References 210

9. Data recording and analysis 217
David J. Cox, Asim Javed, Jacob Sosine, Clara Cordeiro, and Javier Sotomayor
  Data recording and analysis 217
  Contextualizing data and visual displays of information 218
  Data recording and analysis 220
  Alternative data-related behaviors 228
  Chapter summary 238
  Conflict of interest 239
  Financial support 239
  References 239

10. Behavior analytic supervision conducted remotely 247
Lisa N. Britton and Tyra P. Sellers
  Systems to promote best practices 248
  Future research 260
  Summary 261
  References 262

11. Teleconsultation to service settings 265
Evan H. Dart, Nicolette Bauermeister, Courtney Claar, Ashley Dreiss, Jasmine Gray, and Tiara Rowell
  Review of the literature 266
  Teleconsultation using the problem-solving model 268
  Rapport building in teleconsultation 271
  Confidentiality and privacy in teleconsultation 272
  Recommendations for teleconsultation service delivery 275
  Conclusion 280
  References 280

12. Telehealth-delivered family support 285
Kelly M. Schieltz, Matthew J. O'Brien, and Loukia Tsami
  Introduction 285
  Research evidence supporting the use of telehealth in behavior analysis 286
  Critical practice considerations for developing and providing services via telehealth 291
  Conclusion 311
  References 312

Section 3 Professional development

13. Diversity and multiculturalism 321
Brian Conners
  Current literature and clinical practice considerations 322
  Recommendations for service delivery and research 326
  Chapter summary 337
  References 337

14. Ethics and ethical problem solving 341
Matthew T. Brodhead and Noel E. Oteto
  Behavioral systems analysis 342
  Using behavioral systems analysis to improve ethical behavior 345
  Example 1 Incorporating consumer choice into treatment decisions 345
  Summary 350
  Example 2 Monitoring adverse events of behavioral treatment 351
  Summary 357
  Example 3 Using collaborative consumer feedback loops to improve practice and standards 358
  Summary 363
  Conclusion 364
  Conflict of interest 364
  References 364

15. Organizational behavior management in human services 369
James K. Luiselli
  Organizational behavior management in human services 369
  OBM consultation model 372
  OBM-human services domains 374
  Additional practice considerations 383
  References 385

16. Practice and consultation in health, sport, and fitness 393
Julie M. Slowiak, Janet Dai, Sarah Davis, and Rocky Perez
  Introduction to health, fitness, and sport and relevance for behavioral practitioners 393
  Health 394
  Fitness 404
  Sport 410
  Additional considerations 418
  Summary 420
  References 422

17. Conducting and disseminating research 437
James K. Luiselli, Frank Bird, Helena Maguire, and Rita M. Gardner
  The behavior analyst as researcher 437
  Implementation strategies 440
  Summary and conclusions 456
  References 457

Index 461


Contributors

Brennan Armshaw, West Virginia University, Morgantown, WV, United States
Judah B. Axe, Simmons University, Boston, MA, United States
Nicolette Bauermeister, Department of Educational and Psychological Studies, College of Education, University of South Florida, Tampa, FL, United States
Samantha Bergmann, Department of Behavior Analysis, University of North Texas, Denton, TX, United States
Brittany J. Bice-Urbach, Medical College of Wisconsin, Milwaukee, WI, United States
Frank Bird, Melmark, Berwyn, PA, United States
Lisa N. Britton, Britton Behavioral Consulting, Pinole, CA, United States
Matthew T. Brodhead, Department of Counseling, Educational Psychology, and Special Education, Michigan State University, East Lansing, MI, United States
Mia N. Broker, Trumpet Behavioral Health, United States
Emily Chesbrough, Kennedy Krieger Institute, Baltimore, MD, United States
Traci M. Cihon, University of North Texas, Denton, TX, United States
Courtney Claar, Department of Educational and Psychological Studies, College of Education, University of South Florida, Tampa, FL, United States
Brian Conners, Brian Conners, BCBA, LLC, Pompton Lakes; Seton Hall University, South Orange, NJ, United States
Clara Cordeiro, Behavioral Data Science Research Lab, Endicott College, Beverly, MA, United States
David J. Cox, Behavioral Data Science Research Lab, Endicott College, Beverly, MA, United States
Janet Dai, The Chicago School of Professional Psychology, Chicago, IL, United States
Evan H. Dart, Department of Educational and Psychological Studies, College of Education, University of South Florida, Tampa, FL, United States
Caleb R. Davis, Simmons University, Boston, MA, United States
Sarah Davis, Brock University, St. Catharines, ON, Canada
Florence D. DiGennaro Reed, University of Kansas, Department of Applied Behavioral Science, Lawrence, KS, United States
Ashley Dreiss, Department of Educational and Psychological Studies, College of Education, University of South Florida, Tampa, FL, United States
John Eshleman, Retired Professor, Galesburg, IL, United States
John Michael Falligant, Kennedy Krieger Institute; Johns Hopkins University School of Medicine, Baltimore, MD, United States
Ashley M. Fuhrman, Severe Behavior Program, Children's Specialized Hospital–Rutgers University Center for Autism Research, Education, and Services (CSH–RUCARES), Somerset; Department of Pediatrics, Rutgers Robert Wood Johnson Medical School, New Brunswick, NJ, United States
Rita M. Gardner, Melmark, Berwyn, PA, United States
Jasmine Gray, Department of Educational and Psychological Studies, College of Education, University of South Florida, Tampa, FL, United States
Brian D. Greer, Severe Behavior Program, Children's Specialized Hospital–Rutgers University Center for Autism Research, Education, and Services (CSH–RUCARES), Somerset; Department of Pediatrics, Rutgers Robert Wood Johnson Medical School, New Brunswick; Rutgers Brain Health Institute, Rutgers University, Piscataway, NJ, United States
Samantha Hardesty, Kennedy Krieger Institute; Johns Hopkins University School of Medicine, Baltimore, MD, United States
Asim Javed, Behavioral Data Science Research Lab, Endicott College, Beverly, MA, United States
Bokyeong Amy Kim, University of North Texas, Denton, TX, United States
Tiffany Kodak, Department of Psychology, Marquette University, Milwaukee, WI, United States
Michael P. Kranak, Department of Human Development and Child Studies, Oakland University; Oakland University Center for Autism, Rochester, MI, United States
Matthew M. Laske, University of Kansas, Department of Applied Behavioral Science, Lawrence, KS, United States
Brianna Laureano, Kennedy Krieger Institute; Johns Hopkins University School of Medicine, Baltimore, MD, United States
Mei-Hua Li, Simmons University, Boston, MA, United States
James K. Luiselli, Clinical Development and Research, Melmark New England, Andover, MA, United States
Helena Maguire, Clinical Development and Research, Melmark New England, Andover, MA, United States
Daniel R. Mitteer, Severe Behavior Program, Children's Specialized Hospital–Rutgers University Center for Autism Research, Education, and Services (CSH–RUCARES); Department of Pediatrics, Rutgers Robert Wood Johnson Medical School, New Brunswick, NJ, United States
Matthew J. O'Brien, The University of Iowa Stead Family Children's Hospital, Carver College of Medicine, Stead Family Department of Pediatrics, Iowa City, IA, United States
Noel E. Oteto, Department of Counseling, Educational Psychology, and Special Education, Michigan State University, East Lansing, MI, United States
Rocky Perez, Western Michigan University, Kalamazoo, MI, United States
Tiara Rowell, Department of Educational and Psychological Studies, College of Education, University of South Florida, Tampa, FL, United States
Sandra A. Ruby, University of Kansas, Department of Applied Behavioral Science, Lawrence, KS, United States
Kelly M. Schieltz, The University of Iowa Stead Family Children's Hospital, Carver College of Medicine, Stead Family Department of Pediatrics, Iowa City, IA, United States
Tyra P. Sellers, TP Sellers, LLC, Highlands Ranch, CO, United States
Julie M. Slowiak, University of Minnesota Duluth, Duluth, MN, United States
Jacob Sosine, Behavioral Data Science Research Lab, Endicott College, Beverly, MA; Behavioral Health Center of Excellence, Los Angeles, CA, United States
Javier Sotomayor, Behavioral Data Science Research Lab, Endicott College, Beverly, MA; Habita Behavioral Care, San Diego, CA, United States
Loukia Tsami, University of Houston, Clear Lake, Center for Autism and Developmental Disabilities, Houston, TX, United States
Christopher A. Tullis, Georgia State University, Atlanta, GA, United States
Amber L. Valentino, Trumpet Behavioral Health, United States
Jason C. Vladescu, Caldwell University, Department of Applied Behavior Analysis, Caldwell, NJ, United States
Mindy Waite, Department of Psychology, University of Wisconsin-Milwaukee, Milwaukee, WI, United States

Preface

This second edition of Applied Behavior Analysis Advanced Guidebook includes practice domains that have continued to evolve over the years to become more refined and thus remain the core competencies for behavior analysts and other behavioral practitioners. This new guidebook also includes more recent developments such as telehealth modalities and technology-assisted services, with an emerging emphasis on ethics, diversity, multiculturalism, and expanded practice options. All chapters of the guidebook trace the historical basis for the topics reviewed, underscore the evidence support, present practitioner recommendations, and suggest ways to advance research-to-practice translation. My hope is that this guidebook captures the fast-paced evolution of ABA applications in children, youth, and adults; contributes to professional development; and improves organizations responsible for education, treatment, and client care. Contemporary behavior analysis not only remains firmly grounded in foundational principles (Baer, Wolf, & Risley, 1968) but also reflects new and innovative thinking while continuing to be driven by data, which point to context-informed change and respect for the attitudes and opinions of valued stakeholders (Wolf, 1978).

I have been blessed with the guidance, direction, and good advice from many people who, whether they know it or not, have made this book possible: thank you Donald, Van, Carol, Gene, Jerry, Ned, Anne, Jill, Spencer, Ray, David, Warren, Ron, Paul, Michel, Nirbhay, Gary, and Joe. I am indebted to Rita, Frank, and Helena for the opportunity to collaborate and share the inspired work we are doing.

With gratitude and love, I dedicate this book to my family, Tracy, Gabrielle, and Thomas, and our feline friends, Ellie, Bunny, and Sophie.

James K. Luiselli
Clinical Development and Research, Melmark New England, Andover, MA, United States

References

Baer, D. M., Wolf, M. M., & Risley, T. R. (1968). Some current dimensions of applied behavior analysis. Journal of Applied Behavior Analysis, 1, 91–97. https://doi.org/10.1901/jaba.1968.1-91
Wolf, M. M. (1978). Social validity: The case for subjective measurement or how applied behavior analysis is finding its heart. Journal of Applied Behavior Analysis, 11, 203–214. https://doi.org/10.1901/jaba.1978.11-203



SECTION 1

Practice competencies


CHAPTER 1

Preference assessment and reinforcer evaluation

Judah B. Axe (a), Christopher A. Tullis (b), Caleb R. Davis (a), and Mei-Hua Li (a)
(a) Simmons University, Boston, MA, United States
(b) Georgia State University, Atlanta, GA, United States

Introduction

Positive reinforcement is the most basic principle and procedure in applied behavior analysis (ABA). Skinner (1938, 1953) discovered and defined positive reinforcement: when a stimulus repeatedly follows a type of behavior, the future frequency of that behavior increases. For example, a therapist working with a child with a disability may be teaching the child to initiate conversations with peers. If the therapist presents a positive reinforcer, such as a high-five, immediately after each instance of initiating a conversation, the future frequency of initiations will increase. But how does the therapist know what will function as a positive reinforcer, particularly for clients diagnosed with autism or other developmental or intellectual disabilities who have intensive needs and limited language? There are several ways. First, a therapist could deliver an item immediately after instances of a certain behavior and record whether the frequency of that behavior increases. This is the most direct way to identify a stimulus as a reinforcer, but it tends to be time consuming in practice. Second, a therapist could ask a client or their caregivers what functions as a reinforcer and/or observe what the client interacts with, but these methods are often unreliable. Third, a therapist can offer items to a client and observe which items they select and engage with or consume. Behavior analysts often use this third method, termed a "preference assessment." See Fig. 1 for a schematic of the different types of preference assessment.

Fig. 1 Preference assessment decision chart (https://www.appliedbehavioranalysis.com/preference-assessments/).

Applied Behavior Analysis Advanced Guidebook, https://doi.org/10.1016/B978-0-323-99594-8.00001-5. Copyright © 2023 Elsevier Inc. All rights reserved.

The second method, asking caregivers and observing the client, is often used first to record a list of potential reinforcers that are then tested in preference assessments. Recording how often a client engages with items relative to other items allows them to be considered low-, moderate-, or high-preference items; such designations are referred to as a "preference hierarchy." Preference hierarchies may be used to isolate high-preference items for intensive teaching or independent responses, while moderate-preference items are used for solitary play and prompted responses. Preference assessment results are often reported with bar graphs with the items on the x-axis and the percentage of trials selected on the y-axis (see Fig. 2). In terms of predictive validity, research has shown that items selected in a preference assessment usually function as reinforcers (Curiel, Curiel, & Poling, 2021; Hagopian, Rush, Lewin, & Long, 2001; Kang et al., 2013; Kodak, Fisher, Kelley, & Kisamore, 2009; Lanner, Nichols, Field, Hanson, & Zane, 2010; Piazza, Fisher, Hagopian, Bowman, & Toole, 1996).

Fig. 2 Graph of a preference assessment (percentage of trials selected for each item: squishy toy, music, puzzle, magazine, spin toy, cards, drawing).

Another use of preference assessments is measuring social validity. When clients cannot verbally express preferences for interventions, therapists, work sites, or living arrangements, they may be given choices of these items and situations. Observing selection behaviors in this way is a more objective and reliable method of assessing social validity than verbal report via questionnaires and interviews. Providing choices is particularly important when helping clients plan for the transition from school to adult life to ensure they are involved in selecting vocational tasks, leisure items, social situations, and living arrangements. In this chapter, we describe using preference assessments to identify reinforcers and measure social validity, as well as other applications.
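The percentage-of-trials metric behind a graph like Fig. 2 is simple to compute. The sketch below is illustrative only: the item names are taken from Fig. 2, but the selection counts and the 10-trial total are hypothetical values, not data from the chapter.

```python
# Sketch: compute a preference hierarchy from trial-based selection data.
# Item names follow Fig. 2; counts are hypothetical.

def preference_hierarchy(selections, trials):
    """Return (item, percentage-of-trials-selected) pairs, ranked high to low."""
    pct = {item: 100 * count / trials for item, count in selections.items()}
    return sorted(pct.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical data: times each item was selected out of 10 presentations.
data = {"squishy toy": 9, "music": 7, "puzzle": 5, "spin toy": 2}

for item, pct in preference_hierarchy(data, trials=10):
    print(f"{item}: {pct:.0f}% of trials")
```

Items at the top of the ranking would be treated as high preference, those near the bottom as low preference, mirroring the hierarchy described above.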

Preference assessments for identifying reinforcers Major types of preference assessment Six types of preference assessments have been used to identify preferred stimuli, described as (a) single stimulus, (b) paired stimulus, (c) multiple stimuli with replacement, (d) multiple stimuli without replacement, (e) free operant, and (f) restricted response. Many of these assessments are conducted in a trial-based format. Prior to implementing these assessments, a therapist gathers of pool of 5 to 8 items derived from interviews with caregivers and observations of the client. These items might be foods, drinks, toys, or any items that appear to function as reinforcers. A therapist may ask a parent to complete the Reinforcer Assessment for Individuals with Severe Disabilities (RAISD; Fisher, Piazza, Bowman, & Amari, 1996) or a questionnaire asking about potential reinforcers across several senses (e.g., taste, touch; Fig. 3).We describe the unique characteristics of each type of preference assessment and considerations for conducting them. To implement the single stimulus preference assessment (SS; Hagopian et al., 2001; Pace, Ivancic, Edwards, Iwata, & Page, 1985), the therapist presents one item at a time in a trial-based format and observes the client’s response, which may be reaching for or looking at the item. The therapist


Applied behavior analysis advanced guidebook

Fig. 3  The Reinforcer Assessment for Individuals with Severe Disabilities (RAISD).

rotates through a variety of stimuli, presenting each one several times and allowing the client to briefly (e.g., 30 s) consume or engage with the item. Stimuli selected in a high proportion of opportunities or for the longest duration are considered high preference (Kodak et al., 2009). Unlike other types of preference assessment, the SS does not require the client to scan




an array and choose from multiple items. However, for this reason, this method may not produce a preference hierarchy: if multiple items are selected in a high proportion of opportunities, all items may appear highly preferred.

Like the SS, the paired stimulus preference assessment (PS; Fisher et al., 1992; Paclawskyj & Vollmer, 1995), also known as the forced-choice or paired-choice preference assessment, is implemented in a trial-based format, with two items presented simultaneously and the instruction to “pick one.” The therapist presents pairs of items in all possible combinations, and each pairing may be assessed multiple times, resulting in many trials. For example, including 8 items in a PS in all possible pairings, with each item paired with each other item in both positions (right and left), results in 56 trials. Preference is determined by calculating the proportion of opportunities an item was selected when it was available. See Fig. 4 for a sample PS data sheet. Because the client must choose between two items on each trial, the PS yields a preference hierarchy. However, given the repeated testing of items paired with all other items, the PS often takes longer to implement than the SS and other methods.

Two preference assessment strategies involve presenting several (e.g., 5–8) items in an array on each trial. To implement the multiple stimuli with replacement preference assessment (MS; Keen & Pennell, 2010; Windsor, Piché, & Locke, 1994), the therapist arranges items in a line in front of the client. With a large array, it is important to point to all items and ensure the client looks at them. After the client selects an item, consumes it, or briefly (e.g., 30 s) engages with it, the therapist places the item back in the array, rearranges the order of items, and begins a new trial. On each subsequent trial, the client may choose from all the original items. Like the PS, the MS provides a measure of relative preference, but the MS requires fewer trials.
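The trial arithmetic for the PS can be illustrated with a short sketch. The item names and selection rule below are hypothetical, chosen only to show how pairings are enumerated and how selection proportions are computed:

```python
from collections import Counter
from itertools import permutations

def ps_trials(items):
    """All ordered pairings: each item paired with every other item
    in both positions (left and right)."""
    return list(permutations(items, 2))

items = ["squishy toy", "music", "puzzle", "cards"]  # hypothetical pool
trials = ps_trials(items)
print(len(trials))  # 4 items -> 12 trials; 8 items would yield 56

# Hypothetical selections: suppose the client picks the left item
# whenever "music" is on the left, and otherwise picks the right item.
selections = [left if left == "music" else right for left, right in trials]

# Proportion of opportunities each item was selected when available.
counts = Counter(selections)
opportunities = 2 * (len(items) - 1)  # trials in which each item appears
for item in items:
    print(item, counts[item] / opportunities)
```

Under this hypothetical selection pattern, “music” is chosen on every trial in which it appears (proportion 1.0), illustrating how the proportions translate directly into a preference hierarchy.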
However, because each item in the MS is available on every trial, the client may choose only the most preferred item(s), resulting in the incorrect assumption that the other items do not function as reinforcers when they might (i.e., false negatives). For example, in an MS with cookies, chips, and candy, if the client chooses candy on every trial, one might incorrectly conclude that cookies and chips are not reinforcers. To overcome this limitation, DeLeon and Iwata (1996) suggested not replacing each chosen item in the next trial, a method called the multiple stimulus without replacement preference assessment (MSWO; DeLeon & Iwata, 1996; Richman, Barnard-Brak, Abby, & Grubb, 2016). To implement the MSWO, the therapist presents all items, and the client selects one. After the


Fig. 4  Sample data sheet for the paired stimulus preference assessment (PS) (https://ebip.vkcsites.org/wp-content/uploads/2016/03/EBIP_Paired-Stimulus_Data-Sheet_4Items.pdf).




client consumes or briefly engages with the item, the therapist leaves that item out of the subsequent trials and rearranges the order of the remaining items. This process continues until there are no items remaining or no items are chosen (see Fig. 5). A common way to score an MSWO is to assign points to the item selected each trial (Ciccone, Graff, & Ahearn, 2005). For

Fig. 5  Sample data sheet for the multiple stimulus without replacement preference assessment (MSWO) (https://ebip.vkcsites.org/wp-content/uploads/2016/03/EBIP_MSWO_Data-Sheet_5-items.pdf).


example, when assessing five items, the first item chosen receives a score of five points; the second item chosen receives a score of four points; and so on. After implementing the MSWO with five items five times, points for each item are summed to determine the preference hierarchy. The MS and MSWO are efficient methods of assessing preference, though the MSWO is more likely to identify multiple preferred items. Unlike the SS and PS, the MS and MSWO require a client to scan and choose from an array.

A preference assessment method that does not use trials is the free operant preference assessment (FO; Clay, Schmitz, Clohisy, Haider, & Kahng, 2021; Roane, Vollmer, Ringdahl, & Marcus, 1998). To implement an FO, the therapist puts all items on a table or in a play or leisure area. During a brief session (e.g., 5 min), the therapist records the duration the client engages with each item. Items engaged with for longer durations are considered most preferred (see Fig. 6). The FO is efficient if sessions are short, and the FO may produce a preference hierarchy because items are concurrently available and the client must choose between them. However, like the MS, the client may interact exclusively with the most preferred item(s), which may result in false negatives. The FO is particularly useful for assessing long-duration activities such as video games (Kodak et al., 2009). Attention-maintained problem behavior may occur during FOs as attention is withheld. However, compared to the trial-based methods, the FO is less likely to evoke problem behavior because there are no demands to choose an item and no removal of reinforcers (Tung, Donaldson, & Kahng, 2017; Verriden & Roscoe, 2016).

The final type of preference assessment is the response restriction preference assessment (RR; Boyle et al., 2019; Hanley, Iwata, Lindberg, & Conners, 2003), which combines elements of the MSWO and FO.
On each trial, which lasts 3 to 5 min, the therapist places several items in front of the client and tells them to play with whatever they like. Similar to the FO, items are not removed, and the therapist records the duration of engagement with each item. Then, like the MSWO, the therapist removes the item that was engaged with the most and re-presents the remaining items on the next trial. Boyle et al. found that the RR was more likely than the FO to produce a preference hierarchy, but the RR took more time.
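The MSWO point-scoring rule described above (with five items, the first selection earns five points, the next four, and so on, summed across administrations) can be sketched as follows. The item names and selection orders are hypothetical, used only to illustrate the arithmetic:

```python
def mswo_scores(administrations, n_items=5):
    """Sum points across MSWO administrations: the item chosen first
    earns n_items points, the next earns n_items - 1, and so on."""
    totals = {}
    for selection_order in administrations:
        for rank, item in enumerate(selection_order):
            totals[item] = totals.get(item, 0) + (n_items - rank)
    # Highest summed points = highest preference.
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical data: five administrations of a five-item MSWO,
# each list giving the order in which the items were selected.
administrations = [
    ["truck", "blocks", "bubbles", "puzzle", "crayons"],
    ["truck", "bubbles", "blocks", "puzzle", "crayons"],
    ["bubbles", "truck", "blocks", "crayons", "puzzle"],
    ["truck", "blocks", "bubbles", "puzzle", "crayons"],
    ["truck", "bubbles", "blocks", "crayons", "puzzle"],
]
hierarchy = mswo_scores(administrations)
print(hierarchy)  # e.g., [('truck', 24), ('bubbles', 19), ...]
```

The sorted totals are the preference hierarchy; in this hypothetical data set the truck, selected first in four of five administrations, tops the list.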

Choosing a type of preference assessment

Research has shown that all six types of preference assessment are effective in identifying positive reinforcers, and each type has pros and cons (see Fig. 7 for a summary). Therefore, it may be challenging to choose a format because there are no published recommendations for matching assessment




Fig. 6  Sample data sheet for the free operant preference assessment (FO) (https://ebip.vkcsites.org/wp-content/uploads/2016/03/EBIP_Free-Operant_Data-Sheet.pdf).

methods to types of clients. One rule of thumb is that the MSWO and PS are the most reliable methods, although the PS takes more time (Kang et al., 2013). Some researchers have offered decision-making models for choosing a type of preference assessment. For example, Karsten et al. (2011) suggested starting with an MSWO and progressing through alternate preference


Fig.  7  Assets and potential barriers of the major types of preference assessment (Karsten, Carr, & Lepper, 2011, p. 350).

assessment types based on certain outcomes (see Fig. 8). Similarly, if a client engages in problem behavior, the therapist may switch to an FO, or if there is a position bias, the therapist may use an SS or present items closer together in a small container. Virués-Ortega et al.’s (2014) decision-making model assists therapists in selecting a preference assessment by asking a series of questions about prerequisite skills, time constraints, problem behavior, preference hierarchy, and long-duration reinforcers (see Fig. 9). Most recently, the model by Lill, Shriver, and Allen (2021) guides the therapist to arrive at multiple assessment format options (see Fig. 10); data on agreement among the results of each option, as well as their efficiency, are provided. This model involves preassessment considerations such as item selection (e.g., same class, equal portion) and motivational variables (i.e., restricting access to items 15 min prior to assessment). Given the absence of conclusive experimental evidence matching assessment types to participant characteristics, these decision-making models may assist in individualizing assessments for clients.
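These published models are flowcharts rather than algorithms, but their branching logic can be loosely illustrated in code. The function below is a simplified, hypothetical composite of the kinds of questions the models ask; it is not a faithful implementation of any one model, and the thresholds and orderings are illustrative only:

```python
def choose_assessment(can_scan_array: bool,
                      time_limited: bool,
                      problem_behavior_likely: bool,
                      needs_hierarchy: bool,
                      long_duration_items: bool) -> str:
    """Hypothetical composite of published decision models
    (cf. Karsten et al., 2011; Virués-Ortega et al., 2014)."""
    if not can_scan_array:
        return "SS"   # only format not requiring scanning an array
    if long_duration_items or problem_behavior_likely:
        return "FO"   # no demand to choose, no reinforcer removal
    if time_limited or not needs_hierarchy:
        return "MSWO"  # efficient and still yields a hierarchy
    return "PS"        # most thorough hierarchy, but slowest

print(choose_assessment(can_scan_array=True, time_limited=True,
                        problem_behavior_likely=False,
                        needs_hierarchy=True,
                        long_duration_items=False))  # -> MSWO
```

In practice such logic would be a starting point only; as the models themselves emphasize, the therapist should revisit the choice when outcomes (e.g., problem behavior, an undifferentiated hierarchy) suggest a different format.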




Fig. 8  Decision-making model (Karsten et al., 2011, p. 354).

Deciding how often to conduct preference assessments

After selecting a type of preference assessment, a critical consideration is that preference is usually not stable. In other words, what functions as a reinforcer at one moment may not function as a reinforcer the next moment. This outcome occurs because reinforcers change in effectiveness based on motivating operations (MO), defined as (a) antecedent events or conditions

Fig. 9  Decision-making model (Virués-Ortega et al., 2014).

Fig. 10  Decision-making model (Lill et al., 2021, p. 1147).




that temporarily increase (establishing operation; EO) or decrease (abolishing operation; AO) the value of a stimulus as a reinforcer (value-altering effect) and (b) increase (EO) or decrease (AO) the likelihood of engaging in behavior that has produced that stimulus in the past (behavior-altering effect; Michael & Miguel, 2020). For example, if a child has not eaten for a while, there may be an EO for food; if a child wants to color and is given paper but no marker, there may be an EO for a marker. A therapist might offer a Mr. Potato Head toy with each piece functioning as a reinforcer for inserting it, but when another child starts playing with a marble run game, the EO might shift from the Mr. Potato Head toy to the marble run game. A child may have an EO for salty popcorn, and after consuming 20 pieces might have an AO for popcorn and an EO for juice.

The preceding discussion suggests that if a therapist conducts a preference assessment on Monday at 9:00 am and the most-selected item is a toy truck, there was an EO for the truck at 9:00 am, but the EO may be gone at 10:00 am (i.e., an AO for the truck). Gottschalk, Libby, and Graff (2000) demonstrated the effects of MOs on preference assessments among young clients with developmental disabilities, first using a PS to identify four moderately preferred edibles. In the EO condition, the clients had no access to the items for 24 or 48 h. In the AO condition, the clients had access to the items during the 24 h prior to the session as well as 10 min prior to the session. The clients approached the items in the EO condition more often than in the AO condition. Chappell, Graff, Libby, and Ahearn (2009) extended Gottschalk et al. (2000) with three conditions: access immediately before the session, 10 min of deprivation, and 20 min of deprivation. Two of three participants showed higher preferences for items in the 20-min deprivation condition compared to the immediate-access and 10-min deprivation conditions.
These results highlight why MOs need to be taken into account when conducting preference assessments. As preference assessment results are often not static, the relevant question for therapists should be: What items are preferred or not preferred under certain MO conditions, or right now? Additionally, results of preference assessments conducted months apart are unlikely to be consistent (MacNaul, Cividini, Wilson, & Di Paola, 2021). A way to frequently assess EOs for potential reinforcers is to conduct mini-preference assessments prior to teaching sessions, which may be structured (e.g., PS, MSWO) or unstructured (e.g., FO). If the same items are offered for many weeks, the child may lose interest in those items (i.e., an AO), thus it is recommended to increase the range of potential reinforcers and continually identify new reinforcers (Barbera, 2007).


Conducting preference assessments efficiently

Given the intensive needs of most clients receiving ABA services, as well as the need to conduct preference assessments regularly due to changing MOs, procedures must be implemented efficiently. While retaining the predictive validity of selected items functioning as reinforcers, researchers have provided two types of adaptations for conducting preference assessments quickly. First, the number of stimulus presentations or the assessment duration may be reduced. For example, Clay et al. (2021) reduced FO sessions from 5 min to 1 min. Similarly, whereas the common method of implementing an MSWO is presenting the entire array five times, a more efficient method is presenting the entire array only once (Richman et al., 2016; Tullis, Cannella-Malone, & Fleming, 2012). Presenting an MSWO array once or twice, compared to three times, may yield the same hierarchy but not the same high-preference item (Conine et al., 2021), though this is not a concern if the high-preference item functions as a reinforcer. Second, the types of items presented may be altered, particularly in settings where certain items are not readily available for sampling or are cumbersome to deliver repeatedly, such as a preferred teacher or playing basketball. A solution to this challenge is to leverage representational forms of stimuli, such as pictures (Graff & Gibson, 2003) or video clips (Curiel, Curiel, Adame, & Li, 2020; Snyder, Higbee, & Dayton, 2012), perhaps displayed on a computer or tablet (Brodhead et al., 2016). Curiel et al. offered a digital tool that may be helpful in modifying stimuli using videos (https://mswopat.utrgv.edu). When using representational stimuli, it is important to ensure that clients can match items to pictures or videos (Clevenger & Graff, 2005) or can be taught this response.
Using pictures to assess preference may be particularly efficient when the client selects a picture but is not given access to the corresponding reinforcer (Brodhead, Kim, & Rispoli, 2019; Groskreutz & Graff, 2009), though withholding access may evoke problem behavior (Davis et al., 2010; Kang et al., 2011).

Preference assessments for nontangible items

Another role of pictures and videos is to assess preference for social interactions and other nontangible items (Wolfe, Kunnavatana, & Shoemaker, 2018). Clay, Samaha, Bloom, Bogoev, and Boyle (2013) used a PS to offer choices of therapists who provided unique types of social interaction (e.g., tickles, head rubs, high fives). Morris and Vollmer (2019) developed the social interaction preference assessment (SIPA), which combines aspects of the MSWO and RR. For five trials each session, the therapist presents pictures of types of social




interaction, and if an item is selected on at least 80% of trials across two sessions, it is removed for the following sessions. Morris and Vollmer (2020a, 2020b) further evaluated the validity of the SIPA and how it compared to the MSWO and vocal PS. They found that, compared to low-preference social interactions, high-preference interactions identified by the SIPA were more effective as reinforcers during teaching sessions. When comparing the SIPA to the MSWO and vocal PS (e.g., “do you want X or Y?”), the MSWO and SIPA produced valid outcomes for all participants, and the vocal PS produced valid outcomes for the verbal participants. The MSWO was more efficient than the SIPA, and the SIPA produced more valid results for clients with limited matching and tacting repertoires.

Other examples of nontangible reinforcers are sounds (e.g., music) and smells (e.g., perfumes), which may be assessed using a PS. For example, Horrocks and Higbee (2008) presented two portable CD players that each played a different song. After briefly sampling each song, the client chose a song by pointing to one of the CD players. Wilder et al. (2008) held two air fresheners (one at a time) up to the client’s nose for several seconds and then asked them to choose one. Saunders and Saunders (2011) conducted preference assessments with nonambulatory adults with profound intellectual and sensory disabilities who activated adaptive switches to access auditory (e.g., music), tactile (e.g., vibration), visual (e.g., strobe light), and olfactory (e.g., diffuser) stimulation. Clients with severe, multiple disabilities may also use eye gaze to indicate preference (Cannella, Sabielny, & Tullis, 2015).

Accounting for cultural differences

Behavior analysts must consider how the cultural and linguistic background of each client may impact preference for potential reinforcers (Fong, Catagnus, Brodhead, Quigley, & Field, 2016). Given projections that the foreign-born population will grow from 13% (in 2016) to 19% by 2060 (United States Census Bureau, 2015), behavior analysts are serving more racially, culturally, and linguistically diverse clients. In ABA, cultural responsiveness and cultural humility must be strongly woven into the work of serving clients (BACB Ethics Code, 2020, Code 1.07) and training behavior analysts (Code 4.07).

Food items are often used as reinforcers in behavioral programming, so there are several considerations when suggesting or including food items in preference assessments. For example, Italian families may prefer dairy-based snacks (e.g., cheese), while Asian families may favor wheat-based cuisine


(e.g., noodles, dumplings). Additionally, among families with incomes at or below the poverty line, it is important to exercise caution when suggesting food items that may be prohibitively expensive (Wright, 2019) and where food insecurity (i.e., “lacking consistent access to enough food for active and healthy living”; Tucker, Davis, Perez, Klein, & D’Amico, 2022, p. 737) is a concern (Beaulieu & Jimenez, 2022; Dennison et al., 2019).

Resetar Volz and Cook (2009) recommended that schools and organizations serving verbal individuals, such as adolescents diagnosed with emotional disturbance, conduct surveys to assess preferences. They analyzed the results of 313 survey respondents at a residential school for children and adults and reported: “For the item outing to fast food, African American youth rated it as significantly more preferred than both Caucasian and Other youth. Results revealed that Caucasian youth, on the other hand, were significantly more likely to rate outing to nice restaurant, playing video games, and playing outdoors as a preferred activity than African American and Other youth” (p. 787). Though these results were correlational with a weak-to-moderate effect size (0.2) and there are likely other contributing factors, a survey approach may be indicated for determining preferred items among diverse individuals residing in large groups.

Cultural and linguistic backgrounds do not only impact the selection of food items. In a study in Italy with 16 children with autism, Slanzi, Graziano, D’Angelo, Vollmer, and Conine (2020) found that screen-based technology devices (e.g., iPads) were selected at lower percentages than in a similar study conducted in the US. Slanzi et al. speculated that this was because, in contrast to programs in the US, the Italian children did not use screen-based devices as alternative communication devices, and there was only one device in each child’s classroom. Slanzi et al.
suggested that because Italian mothers are “more affectionate than their American counterparts” (p. 2437), the social interactions that came with playing with toys may have been more preferred than isolated play with a device. These studies underscore the need to identify potential reinforcers that align with cultural differences.

The best way of exhibiting cultural humility (Kirby, Spencer, & Spiker, 2022) is to include parents in the selection of potential reinforcers (Čolić, Araiba, Lovelace, & Dababnah, 2021; Deochand & Costello, 2022). To do this, behavior analysts may use open-ended interviews (Hanley, 2012) or culturally sensitive assessment tools (Moreno, Wong-Lo, & Bullock, 2014) to garner information from families regarding how their cultural background and preferences affect the selection of potential reinforcers for their child. As therapists become more sensitive to their clients’ needs, levels of mutual




trust between therapists and parents will increase (Berlin & Fowkes, 1983). Although therapists do not need to speak their clients’ languages, familiarizing themselves with relevant cultural contexts, family virtues, and religious preferences will help establish rapport (Castillo, Quintana, & Zamarripa, 2000; Martinez & Mahoney, 2022). If language barriers arise, therapists should consider collaborating with certified interpreters who share cultural and linguistic backgrounds with the client (Dowdy, Obidimalor, Tincani, & Travers, 2021). Finally, when working with clients living in non-English-speaking homes, therapists may use preference assessments to allow them to select their preferred language for instruction (Aguilar, Chan, White, & Fragale, 2017).

In summary, we described six types of preference assessment and provided research-based considerations for selecting a type, accounting for MOs, conducting assessments efficiently, assessing nontangible reinforcers, and exercising cultural humility when selecting potential reinforcers. We now explain how preference assessments can be used to assess social validity and preferred environments.

Preference as social validity

Social validity is the acceptability of the goals, procedures, and outcomes of an intervention program by direct and indirect consumers (Wolf, 1978). Rather than asking people what procedures they prefer, behavior analysts can arrange multiple procedures in a preference assessment. If this is not possible, an alternative assessment of preference and social validity is the extent to which the client displays happiness. These types of assessments of social validity are particularly important when planning a client’s transition from school to adult life.

Assessing preference for interventions

Preference assessments may be used to allow clients to choose the interventions they receive. A straightforward process with verbal clients who can answer questions about preferences is using dialogue or questionnaires. Nonverbal clients must also have ways to “express their opinions” and be empowered with self-determination (Wehmeyer, 2020). Using this interpretation of preference assessment will equip more stakeholders to improve clients’ quality of life (Schwartz & Kelly, 2021) and self-advocacy skills.

Hanley (2010) used preference assessments as an “objective measurement of social validity” (p. 13) to allow clients to choose their preferred


interventions. For example, consider a client with problem behavior reinforced by attention. Two potential interventions are functional communication training (FCT) and noncontingent reinforcement (NCR). A behavior analyst could test both interventions with the following addition: when the behavior analyst uses FCT, they have the client touch a red card, and when using NCR, the client touches a blue card. Then, after many sessions indicating that both interventions are effective, the behavior analyst offers the client a choice of interventions by holding up the red and blue cards and allowing the client to choose one.

The “concurrent operants” arrangement presented by Hanley (2010) has been used to allow clients to choose among many interventions, including choosing between forward and backward chaining (Slocum & Tiger, 2011), interdependent and independent group contingencies (Groves & Austin, 2017), and book- and tablet-based picture activity schedules (Giles & Markham, 2017). Clients may also choose from videos displaying interventions (Huntington & Schwartz, 2021).

In addition to the many benefits addressed above, letting clients and families choose their interventions is a way to practice cultural humility and compassionate care (Taylor, LeBlanc, & Nosik, 2019). A behavior analyst should be aware of how their own culture and personal biases influence the selection of interventions and should ensure that interventions align with the client’s culture (Slim & Celiberti, 2022). Engaging in two-way communication may prevent erroneous assumptions (Kalyanpur & Harry, 2012), and providing choices of interventions sends the message that the behavior analyst is adopting the client’s cultural perspective and continually seeking input.

Indices of happiness

When it is difficult to allow clients to choose interventions, large-scale environments, and living arrangements, behavior analysts may use more descriptive methods to assess preference, such as measuring indices of happiness (Parsons, Reid, Bentley, Inman, & Lattimore, 2012; Tullis & Seaman-Tullis, 2019). When environmental arrangements are highly preferred, people usually respond in a way that would be termed “happy.” Alternatively, when less preferred conditions are present, people often respond in a “neutral” or “unhappy” manner.

Parsons et al. (2012) outlined a systematic approach for defining happiness and unhappiness with non- or minimally verbal adult clients. First, they asked caregivers to indicate the indices (i.e., topographies) of happiness and unhappiness for each client. Examples of individually defined happiness




were laughing, smiling, patting leg, and running; examples of individually defined unhappiness were hitting head, crying, pressing finger on eye, biting hand, frowning, and tipping over furniture. The caregivers also indicated situations in which the clients were happy (e.g., drawing, the lounge, leisure time) and unhappy (e.g., no activity, reading). Second, the researchers verified that indices of happiness occurred in the “happy” situations, and vice versa. Finally, the researchers used a PS to allow the clients to choose the “happy” or “unhappy” situations, and they all chose the “happy” situations. Although this type of descriptive assessment has limitations related to precision, it may be a useful method in some contexts.

Using preference assessments to capture social validity is a promising clinical practice, but some caution is warranted. First, it is important to incorporate additional variables needed to validate a client’s preference, such as the efficacy of an intervention. For example, if a client selects NCR when the procedure does not reduce a problem behavior, the preference assessment lacks validity. Second, when assessing social validity according to the guidelines provided by Wolf (1978), there may be conflicts between stakeholders or shifts in acceptability with changing environments. Steps should be taken to ensure the client remains at the center of the process.

Preference assessment in transition services

Peterson, Aljadeff-Abergel, Eldridge, VanderWeele, and Acker (2021) wrote that “An individual is considered ‘self-determined’ when he/she makes his/her own choices about what to eat and when, where to live and with whom, what to wear each day, what to eat each day, what to do to earn money (or to stay at home and eat bonbons), where to go to school, etc.” (p. 301). As clients transition from school to adult life, it is critical to assess the acceptability of new environments to them. Transition services are a coordinated set of assessments, goal development, and skill acquisition activities that prepare secondary students with disabilities for postschool environments (Kochhar-Bryant, Bassett, & Webb, 2009). These programs prepare students across the domains of employment, social and leisure activities, and living settings. Using preference assessments during transition services can ensure the non- or minimally verbal client is fully engaged in the process and making choices with respect to these three domains (Lohrmann-O’Rourke & Gomez, 2001; Tullis & Seaman-Tullis, 2019). Transition-based preference assessments may be direct (e.g., MSWO) or descriptive (e.g., indices of happiness).

22

Applied behavior analysis advanced guidebook

In terms of employment, behavior analysts may assess preference for the same job elements that are relevant for people without disabilities (Lent, Brown, & Hackett, 2000), such as location, break conditions, work times, and reinforcers for task completion (Ninci, Gerow, Rispoli, & Boles, 2017). Reid et al. (2007) used both an MSWO and a PS with 12 adults with severe disabilities by presenting items corresponding to work tasks (e.g., stamps for stamping envelopes), asking the client to choose one, having the client engage in that work task for 3 min, presenting a choice of the remaining items, and so on. Worsdell, Iwata, and Wallace (2002) determined preference among vocational tasks (e.g., folding towels) by recording the duration of engagement with each task. Additionally, behavior analysts may use indices of happiness to determine whether a work location or work shift is preferred or nonpreferred. These indices could be augmented by “verifying” preference with metrics such as time engaged with job-related items or frequency of breaks.

Although research on assessing preference for social and leisure activities is limited, Call, Trosclair-Lasserre, Findley, Reavis, and Shillingsburg (2012) used a concurrent-operants preference assessment to assess the functional properties of social interactions, specifically where social interaction takes place, the theme of the interaction (e.g., playing Dungeons and Dragons, attending a sporting event), and the duration of the interactions. One participant indicated that social interaction was highly preferred, whereas the remaining participants indicated neutral preference. These data highlight the necessity of assessing not only social stimuli but also the client’s preference for the level or nature of social interactions.

Choice of living arrangement may be one of the least investigated areas of the transition-planning process and the most difficult to assess.
The UN Convention on the Rights of Persons with Disabilities (United Nations, 2006) supports the assertion that choice of living arrangement is a basic right. Preference for living arrangement may be best described in terms of “quality of life” (Stancliffe & Keane, 2000), which has been an understudied construct (van Heijst & Geurts, 2015). Generally, preference, or the extent to which preference is incorporated, is core to the concept of quality of life (Schwartz & Kelly, 2021). In supported or independent living settings, typical measures of preference may be less appropriate than more descriptive forms (e.g., indices of happiness). As with other aspects of transition, these measures should be augmented by other observations to confirm preference. For example, a behavior analyst could corroborate a goal of living happily with a roommate with measures of how often the client and roommate



Preference assessment and reinforcer evaluation

23

interact or are in the same room. Allowing clients to express their preference for a living environment gives them a richer living experience.

In summary, we described how preference assessments allow clients to choose interventions and how behavior analysts can use indices of happiness to identify preferred environments for clients. We discussed how to assess preference in the context of transition services, particularly employment, social and leisure activities, and living settings. In a final section, we describe additional applications of preference assessments, specifically with additional populations, and training people to conduct preference assessments.

Additional applications of preference assessments

Preference assessments have been conducted across many populations and across the lifespan, from 13 months of age (Rush, Kurtz, Lieblein, & Chin, 2005) to 95 years (Feliciano, Steers, Elite-Marcandonatou, McLane, & Areán, 2009). General education students (Schanding Jr., Tingstrom, & Sterling-Turner, 2009) and students with or at risk for emotional disturbance (King & Kostewicz, 2014) have benefited from preference assessments. In one study, Paramore and Higbee (2005) conducted MSWOs with food items with adolescents with emotional disturbance, and the identified items were then used to reinforce on-task behavior. With elementary students at risk for emotional disturbance, King (2016) compared MSWOs with verbal reports of preferred items and found that, for one student, the item selected most often in the MSWO was superior in increasing completion of academic tasks. With adults diagnosed with schizophrenia, Wilder, Ellsworth, White, and Schock (2003) and Wilder, Wilson, Ellsworth, and Heering (2003) found that PSs and verbal indications of preference yielded similar results. Reyes, Vollmer, and Hall (2017) conducted preference assessments with sex offenders with intellectual disability as part of a broader assessment of the likelihood of reoffending. Finally, Raetz, LeBlanc, Baker, and Hilton (2013) found that results of MSWOs were stable in 5 of 7 adults with dementia.

Preference assessments have also been conducted in the context of organizational behavior management (OBM; e.g., Wine, Reis, & Hantula, 2014). Simonian, Brand, Mason, Heinicke, and Luoma (2020) reviewed 12 studies that evaluated preference assessments in a variety of organizations and identified money (up to $10), gift cards, snacks, breaks, choice of work tasks, office supplies, and praise/recognition as potential reinforcers. About half of the studies used a PS or MSWO format, and the other studies used surveys or other indirect methods.

Training people to conduct preference assessments

In the last decade, over 15 studies have been published on methods to teach staff, teachers, and parents to conduct preference assessments. The most common procedure is behavioral skills training (BST), consisting of instructions, modeling, role play, and feedback (Lavie & Sturmey, 2002; O’Handley, Pearson, Taylor, & Congdon, 2021). Other effective procedures are the “feedback sandwich” (positive-constructive-positive; Bottini & Gillis, 2021a) and video modeling, often with written or voice-over instructions (Delli Bovi, Vladescu, DeBar, Carroll, & Sarokoff, 2017; Rosales, Gongola, & Homlitas, 2015; Vladescu et al., 2021). In addition, researchers have validated online training (Bottini & Gillis, 2021b), self-instruction (Shapiro, Kazemi, Pogosjana, Rios, & Mendoza, 2016; Wishnowski, Yu, Pear, Chand, & Saltel, 2018), and telehealth (Higgins, Luczynski, Carroll, Fisher, & Mudford, 2017). This rich body of research indicates that once a behavior analyst chooses a type of preference assessment for a client, there are a host of available and effective procedures for training staff and others to conduct it.

Chapter summary

Preference assessments have several purposes and copious research support. To identify reinforcers for behavioral programming, behavior analysts may choose from six types of preference assessment: SS, PS, MS, MSWO, FO, and RR. Research generally favors the PS and MSWO, though the FO is useful for reducing the likelihood of problem behavior and for assessing long-duration reinforcers. It is critical to account for MOs when conducting preference assessments, as preferences and reinforcer efficacies may vary from moment to moment. There are several ways to make preference assessments more efficient by reducing trials and session duration. In addition, research supports conducting preference assessments with nontangible items such as social interactions and olfactory stimuli. Behavior analysts should incorporate cultural humility into selecting and testing potential reinforcers, as there are many racial, linguistic, and socioeconomic variables that may affect the types of reinforcers that are acceptable to families. Including parents and guardians in selecting potential reinforcers is essential, and continually learning about each family’s culture will increase the chances of using acceptable reinforcers.




Preference assessments have utility beyond identifying reinforcers for clinical programming. Behavior analysts may assess clients’ preferred interventions, another means of demonstrating cultural humility and compassionate care. Additionally, indices of happiness may be defined and used to assess preference for environments or activities, particularly during the transition from school to adult life, across vocational, leisure, and living environments. Preference assessments are not only for clients with developmental and intellectual disabilities; they also apply to general education students, students with emotional disturbance, and employees in various organizations. Finally, there are many research-based methods for teaching others to conduct preference assessments that lead to desirable outcomes for clients.

References

Aguilar, J. M., Chan, J. M., White, P. J., & Fragale, C. (2017). Assessment of the language preferences of five children with autism from Spanish-speaking homes. Journal of Behavioral Education, 26(4), 334–347. https://doi.org/10.1007/s10864-017-9280-9. Barbera, M. L. (2007). The verbal behavior approach: How to teach children with autism and related disorders. Jessica Kingsley Publishers. Beaulieu, L., & Jimenez, G. C. (2022). Cultural responsiveness in applied behavior analysis: Self‐assessment. Journal of Applied Behavior Analysis. https://doi.org/10.1002/jaba.907. Behavior Analyst Certification Board. (2020). Ethics code for behavior analysts. https://www.bacb.com/wp-content/bacb-compliance-code-future. Berlin, E. A., & Fowkes, W. C. (1983). A teaching framework for cross-cultural health care: Application in family practice. The Western Journal of Medicine, 139(6), 934–938. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1011028/. Bottini, S., & Gillis, J. (2021a). A comparison of the feedback sandwich, constructive-positive feedback, and within session feedback for training preference assessment implementation. Journal of Organizational Behavior Management, 41(1), 83–93. https://doi.org/10.1080/01608061.2020.1862019. Bottini, S., & Gillis, J. (2021b). Use of an online training with virtual role play to teach preference assessment implementation. Journal of Developmental and Physical Disabilities. https://doi.org/10.1007/s10882-021-09788-8. Boyle, M. A., Curtis, K. S., Forck, K. L., Fudge, B. M., Speake, H. N., & Pauls, B. P. (2019). A replication of the response‐restriction preference assessment. Behavioral Interventions, 34(4), 564–576. https://doi.org/10.1002/bin.1683. Brodhead, M. T., Abel, E. A., Al-Dubayan, M. N., Brouwers, L., Abston, G. W., & Rispoli, M. J. (2016). An evaluation of a brief multiple-stimulus without replacement preference assessment conducted in an electronic pictorial format. Journal of Behavioral Education, 25(4), 417–430.
https://doi.org/10.1007/s10864-016-9254-3. Brodhead, M.T., Kim, S.Y., & Rispoli, M. J. (2019). Further examination of video‐based preference assessments without contingent access. Journal of Applied Behavior Analysis, 52(1), 258–270. https://doi.org/10.1002/jaba.507. Call, N. A., Trosclair-Lasserre, N. M., Findley, A. J., Reavis, A. R., & Shillingsburg, M. A. (2012). Correspondence between single versus daily preference assessment outcomes and reinforcer efficacy under progressive-ratio schedules. Journal of Applied Behavior Analysis, 45(4), 763–777.


Cannella-Malone, H. I., Sabielny, L. M., & Tullis, C. A. (2015). Using eye gaze to identify reinforcers for individuals with severe multiple disabilities. Journal of Applied Behavior Analysis, 48(3), 680–684. https://doi.org/10.1002/jaba.231. Castillo, E. M., Quintana, S. M., & Zamarripa, M. X. (2000). Cultural and linguistic issues. In E. S. Shapiro, & T. R. Kratochwill (Eds.), Conducting school-based assessments of child and adolescent behavior (pp. 274–308). The Guilford Press. Chappell, N., Graff, R. B., Libby, M. E., & Ahearn, W. H. (2009). Further evaluation of the effects of motivating operations on preference assessment outcomes. Research in Autism Spectrum Disorders, 3(3), 660–669. https://doi.org/10.1016/j.rasd.2009.01.002. Ciccone, F. J., Graff, R. B., & Ahearn,W. H. (2005). An alternate scoring method for the multiple stimulus without replacement preference assessment. Behavioral Interventions, 20(2), 121–127. https://doi.org/10.1002/bin.177. Clay, C. J., Samaha, A. L., Bloom, S. E., Bogoev, B. K., & Boyle, M. A. (2013). Assessing preference for social interactions. Research in Developmental Disabilities, 34(1), 362–371. https:// doi.org/10.1016/j.ridd.2012.07.028. Clay, C. J., Schmitz, B. A., Clohisy, A. M., Haider, A. F., & Kahng, S. (2021). Evaluation of ­free-operant preference assessment: Outcomes of varying session duration and problem behavior.Behavior Modification,45(6),962–987.https://doi.org/10.1177/0145445520925429. Clevenger, T. M., & Graff, R. B. (2005). Assessing object‐to‐picture and picture‐to‐object matching as prerequisite skills for pictorial preference assessments. Journal of Applied Behavior Analysis, 38(4), 543–547. Čolić, M., Araiba, S., Lovelace, T. S., & Dababnah, S. (2021). Black caregivers’ perspectives on racism in ASD services: Toward culturally responsive ABA practice. Behavior Analysis in Practice. https://doi.org/10.1007/s40617-021-00577-5. Conine, D. E., Morris, S. L., Kronfli, F. R., Slanzi, C. M., Petronelli, A. 
K., Kalick, L., et al. (2021). Comparing the results of one‐session, two‐session, and three‐session MSWO preference assessments. Journal of Applied Behavior Analysis, 54(2), 700–712. https://doi. org/10.1002/jaba.808. Curiel, H., Curiel, E. S. L., Adame, A., & Li, A. (2020). Multiple‐stimulus‐without‐replacement preference assessment tool. Behavioral Interventions, 35(4), 680–690. https://doi. org/10.1002/bin.1732. Curiel, H., Curiel, E. S. L., & Poling, A. (2021). Systematic identification of video preferences and reinforcers in children with autism. Behavior Analysis: Research and Practice, 21(2), 118–127. https://doi.org/10.1037/bar0000203. Davis, C. J., Brock, M. D., McNulty, K., Rosswurm, M. L., Bruneau, B., & Zane, T. (2010). Efficiency of forced choice preference assessment: Comparing multiple presentation techniques. The Behavior Analyst Today, 10(3–4), 440–455. https://doi.org/10.1037/ h0100682. DeLeon, I. G., & Iwata, B. A. (1996). Evaluation of a multiple‐stimulus presentation format for assessing reinforcer preferences. Journal of Applied Behavior Analysis, 29(4), 519–533. Delli Bovi, G. M., Vladescu, J. C., DeBar, R. M., Carroll, R. A., & Sarokoff, R. A. (2017). Using video modeling with voice-over instruction to train public school staff to implement a preference assessment. Behavior Analysis in Practice, 10(1), 72–76. https://doi. org/10.1007/s40617-016-0135-y. Dennison, A., Lund, E. M., Brodhead, M. T., Mejia, L., Armenta, A., & Leal, J. (2019). Delivering home-supported applied behavior analysis therapies to culturally and linguistically diverse families. Behavior Analysis in Practice, 12(4), 887–898. https://doi. org/10.1007/s40617-019-00374-1. Deochand, N., & Costello, M. S. (2022). Building a social justice framework for cultural and linguistic diversity in ABA. Behavior Analysis in Practice. https://doi.org/10.1007/ s40617-021-00659-4.




Dowdy, A., Obidimalor, K. C.,Tincani, M., & Travers, J. C. (2021). Delivering culturally sound and high-quality behavior analytic services when working with an interpreter. Behavior Analysis: Research and Practice, 21(1), 51–64. https://doi.org/10.1037/bar0000206. Feliciano, L., Steers, M. E., Elite-Marcandonatou, A., McLane, M., & Areán, P. A. (2009). Applications of preference assessment procedures in depression and agitation management in elders with dementia. Clinical Gerontologist: The Journal of Aging and Mental Health, 32(3), 239–259. https://doi.org/10.1080/07317110902895226. Fisher, W., Piazza, C. C., Bowman, L. G., Hagopian, L. P., Owens, J. C., & Slevin, I. (1992). A comparison of two approaches for identifying reinforcers for persons with severe and profound disabilities. Journal of Applied Behavior Analysis, 25(2), 491–498. Fisher, W. W., Piazza, C. C., Bowman, L. G., & Amari, A. (1996). Integrating caregiver report with a systematic choice assessment. American Journal on Mental Retardation, 101, 15–25. Fong, E. H., Catagnus, R. M., Brodhead, M. T., Quigley, S., & Field, S. (2016). Developing the cultural awareness skills of behavior analysts. Behavior Analysis in Practice, 9(1), 84–94. https://doi.org/10.1007/s40617-016-0111-6. Giles, A., & Markham,V. (2017). Comparing book- and tablet-based picture activity schedules: Acquisition and preference. Behavior Modification, 41(5), 647–664. https://doi. org/10.1177/0145445517700817. Gottschalk, J. M., Libby, M. E., & Graff, R. B. (2000). The effects of establishing operations on preference assessment outcomes. Journal of Applied Behavior Analysis, 33(1), 85–88. https://doi.org/10.1901/jaba.2000.33-85. Graff, R. B., & Gibson, L. (2003). Using pictures to assess reinforcers in individuals with developmental disabilities. Behavior Modification, 27(4), 470–483. https://doi.org/ 10.1177/0145445503255602. Groskreutz, M. P., & Graff, R. B. (2009). 
Evaluating pictorial preference assessment: The effect of differential outcomes on preference assessment results. Research in Autism Spectrum Disorders, 3(1), 113–128. https://doi.org/10.1016/j.rasd.2008.04.007. Groves, E. A., & Austin, J. L. (2017). An evaluation of interdependent and independent group contingencies during the good behavior game. Journal of Applied Behavior Analysis, 50(3), 552–566. https://doi.org/10.1002/jaba.393. Hagopian, L. P., Rush, K. S., Lewin, A. B., & Long, E. S. (2001). Evaluating the predictive validity of a single stimulus engagement preference assessment. Journal of Applied Behavior Analysis, 34(4), 475–485. https://doi.org/10.1901/jaba.2001.34-475. Hanley, G. P. (2010). Toward effective and preferred programming: A case for the objective measurement of social validity with recipients of behavior-change programs. Behavior Analysis in Practice, 3(1), 13–21. Hanley, G. P. (2012). Functional assessment of problem behavior: Dispelling myths, overcoming implementation obstacles, and developing new lore. Behavior Analysis in Practice, 5(1), 54–72. Hanley, G. P., Iwata, B. A., Lindberg, J. S., & Conners, J. (2003). Response-restriction analysis: I. Assessment of activity preferences. Journal of Applied Behavior Analysis, 36(1), 47–58. https://doi.org/10.1901/jaba.2003.36-47. Higgins, W. J., Luczynski, K. C., Carroll, R. A., Fisher, W. W., & Mudford, O. C. (2017). Evaluation of a telehealth training package to remotely train staff to conduct a preference assessment. Journal of Applied Behavior Analysis, 50(2), 238–251. https://doi.org/ 10.1002/jaba.370. Horrocks, E., & Higbee, T. S. (2008). An evaluation of a stimulus preference assessment of auditory stimuli for adolescents with developmental disabilities. Research in Developmental Disabilities, 29(1), 11–20. https://doi.org/10.1016/j.ridd.2006.09.003. Huntington, R., & Schwartz, I. (2021). A preliminary examination of social preference across assessors. Behavioral Interventions. 
https://doi.org/10.1002/bin.1858.


Kalyanpur, M., & Harry, B. (2012). Cultural reciprocity in special education: Building family-­ professional relationships. Paul H. Brookes Publishing Co. Kang, S., O’Reilly, M., Lancioni, G., Falcomata,T. S., Sigafoos, J., & Xu, Z. (2013). Comparison of the predictive validity and consistency among preference assessment procedures: A review of the literature. Research in Developmental Disabilities, 34(4), 1125–1133. https:// doi.org/10.1016/j.ridd.2012.12.021. Kang, S., O'Reilly, M. F., Fragale, C. L., Aguilar, J. M., Rispoli, M., & Lang, R. (2011). Evaluation of the rate of problem behavior maintained by different reinforcers across preference assessments. Journal of Applied Behavior Analysis, 44(4), 835–846. Karsten, A. M., Carr, J. E., & Lepper, T. L. (2011). Description of a practitioner model for identifying preferred stimuli with individuals with autism spectrum disorders. Behavior Modification, 35(4), 347–369. https://doi.org/10.1177/0145445511405184. Keen, D., & Pennell, D. (2010). Evaluating an engagement-based preference assessment for children with autism. Research in Autism Spectrum Disorders, 4(4), 645–652. https://doi. org/10.1016/j.rasd.2009.12.010. King, S. A. (2016). Multiple-stimulus without replacement preference assessment for students at risk for emotional disturbance. Journal of Behavioral Education, 25(4), 431–454. https:// doi.org/10.1007/s10864-016-9256-1. King, S. A., & Kostewicz, D. E. (2014). Choice-based stimulus preference assessment for children with or at-risk for emotional disturbance in educational settings. Education and Treatment of Children, 37(3), 531–558. https://doi.org/10.1353/etc.2014.0026. Kirby, M. S., Spencer, T. D., & Spiker, S. T. (2022). Humble behaviorism redux. Behavior and Social Issues. https://doi.org/10.1007/s42822-022-00092-4. Kochhar-Bryant, C., Bassett, D. S., & Webb, K. W. (2009). Transition to postsecondary education for students with disabilities. Corwin Press. Kodak, T., Fisher, W. W., Kelley, M. 
E., & Kisamore, A. (2009). Comparing preference assessments: Selection- versus duration-based preference assessment procedures. Research in Developmental Disabilities, 30(5), 1068–1077. https://doi.org/10.1016/j.ridd.2009.02.010. Lanner, T., Nichols, B., Field, S., Hanson, J., & Zane, T. (2010). The clinical utility of two reinforcement preference assessment techniques: A comparison of duration of assessment and identification of functional reinforcers. The Behavior Analyst Today, 10(3–4), 456–466. https://doi.org/10.1037/h0100683. Lavie, T., & Sturmey, P. (2002). Training staff to conduct a paired-stimulus preference assessment. Journal of Applied Behavior Analysis, 35(2), 209–211. https://doi.org/10.1901/ jaba.2002.35-209. Lent, R. W., Brown, S. D., & Hackett, G. (2000). Contextual supports and barriers to career choice: A social cognitive analysis. Journal of Counseling Psychology, 47(1), 36–49. https:// doi.org/10.1037/0022-0167.47.1.36. Lill, J. D., Shriver, M. D., & Allen, K. D. (2021). Stimulus preference assessment ­decision-making system (SPADS): A decision-making model for practitioners. Behavior Analysis in Practice, 14(4), 1144–1156. https://doi.org/10.1007/s40617-020-00539-3. Lohrmann-O’Rourke, S., & Gomez, O. (2001). Integrating preference assessment within the transition process to create meaningful school-to-life outcomes. Exceptionality, 9(3), 157–174. https://doi.org/10.1207/S15327035EX0903_6. MacNaul, H., Cividini, M. C., Wilson, S., & Di Paola, H. (2021). A systematic review of research on stability of preference assessment outcomes across repeated administrations. Behavioral Interventions, 36(4), 962–983. https://doi.org/10.1002/bin.1797. Martinez, S., & Mahoney, A. (2022). Culturally sensitive behavior intervention materials: A tutorial for practicing behavior analysts. Behavior Analysis in Practice. https://doi.org/ 10.1007/s40617-022-00703-x. Michael, J., & Miguel, C. F. (2020). Motivating operations. In J. O. Cooper, T. E. Heron, & W. L. 
Heward (Eds.), Applied behavior analysis (3rd ed., pp. 372–394). Pearson.




Moreno, G., Wong-Lo, M., & Bullock, L. M. (2014). Assisting students from diverse backgrounds with challenging behaviors: Incorporating a culturally attuned functional behavioral assessment in prereferral services. Preventing School Failure, 58(1), 58–68. https:// doi.org/10.1080/1045988X.2012.763156. Morris, S. L., & Vollmer, T. R. (2019). Assessing preference for types of social interaction. Journal of Applied Behavior Analysis, 52(4), 1064–1075. https://doi.org/10.1002/jaba.597. Morris, S. L., & Vollmer, T. R. (2020a). Evaluating the stability, validity, and utility of hierarchies produced by the social interaction preference assessment. Journal of Applied Behavior Analysis, 53(1), 522–535. https://doi.org/10.1002/jaba.610. Morris, S. L., & Vollmer, T. R. (2020b). A comparison of methods for assessing preference for social interactions. Journal of Applied Behavior Analysis, 53(2), 918–937. Ninci, J., Gerow, S., Rispoli, M., & Boles, M. (2017). Systematic review of vocational preferences on behavioral outcomes of individuals with disabilities. Journal of Developmental and Physical Disabilities, 29(6), 875–894. https://doi.org/10.1007/s10882-017-9560-2. O’Handley, R. D., Pearson, S., Taylor, C., & Congdon, M. (2021). Training preservice school psychologists to conduct a stimulus preference assessment. Behavior Analysis in Practice, 14(2), 445–450. https://doi.org/10.1007/s40617-020-00537-5. Pace, G. M., Ivancic, M. T., Edwards, G. L., Iwata, B. A., & Page, T. J. (1985). Assessment of stimulus preference and reinforcer value with profoundly retarded individuals. Journal of Applied Behavior Analysis, 18(3), 249–255. Paclawskyj,T. R., &Vollmer,T. R. (1995). Reinforcer assessment for children with developmental disabilities and visual impairments. Journal of Applied Behavior Analysis, 28(2), 219–224. Paramore, N. W., & Higbee, T. S. (2005). 
An evaluation of a brief multiple-stimulus preference assessment with adolescents with emotional-behavioral disorders in an educational setting. Journal of Applied Behavior Analysis, 38(3), 399–403. https://doi.org/10.1901/jaba.2005.76-04. Parsons, M. B., Reid, D. H., Bentley, E., Inman, A., & Lattimore, L. P. (2012). Identifying indices of happiness and unhappiness among adults with autism: Potential targets for behavioral assessment and intervention. Behavior Analysis in Practice, 5(1), 15–25. Peterson, S. M., Aljadeff-Abergel, E., Eldridge, R. R., VanderWeele, N. J., & Acker, N. S. (2021). Conceptualizing self-determination from a behavioral perspective: The role of choice, self-control, and self-management. Journal of Behavioral Education, 30(2), 299–318. https://doi.org/10.1007/s10864-020-09368-4. Piazza, C. C., Fisher,W.W., Hagopian, L. P., Bowman, L. G., & Toole, L. (1996). Using a choice assessment to predict reinforcer effectiveness. Journal of Applied Behavior Analysis, 29(1), 1–9. https://doi.org/10.1901/jaba.1996.29-1. Raetz, P. B., LeBlanc, L. A., Baker, J. C., & Hilton, L. C. (2013). Utility of the multiple‐stimulus without replacement procedure and stability of preferences of older adults with dementia. Journal of Applied Behavior Analysis, 46(4), 765–780. https://doi.org/10.1002/jaba.88. Reid, D. H., Parsons, M. B., Towery, D., Lattimore, L. P., Green, C. W., & Brackett, L. (2007). Identifying work preferences among supported workers with severe disabilities: Efficiency and accuracy of a preference-assessment protocol. Behavioral Interventions, 22(4), 279–296. https://doi.org/10.1002/bin.245. Resetar Volz, J. L., & Cook, C. R. (2009). Group-based preference assessment for children and adolescents in a residential setting: Examining developmental, clinical, gender, and ethnic differences. Behavior Modification, 33(6), 778–794. Reyes, J. R.,Vollmer, T. R., & Hall, A. (2017). 
Comparison of arousal and preference assessment outcomes for sex offenders with intellectual disabilities. Journal of Applied Behavior Analysis, 50(1), 27–37. https://doi.org/10.1002/jaba.364. Richman, D. M., Barnard-Brak, L., Abby, L., & Grubb, L. (2016). Multiple-stimulus without replacement preference assessment: Reducing the number of sessions to identify preferred stimuli. Journal of Developmental and Physical Disabilities, 28(3), 469–477. https:// doi.org/10.1007/s10882-016-9485-1.


Roane, H. S., Vollmer, T. R., Ringdahl, J. E., & Marcus, B. A. (1998). Evaluation of a brief stimulus preference assessment. Journal of Applied Behavior Analysis, 31(4), 605–620. Rosales, R., Gongola, L., & Homlitas, C. (2015). An evaluation of video modeling with embedded instructions to teach implementation of stimulus preference assessments. Journal of Applied Behavior Analysis, 48(1), 209–214. https://doi.org/10.1002/jaba.174. Rush, K. S., Kurtz, P. F., Lieblein, T. L., & Chin, M. D. (2005). The utility of a paired-choice preference assessment in predicting reinforcer effectiveness for an infant. Journal of Early and Intensive Behavior Intervention, 2(4), 247–251. https://doi.org/10.1037/h0100317. Saunders, M. D., & Saunders, R. R. (2011). Innovation of a reinforcer preference assessment with the difficult to test. Research in Developmental Disabilities, 32(5), 1572–1579. https:// doi.org/10.1016/j.ridd.2011.01.049. Schanding, G. T., Jr., Tingstrom, D. H., & Sterling-Turner, H. E. (2009). Evaluation of stimulus preference assessment methods with general education students. Psychology in the Schools, 46(2), 89–99. https://doi.org/10.1002/pits.20356. Schwartz, I. S., & Kelly, E. M. (2021). Quality of life for people with disabilities: Why applied behavior analysts should consider this a primary dependent variable. Research and Practice for Persons with Severe Disabilities, 46(3), 159–172. Shapiro, M., Kazemi, E., Pogosjana, M., Rios, D., & Mendoza, M. (2016). Preference assessment training via self‐instruction: A replication and extension. Journal of Applied Behavior Analysis, 49(4), 794–808. https://doi.org/10.1002/jaba.339. Simonian, M. J., Brand, D., Mason, M. A., Heinicke, M. R., & Luoma, S. M. (2020). A systematic review of research evaluating the use of preference assessment methodology in the workplace. Journal of Organizational Behavior Management, 40(3–4), 284–302. https://doi. org/10.1080/01608061.2020.1819933. Skinner, B. F. (1938). 

CHAPTER 2

Treatment integrity and procedural fidelity

Tiffany Kodak (Department of Psychology, Marquette University, Milwaukee, WI, United States), Samantha Bergmann (Department of Behavior Analysis, University of North Texas, Denton, TX, United States), and Mindy Waite (Department of Psychology, University of Wisconsin-Milwaukee, Milwaukee, WI, United States)

Applied Behavior Analysis Advanced Guidebook
https://doi.org/10.1016/B978-0-323-99594-8.00002-7
Copyright © 2023 Elsevier Inc. All rights reserved.

Treatment integrity is a necessary component of any study or practice because it indicates the extent to which the independent variable (IV) was consistently implemented as programmed (Gresham, MacMillan, Beebe-Frankenberger, & Bocian, 2000) and, therefore, whether it was responsible for changes in targeted behavior. Studies and service delivery with sufficient treatment integrity (also referred to as procedural fidelity, procedural integrity, treatment fidelity, and adherence) give the researcher or practitioner confidence that behavioral outcomes obtained during the intervention can be attributed to the intervention and not to uncontrolled, extraneous variables. In research, treatment integrity is important to verify that the IV under investigation is responsible for the observed effects on the dependent variable (DV); in practice, it is necessary to conclude that the client's behavior change occurred because the intervention was implemented with sufficient integrity to produce a clinically meaningful improvement. For example, if a behavior technician implements an intervention with high integrity and the client acquires the targeted skill, the clinical team can be confident that the treatment was likely responsible for the behavior change, whereas skill acquisition during low-integrity implementation calls into question which IVs were responsible for the behavior change. The current chapter focuses on treatment integrity in terms of whether the IV is implemented accurately and consistently; this may be referred to as adherence or content integrity (Sanetti & Collier-Meek, 2014). Treatment integrity is described in some fields as a multidimensional construct, and one of those dimensions is adherence. An observer recording whether the implementer provided a tangible item following
correct responses during an instructional period is an example of adherence. In addition to adherence, treatment integrity can be measured and assessed along quantity and process dimensions. Measuring and reporting the dosage, or how often the intervention was implemented, is referred to as quantity integrity (Sanetti & Collier-Meek, 2014). An observer recording the number of times that a skill-acquisition intervention was implemented over the course of one week is an example of quantity integrity. The quality of implementation, or how well the intervention was implemented, is referred to as process integrity (Sanetti & Collier-Meek, 2014). An observer recording the latency from the client's correct response to the delivery of the tangible item (Carroll, Kodak, & Adolf, 2016) is an example of quality. Treatment-integrity errors can reduce the efficacy and efficiency of intervention for individuals who receive behavior-analytic services. Descriptive assessments of behavior-analytic interventions implemented by school personnel and behavior technicians show that treatment-integrity errors occur frequently in practice (Breeman, Vladescu, DeBar, Grow, & Marano, 2020; Carroll, Kodak, & Fisher, 2013; Kodak, Cariveau, LeBlanc, Mahon, & Carroll, 2018). Treatment integrity can affect whether and how quickly the target behavior changes. To understand the impact of treatment integrity on the efficacy and efficiency of behavioral interventions, researchers have manipulated treatment integrity as an IV. In addition, teaching intervention agents how to implement interventions with high treatment integrity is often of interest to researchers and practitioners; therefore, treatment integrity is also a DV in research and practice. This chapter will first describe research that has manipulated treatment integrity as an IV. Thereafter, we will describe how behavior analysts can collect data on treatment integrity as a DV. Next, we will provide examples of how to respond to treatment-integrity errors that occur in practice. Finally, we will describe the importance of reporting treatment integrity in research and practice. Suggestions for additional areas of research inquiry are offered throughout the chapter.

Researching treatment integrity

Researchers have manipulated treatment integrity as an IV to examine the effects of integrity errors on the efficacy and efficiency of intervention. This body of research is divided into examinations of the effects of treatment-integrity errors on behavior reduction or skill acquisition. A scoping review




of treatment-integrity errors is beyond the aim of the current chapter, and there are several recent reviews that summarize the literature on treatment-integrity errors, measurement, and reporting (Brand, Henley, DiGennaro Reed, Gray, & Crabbs, 2019; Collier-Meek, Fallon, & Gould, 2018; Falakfarsa et al., 2021). Therefore, the current chapter includes brief summaries of key findings from research on treatment integrity. Researchers have investigated the effects of two types of treatment-integrity errors on intervention outcomes: errors of omission and errors of commission. An error of omission occurs when the researcher or practitioner fails to implement some component(s) of the treatment protocol; for example, a teacher does not provide praise and access to a tangible item following a student's correct response (Kodak et al., 2018). An error of commission occurs when a researcher or practitioner engages in some behavior that is not part of the protocol, such as adding intervention components or delivering those components at incorrect times. For example, a behavior analyst may lengthen an instruction by adding words that are not part of the learner's skill-acquisition protocol (e.g., "What is it? Come on; you know what this is"; Carroll et al., 2013). Research examining errors of omission shows these errors decrease the efficacy and efficiency of intervention, although the levels of treatment integrity necessary to maintain engagement in appropriate behavior in behavior-reduction interventions are lower than the levels necessary to acquire new skills in skill-acquisition interventions. For example, St. Peter Pipkin, Vollmer, and Sloman (2010; Experiment 1, Subset 1) manipulated errors of omission during a differential reinforcement of alternative behavior (DRA) analogue with college students. Participants continued to emit the appropriate behavior rather than the problem behavior even when integrity dropped to low levels (i.e., 40% integrity). These results suggested that DRA was a robust intervention even with high levels of omission errors. In comparison, even small decrements in integrity produced by errors of omission can hinder skill acquisition. For example, Bergmann, Kodak, and LeBlanc (2017) examined the effects of omitting reinforcement on skill acquisition with two typically developing children. When the researchers omitted reinforcement following correct responses on approximately 17% to 20% of trials (i.e., approximately 80% integrity), participants required double the number of sessions of instruction to reach mastery (Experiment 1), and one participant failed to acquire the targeted skill (Experiment 2).


Errors of commission can also reduce the efficacy and efficiency of interventions for behavior reduction and skill acquisition. St. Peter Pipkin et al. (2010; Experiment 1, Subset 2) programmed a DRA analogue that included errors of commission following problem behavior. When the DRA was implemented with 60% or greater integrity, the participants engaged in low rates of problem behavior and high rates of appropriate behavior. However, when integrity dropped to 40% and 20%, the participants began to engage in higher rates of problem behavior compared to appropriate behavior. These results suggested that DRA can continue to have robust effects even when commission errors follow problem behavior 40% of the time. In contrast, skill acquisition can be impacted greatly by errors of commission. DiGennaro Reed, Reed, Baez, and Maguire (2011) examined the effects of errors of commission on skill acquisition by providing reinforcers following 0%, 50%, or 100% of incorrect responses (i.e., 100%, 50%, and 0% treatment integrity) for participants with autism spectrum disorder (ASD). Results showed all three participants had higher levels of correct responses in the 0% errors condition, whereas correct responding was similarly low in the 50% and 100% errors conditions. Bergmann et al. (2017) extended these findings by showing that 20% commission errors also hindered learning. More recently, researchers have examined the effects of varying levels of isolated or combined errors of omission and commission on behavior reduction and skill acquisition. In a parametric analysis of the effects of omission errors during response interruption and redirection (RIRD) on levels of vocal stereotypy, Colón and Ahearn (2019) compared 25%, 50%, 75%, and 100% integrity of RIRD implementation contingent on vocal stereotypy. They found that levels of integrity at or above 50% resulted in reductions in vocal stereotypy for participants with ASD. In a parametric analysis of the effects of combined errors of omission and commission on acquisition, Bergmann, Kodak, and Harman (2021) arranged instruction that included errors during 0%, 5%, 10%, 15%, 20%, 25%, and 50% of instructional trials. On programmed-error trials, depending on the participant's response, the computer program either made an error of omission by withholding reinforcement (i.e., failing to provide praise and a point) following a correct response or made an error of commission by providing reinforcement following an incorrect response. Results showed reinforcement errors occurring during 15% or fewer trials (i.e., at least 85% integrity) produced mastery-level responding for most participants. Reinforcement errors occurring during 20% or more of trials (i.e., 80% integrity or lower) delayed or prevented acquisition.
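A trial arrangement like the one above can be sketched programmatically. The snippet below is a hypothetical illustration, not the actual program from Bergmann et al. (2021): it samples which trials in a session will be programmed-error trials for a given error percentage, then inverts the reinforcement contingency only on those trials (omission after correct responses, commission after incorrect responses). Function names and parameters are assumptions for the sketch.

```python
import random

def plan_error_trials(n_trials, error_percent, rng):
    """Randomly designate which trial numbers are programmed-error trials."""
    n_errors = round(n_trials * error_percent / 100)
    return set(rng.sample(range(n_trials), n_errors))

def deliver_reinforcement(trial, response_correct, error_trials):
    """Return True if praise/points should be delivered on this trial.

    On programmed-error trials the contingency is inverted: reinforcement is
    withheld after a correct response (omission error) and delivered after an
    incorrect response (commission error).
    """
    if trial in error_trials:
        return not response_correct
    return response_correct

rng = random.Random(0)  # seeded only to make the illustration reproducible
error_trials = plan_error_trials(n_trials=20, error_percent=20, rng=rng)
programmed_integrity = 100 * (20 - len(error_trials)) / 20  # 80.0
```

With 20 trials and a 20% error level, 4 trials carry a programmed error, yielding 80% programmed integrity for the session.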




Descriptive assessments of treatment-integrity errors

The prevalence of treatment-integrity errors in practice is an understudied area of research on treatment integrity. However, the few descriptive assessments of integrity errors in practice suggest practitioners and stakeholders do engage in errors of omission and commission when providing behavioral intervention to individuals with developmental disabilities. For example, Carroll et al. (2013) conducted a descriptive assessment of educational teaching practices in schools for children with ASD (Experiment 1). They observed the teaching behaviors of (a) establishing ready behavior, (b) securing attention, (c) providing a clear instruction, (d) presenting the instruction once, (e) providing praise contingent on a correct response, (f) delivering a tangible or edible item contingent on a correct response, (g) providing a controlling prompt, and (h) responding to problem behavior by not attending, removing demands, blocking problem behavior, or some combination (based on the child's intervention plan). In the observations, special education teachers, a speech pathologist, and paraeducators implemented these instructional components correctly during 21% to 74% of trials. Establishing ready behavior was the most frequently observed teacher behavior (occurred during 74% of trials), whereas providing a controlling prompt and delivering a contingent tangible following a correct response were the least frequently observed teacher behaviors (occurred during 41% and 21% of trials, respectively). In a follow-up study, Kodak et al. (2018) measured the same teacher behaviors, plus four additional teacher behaviors, during receptive-identification instruction in six special education classrooms. The four additional teacher behaviors included (a) withholding reinforcement following an error or no response, (b) randomizing presentation of materials, (c) not providing unplanned prompts, and (d) repeating instructions. Observed trials (N = 290) were divided into two categories: trials for skills in later stages of acquisition (n = 231; 80% of trials) and trials for skills that were not yet mastered (n = 59; 20% of trials). Results showed substantially higher levels of correct teacher behavior for skills in later stages of acquisition in comparison to not-yet-mastered skills. For example, 8 of the 11 teacher behaviors occurred during 70% or more of trials for skills in later stages of acquisition, whereas only 3 of the 11 teacher behaviors occurred during 70% or more of trials for not-yet-mastered skills. Interestingly, the two teacher behaviors that occurred in the smallest proportion of trials were the same across types of skills; the teachers presented a


controlling prompt and ignored and blocked problem behavior on fewer than 30% of trials for skills in later stages of acquisition and 15% of trials for not-yet-mastered skills. Descriptive assessments of errors during interventions for behavior reduction also show the frequency of errors of omission and commission in natural settings. In a descriptive assessment of teacher errors during the implementation of timeout from play, Foreman, St. Peter, Mesches, Robinson, and Romano (2021) examined five students’ behavior intervention plans that included timeout from play contexts following problem behavior. The authors scored omission errors when the teacher failed to use timeout within 2 min of the occurrence of problem behavior. Commission errors were scored when timeout was implemented when no problem behavior occurred during the 2 min before timeout. Teachers were most likely to make omission errors because they rarely implemented timeout following problem behavior. Teachers were unlikely to make errors of commission but did implement timeout when behavior not included in the behavior intervention plan occurred on two occasions. Overall integrity for timeout ranged from 3% to 50% across students. Nevertheless, the low levels of integrity observed in the descriptive assessment did result in reductions in problem behavior for two students when evaluated in Experiment 2. A descriptive assessment of teacher errors in implementation of RIRD in a specialized program for children with ASD showed teachers implemented RIRD with relatively high levels of integrity (i.e., component integrity above 85%, except for one component and one teacher; Colón & Ahearn, 2019). However, teachers frequently engaged in omission errors because they did not consistently implement RIRD contingent on vocal stereotypy. Overall integrity for RIRD implementation ranged from 29.1% to 89.7% across three teachers. 
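The 2-min scoring rules above lend themselves to a procedural statement. The sketch below is a simplified illustration with hypothetical timestamps, not code from Foreman et al. (2021): a problem-behavior episode with no timeout in the following 2 min is scored as an omission error, and a timeout with no problem behavior in the preceding 2 min is scored as a commission error. Function and variable names are assumptions for the sketch.

```python
TWO_MIN = 120  # seconds; the 2-min criterion described by Foreman et al. (2021)

def score_timeout_errors(problem_times, timeout_times, window=TWO_MIN):
    """Classify omission and commission errors from event timestamps (in s)."""
    # Omission: no timeout occurred within `window` s after this episode.
    omissions = [p for p in problem_times
                 if not any(p <= t <= p + window for t in timeout_times)]
    # Commission: no episode occurred within `window` s before this timeout.
    commissions = [t for t in timeout_times
                   if not any(t - window <= p <= t for p in problem_times)]
    return omissions, commissions

# Hypothetical session: problem behavior at 10 s and 300 s; timeout implemented
# at 60 s (within 2 min of an episode) and at 700 s (no preceding episode).
omissions, commissions = score_timeout_errors([10, 300], [60, 700])
```

In this hypothetical record, the episode at 300 s is scored as an omission error and the timeout at 700 s as a commission error.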
The results of descriptive assessments may not be surprising if we assume that these integrity errors are likely occurring due to the absence of training. However, the teachers in Kodak et al. (2018) had participated in mandated instruction on a curriculum that included behavior-analytic practices (e.g., reinforcement, prompts). In addition, findings in the behavioral consultation literature show that teachers who are trained to implement behavioral interventions with 100% integrity show reductions to low levels of integrity within ten days of training (Noell, Witt, Gilbertson, Ranier, & Freeland, 1997; Sanetti, Luiselli, & Handler, 2007; Witt, Noell, LaFleur, & Mortenson, 1997). Even employees who were currently enrolled in a master’s degree program in Behavior Analysis and providing services to children with ASD in a private clinic also made errors on components of intervention (Breeman




et al., 2020). Thus, although staff and stakeholder training is a critical component of service delivery to increase the likelihood of correct intervention implementation, additional steps are needed to monitor treatment integrity throughout intervention to maintain accurate implementation. Taken together, results from experimental manipulations and descriptive analyses of treatment integrity provide evidence that treatment-integrity errors can have detrimental effects on client outcomes and that such errors occur in practice. Therefore, researchers and practitioners must measure treatment integrity. In the absence of these data, practitioners cannot determine whether an intervention is being implemented with sufficient integrity to support attainment of intervention goals for clients or whether the intervention is the reason for behavior change.

Measuring treatment integrity

The assessment of treatment integrity requires valid and reliable measurement systems (Gresham et al., 2000). An accurate measurement system captures the objective, topographical features of the treatment (Johnston & Pennypacker, 1993). The level of specificity varies in research and practice from molar (e.g., use differential reinforcement with children to increase instruction following) to molecular (e.g., provide the instruction "hang up your coat"; if the child hangs up their coat within 5 s, say, "good job hanging up your coat!"; Gresham et al., 2000). Treatment-integrity data can be collected via direct and indirect assessments. Both direct and indirect assessments can be accurate measures of treatment integrity, and both require specifying and defining the treatment and its components before implementation (Gresham et al., 2000).

Direct assessment

Direct assessment of treatment integrity involves trained observers systematically observing implementation of the intervention in vivo or from video. Direct assessments are considered the most accurate method for assessing treatment integrity (Gresham et al., 2000). Direct assessment may also be the most common; for example, most studies on performance feedback (75.9%, N = 58) used direct observation to collect treatment-integrity data (Collier-Meek et al., 2018). An observation system for direct assessment of treatment integrity requires the development of operational definitions for each component of treatment. A task analysis should be created that includes clearly described components and subtasks accounting for the specific verbal, physical, temporal, and spatial components of the intervention (Gresham et al., 2000).


Table 1  Task analysis of teacher behavior and definitions in discrete-trial instruction.

1. Establishes ready behavior: Teacher waits to present an instruction until the student does not engage in disruptive movements of the limbs and is oriented toward the teacher (i.e., shoulders facing the teacher).
2. Secures attention: Teacher requires the student to look (prompted or unprompted) at training materials before presenting the instruction.
3. Clear instruction: Teacher presents an instruction that is concise, clearly specifies the target behavior, and does not include unnecessary words.
4. Provides a prompt: Teacher provides a vocal model of the target response within the specified time period (e.g., within 5 s of the instruction).
5. Delivers praise: Praise is delivered within 5 s of a correct unprompted or prompted response.
6. Delivers reinforcer: A preferred tangible or edible item is delivered within 5 s of a correct unprompted or prompted response.
7. Implements error correction: Teacher removes and then re-presents the target, repeats the instruction, and waits 5 s for a correct response.
8. Ignores/blocks problem behavior: Teacher does not provide verbal or physical attention, minimizes facial expressions following problem behavior, and continues with the current trial. If it is necessary to block dangerous behavior, the teacher rearranges the environment or uses the minimum amount of physical interaction necessary to keep the student safe.

Note. Some teacher behaviors and definitions are from Carroll et al. (2013).

However, there can be variations in the measures used for data collection. Depending on the level of specificity, the complexity of treatment-integrity measures can vary greatly in research and practice. The most complex treatment-integrity measurement system includes a task analysis with each treatment component described with molecular-level detail (refer to Table 1 for an example). Data are collected on the occurrence or nonoccurrence of each component on every opportunity (see Fig. 1 for an example). On the considerably less detailed end of the continuum, an all-or-nothing rating is




Fig. 1  Example of occurrence and nonoccurrence data collection to calculate global and component integrity. Note. Sample data sheet for monitoring treatment integrity with occurrence and nonoccurrence recording, with hypothetical data for DTI. Mark whether the therapist implemented the trial component as described (1), with an error (0), or had no opportunity (X). The occurrence and nonoccurrence data sheet can be used to calculate multiple treatment-integrity estimates. Calculate individual component scores by summing the number of times the component was implemented correctly, dividing by the number of opportunities, and multiplying by 100 to obtain a percentage. Calculate a global score by summing the number of components implemented correctly, dividing by the number of opportunities, and multiplying by 100 to obtain a percentage. Calculate all-or-nothing by component by summing the number of components implemented correctly on every opportunity (i.e., 2 in this example), dividing by the number of components (i.e., 9), and multiplying by 100 to obtain a percentage (i.e., 22%). Calculate all-or-nothing by trial by summing the number of trials with all components implemented correctly (i.e., 2 in this example), dividing by the number of trials (i.e., 5), and multiplying by 100 to obtain a percentage (i.e., 40%).
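The estimates described in the note to Fig. 1 can be computed with a short script. This is an illustrative sketch with hypothetical data (a 5-trial, 4-component record patterned after Table 2, not the record shown in Fig. 1); 1 = implemented correctly, 0 = error, and None stands in for the no-opportunity (X) code.

```python
def component_integrity(scores):
    """Percentage of opportunities on which one component was implemented correctly."""
    opportunities = [s for s in scores if s is not None]  # drop no-opportunity cells
    return 100 * sum(opportunities) / len(opportunities)

def global_integrity(data):
    """Aggregate percentage across all components and all opportunities."""
    cells = [s for scores in data.values() for s in scores if s is not None]
    return 100 * sum(cells) / len(cells)

def all_or_nothing_by_component(data):
    """Percentage of components implemented correctly on every opportunity."""
    # None (no opportunity) does not count against a component here.
    perfect = sum(all(s != 0 for s in scores) for scores in data.values())
    return 100 * perfect / len(data)

def all_or_nothing_by_trial(data):
    """Percentage of trials with every component implemented correctly."""
    n_trials = len(next(iter(data.values())))
    perfect = sum(all(scores[t] != 0 for scores in data.values())
                  for t in range(n_trials))
    return 100 * perfect / n_trials

# Hypothetical 5-trial record: component name -> score per trial.
data = {
    "attending": [1, 1, 0, 1, 1],
    "sd":        [1, 1, 1, 1, 1],
    "prompt":    [1, 1, 1, 0, 1],
    "sr_plus":   [0, 0, 0, 0, 0],
}
```

For this record, global integrity is 65% (13/20), component integrity ranges from 0% (SR+) to 100% (SD), and the all-or-nothing estimates are 25% by component and 0% by trial, illustrating how the same observation yields different values under each measure.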


obtained for how well the behavior-change agent implemented the treatment in each trial (see Table 2, last column, for an example). Regardless of the complexity of the measurement system, Gresham et al. (2000) suggest that treatment components should be made as explicit as possible to reduce the likelihood that observer bias or inference influences data collection. Occurrence and Nonoccurrence of Components. Measuring the occurrence and nonoccurrence of each component involves collecting data on whether each component of treatment (e.g., delivers reinforcement) was implemented correctly on each opportunity (e.g., Trial 1, Trial 2, etc.). For example, Cook et al. (2015) collected data on each component during each trial when novice instructors implemented discrete-trial instruction (DTI) with a learner with ASD. Occurrence and nonoccurrence data for each component assess the degree of treatment integrity for each component over time, and this approach is considered the gold standard (Gresham et al., 2000). To collect occurrence and nonoccurrence data for each component, a task analysis must be created, and the data collection system should allow a data collector to indicate whether each component was implemented as intended. Refer to Fig. 1 for an example of data collection on the occurrence and nonoccurrence of each component of DTI per trial within a teaching opportunity. Using data obtained from occurrence and nonoccurrence measures, one can compute global and component integrity. Global integrity is an aggregate score of all components across all opportunities (Cook et al., 2015). To compute global integrity, sum the number of components correctly implemented (i.e., all of the 1s in Fig. 1), divide the sum by the total number of components that could have been observed (i.e., all of the 1s and 0s in Fig. 1), and multiply the quotient by 100 to obtain a single percentage for the observation. Refer to Table 2 and Fig. 1 for example calculations of global integrity during a 0-s prompt delay session of DTI. Global integrity is easy and convenient to communicate and provides a snapshot of the overall accuracy of intervention implementation; however, a global score can obscure treatment-integrity errors (Cook et al., 2015). Component integrity yields a percentage for each individual component in an intervention. To compute component integrity, sum the number of occurrences wherein the individual component was implemented correctly, divide by the number of opportunities to implement the component, and multiply the quotient by 100 to obtain a percentage. Repeat the computations for each individual component. Refer to Table 2 and Fig. 1 for example calculations of component integrity for four components

Table 2  Comparison of outcomes of treatment-integrity measures for a 0-s prompt delay session.

Trial | Client behavior | Implementer behavior                 | Global scoring | Event recording (prompts) | All-or-nothing by trial | Component scoring (Attending, SD, Prompt, SR+)
1     | Attended PC     | ✓ Attending, ✓ SD, ✓ Prompt, ✗ SR+   | 3/4            | I                         | 0                       | 1, 1, 1, 0
2     | Attended PC     | ✓ Attending, ✓ SD, ✓ Prompt, ✗ SR+   | 3/4            | I                         | 0                       | 1, 1, 1, 0
3     | No attend PC    | ✗ Attending, ✓ SD, ✓ Prompt, ✗ SR+   | 2/4            | I                         | 0                       | 0, 1, 1, 0
4     | Attended PC     | ✓ Attending, ✓ SD, ✗ Prompt, ✗ SR+   | 2/4            | II                        | 0                       | 1, 1, 0, 0
5     | Attended PC     | ✓ Attending, ✓ SD, ✓ Prompt, ✗ SR+   | 3/4            | I                         | 0                       | 1, 1, 1, 0
Total |                 |                                      | 13/20 (65%)    | 6                         | 0/5 (0%)                | 4/5 (80%), 5/5 (100%), 4/5 (80%), 0/5 (0%)

Note. This table is adapted from Kodak et al. (accepted). The table shows how different treatment-integrity measures yield different estimates of treatment integrity for the same observation. A checkmark (✓) represents that the implementer correctly implemented the component; an ✗ represents that the implementer made an error when implementing the component.


of DTI. Component-integrity scores can highlight areas of poor performance, and these data can be used to guide training (Cook et al., 2015). Gresham et al. (2000) recommend computing both global and component scores across observation periods, whereas Cook et al. (2015) recommend using component-integrity scores. Further comparisons of global and component-integrity scores could help researchers and practitioners identify the ideal conditions for collecting and analyzing each type of integrity. A limitation of using occurrence and nonoccurrence of components to estimate treatment integrity is that it can be challenging to collect these data while observing implementation live; collecting these data likely requires an observer to view a video recording of the session. If live data collection is the only option, occurrence and nonoccurrence data can be collected for a selection of components rather than all components of a protocol. Although some components of the IV may be more essential to producing changes in the DV than others, additional research is needed to determine the most crucial components of behavioral interventions. Once identified, treatment-integrity data collection could focus solely on these specific components, which may make data collection during direct observation easier. Checklists. Collecting data with a checklist involves indicating whether treatment components were implemented. Like occurrence and nonoccurrence measures, creating a task analysis is the first step to using a checklist to determine treatment integrity. A checklist may include (a) dichotomous ratings (i.e., yes or no) or (b) a rating scale that measures the extent to which components were implemented correctly or incorrectly. A checklist that includes dichotomous ratings is an all-or-nothing by component measure of integrity because the component must be implemented correctly each time to be scored as correct. When using a checklist, one must determine how the scores will be summed to produce a percentage of treatment integrity for the observation (Sanetti & Collier-Meek, 2014). For example, treatment integrity from a dichotomous checklist can be computed by summing the number of components implemented correctly, dividing by the number of components, and multiplying by 100 to obtain a percentage. Refer to Fig. 2 for an example checklist that can be used to score implementation of components of DTI. If using ratings, the scores can be summed and divided by the number of components to produce an average score. Completing checklists while observing intervention implementation live (whether in person or virtually) or via recorded video is recommended for direct assessment, but some research suggests that checklists are used most often

Fig. 2  Example of a checklist to assess treatment integrity. Note. Sample data sheet for monitoring treatment integrity with a checklist, with hypothetical data for DTI. N/A = not applicable. Mark whether the therapist implemented each component as described by the protocol (Y), did not implement it (N), or had no opportunity to implement it (N/A). Calculate the total correct components by summing all Ys, dividing by the number of components, and converting to a percentage.

as postobservation measures (see Indirect Assessment below; Collier-Meek, Sanetti, Gould, & Pereira, 2021). A generic checklist may be used to assess treatment integrity during an observation session, but customized checklists reflecting the specific components of treatment are recommended (Collier-Meek, Sanetti, Gould, & Pereira, 2021). For example, an observer could use a generic checklist for
DTI with general components (e.g., uses correct prompting strategy, provides reinforcer according to schedule) or one specified for the individual client (e.g., requires attending, presents the programmed discriminative stimulus, provides a model prompt following a 0-s delay, provides praise and a token following correct responses). An individualized checklist can facilitate discrimination of correctly implemented treatment components and generate more specific data on errors for monitoring internal validity and training needs. Nevertheless, it could be time-consuming to create individualized checklists for many interventions and clients (Collier-Meek, Sanetti, Gould, & Pereira, 2021). Future research could evaluate treatment-integrity estimates using generic and customized checklists, decisions that supervisors might make about training and intervention with data produced by each checklist, and preferences of data collectors for each checklist type.

Event Recording. Event recording can be used to determine how frequently an intervention component is implemented. Event recording involves collecting data (e.g., a tally) each time a component is implemented as planned. For example, event recording could be used to count the number of behavior-specific praise statements delivered by a teacher during an instructional period (Collier-Meek, Sanetti, Gould, & Pereira, 2021). To compute integrity with event recording, one sums the number of times the intervention component was implemented correctly (see Table 2). Event recording may be easier to collect during an observation than occurrence and nonoccurrence data. However, event recording rarely includes all components of an intervention (Collier-Meek, Sanetti, Gould, & Pereira, 2021). Deciding which components to include could involve identifying the active variables in treatment, a decision for which there are currently few data to provide guidance.
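Event recording with error tracking might be sketched as a simple tally, loosely modeled on a differential-reinforcement procedure (the event names below are hypothetical labels, not part of any published protocol):

```python
from collections import Counter

# Hypothetical event log for one observation period: each entry is one
# observed implementer behavior (labels are illustrative).
events = [
    "token_for_on_task",         # correct implementation
    "token_for_on_task",         # correct implementation
    "attention_for_hand_raise",  # correct implementation
    "no_token_for_on_task",      # error of omission
    "token_for_off_task",        # error of commission
]

counts = Counter(events)
correct = counts["token_for_on_task"] + counts["attention_for_hand_raise"]
omissions = counts["no_token_for_on_task"]
commissions = counts["token_for_off_task"]
print(correct, omissions, commissions)  # raw counts, not a percentage
```

Note that, as the text points out, these are raw counts; without knowing the number of opportunities, they cannot be converted into a percentage of integrity.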
Event recording is most appropriate when a behavior is discrete, brief, implemented multiple times in an observation period (e.g., frequency of praise rather than setting up materials), and not occurring so frequently that data collection is difficult (Collier-Meek, Sanetti, Gould, & Pereira, 2021). Refer to Fig. 3 for an example event-recording form used to measure an instructor's reinforcer delivery following a student's desired classroom behaviors. An issue with event recording is that this measure may not account for missed opportunities (errors of omission) or instances wherein extra components were implemented (errors of commission). However, specific errors of omission and commission may be added to the data sheet (Vollmer, Sloman, & St. Peter Pipkin, 2008). For example, in a DRA procedure, the observer could count the times that the implementer failed to provide a

Fig. 3  Example of a data sheet for event recording to assess treatment integrity. Note. Sample data sheet for monitoring event recording for treatment integrity with hypothetical data for a differential reinforcement procedure. The procedure includes programmed token reinforcers for the child engaging in on-task behavior and programmed attention if the child raises their hand before speaking. This example also allows tracking of errors of omission and commission in the third and fourth rows. Mark a tally each time the instructor implements each component. Count the instances of each behavior.

reinforcer following the appropriate behavior (i.e., error of omission) or delivered the reinforcer following inappropriate behavior (i.e., error of commission; see Fig. 3). Further, event recording does not control for the number of opportunities to implement a component across time. For example, a teacher may correctly implement all instances of an intervention component for two days yet have different event-recording counts per day due to varied opportunities to implement the component. Future research could explore interventions for which event recording is a valid method for estimating treatment integrity.

All-or-Nothing by Trial/Opportunity. In this measurement method, referred to as whole-session integrity by Brand et al. (2019), a single rating is given per trial/opportunity. The trial/opportunity is marked as implemented with integrity if and only if the practitioner correctly implemented all applicable components. For example, only when the practitioner presented the discriminative stimulus, provided the correct prompt,
implemented the error-correction strategy, and delivered the reinforcer as written would the trial be marked as correct. An error of omission or commission on any of the prescribed components would result in no credit for correct implementation. To compute all-or-nothing integrity, one sums the trials/opportunities implemented correctly, divides the sum by the total trials/opportunities within an observation period, and multiplies by 100 to obtain a percentage (see example in Table 2, last column; see sample data sheet in Fig. 4). Because a single error in implementing any of the components results in the whole trial/opportunity being marked as lacking integrity, this method is more stringent than other treatment-integrity measures. However, there are limitations to the use of all-or-nothing integrity measures. If each trial/opportunity is given a single score, a researcher or practitioner cannot determine the degree of error (i.e., one error or more). Furthermore, all-or-nothing by trial/opportunity provides no guidance on which components were incorrect. This could affect conclusions about internal validity and limit selection of which components need to be retrained. The latter issue would result in a need to retrain all steps of an intervention, which could be resource intensive, when the implementer may only need to practice one or two components.
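The all-or-nothing computation just described can be sketched as follows (the trial data and component names are hypothetical):

```python
def all_or_nothing_integrity(trials):
    """Percentage of trials in which every applicable component was correct."""
    correct_trials = sum(all(trial.values()) for trial in trials)
    return 100 * correct_trials / len(trials)

# Each dict maps component -> implemented correctly (names illustrative);
# inapplicable components would simply be omitted from a trial.
trials = [
    {"present_sd": True, "prompt": True,  "reinforce": True},
    {"present_sd": True, "prompt": False, "reinforce": True},   # one error -> no credit
    {"present_sd": True, "prompt": True,  "reinforce": True},
    {"present_sd": True, "prompt": True,  "reinforce": False},  # one error -> no credit
]

print(all_or_nothing_integrity(trials))  # 2 of 4 trials fully correct -> 50.0
```

A component-level measure applied to the same data would report much higher integrity (10 of 12 component implementations correct), which illustrates why this method is the more stringent of the two.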

Fig. 4  Example of a data sheet for all-or-nothing by trial to assess treatment integrity. Note. Sample data sheet for monitoring treatment integrity via all-or-nothing by trial with hypothetical data for DTI. For each trial, circle Y if the instructor implemented all of the components correctly. Circle N if the instructor made one or more errors on any component within a trial. Sum the number of trials with all components correct, divide by the number of trials observed, and convert to a percentage.

Time Sampling. Whole- or partial-interval recording and momentary time sampling are options for assessing treatment integrity. Time sampling involves dividing an observation period into intervals and recording the presence or absence of the intervention component(s) during each interval. These sampling methods are most appropriate when a behavior does not have a discrete start and end time, occurs for an extended duration, or occurs frequently throughout an observation period. To compute treatment integrity, one sums the intervals wherein the intervention was implemented correctly, divides by the number of intervals in the observation period, and multiplies by 100 to obtain a percentage. A benefit of partial-interval recording and momentary time sampling, in particular, is that the observer need only monitor implementation until the first instance in each interval or at specific moments in time, respectively. Although Vollmer et al. (2008) recommended and provided an example of collecting treatment-integrity data using interval recording, using time sampling to assess treatment integrity has limited empirical support (Collier-Meek, Sanetti, Gould, & Pereira, 2021), and more research is needed to assess the validity, reliability, and feasibility of this assessment method and the behavioral interventions to which it may be applicable.

Potential Limitations of Direct Assessment. Directly measuring each component of an intervention at every instance in which it should be implemented would be ideal, but measuring treatment integrity in this way is unrealistic in many contexts. The frequency and duration of intervention can affect how many and which steps are observed regularly. Due to the resources required to conduct direct observations, this method is typically used to assess only a sample.
It is likely that interventions with few, discrete steps can be observed more regularly, whereas an intervention implemented throughout the course of the day is observed less regularly (Sanetti & Collier-Meek, 2014). Because of this, it is possible that direct observations of treatment integrity may not accurately reflect the integrity of a procedure over time, especially if treatment-integrity data are not collected from the outset of intervention and frequently throughout the course of intervention (Noell, 2008). Although direct observation is widely considered the preferred assessment method, observer reactivity can be a concern; however, this concern has not been supported by research (e.g., Codding, Livanis, Pace, & Vaca, 2008). Observer reactivity occurs when an implementer provides intervention differently as a function of being observed. That is, data collected during the observation may not be an accurate representation of integrity
outside of the observation. Recommendations to reduce observer reactivity include conducting observations on a random schedule; conducting multiple, short observations; arranging the environment such that the observation is as unobtrusive as possible; and not communicating the purpose of the observation (Gresham, 2014).

Indirect assessment

Rather than collecting data on an implementer's provision of intervention components from live or video observations, treatment integrity can be assessed using indirect assessments (e.g., rating scales, permanent products). Direct observation is recommended (Gresham et al., 2000); nevertheless, situational constraints may preclude direct assessment, such as very limited staff resources or implementation in protected or sensitive contexts (e.g., a procedure to reduce bed wetting is implemented in the middle of the night, or caregivers do not wish for practitioners to be present while bathing their child). Although limited in scope, indirect assessments can provide some estimate of treatment integrity.

Rating Scales. Indirect behavior rating scales (e.g., Likert scales) are commonly completed following an observation session and may include only one rating for the entire observation period (Gresham, 2014). Rather than recording the occurrence and nonoccurrence of the components of the intervention, the observer scores implementation along a continuum. For example, the observer would record the extent to which they agree with the statement, "The implementer provided clear instructions." Refer to Fig. 5 for an example rating scale of an instructor's implementation of DTI. Another approach to rating scales involves providing a score or estimate within a range (Suhrheinrich et al., 2020). For example, Suhrheinrich et al. (2020) compared treatment-integrity estimates from occurrence and nonoccurrence data with those from three-point and five-point Likert scales during providers' implementation of Pivotal Response Training. The Likert scales included anchors that captured ranges (e.g., in the five-point Likert scale, a score of four indicated that the "Provider implements competently most of the time but misses some opportunities [80–99%]," Suhrheinrich et al., 2020, p. 34).
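One way to operationalize such range-based anchors is to map an observed implementation percentage onto the scale. In this sketch, only the 80–99% band for a score of four comes from the quoted anchor; the remaining band boundaries are assumptions made for illustration:

```python
# Lower bound of each band -> Likert score. Only the 80-99% band (score 4)
# is taken from the quoted anchor; the other boundaries are assumed.
BANDS = [(0, 1), (20, 2), (50, 3), (80, 4), (100, 5)]

def likert_from_percent(percent):
    """Map an observed integrity percentage onto a five-point scale."""
    score = 1
    for lower_bound, anchor in BANDS:
        if percent >= lower_bound:
            score = anchor
    return score

print(likert_from_percent(85))   # falls in the 80-99% band -> 4
print(likert_from_percent(100))  # perfect implementation -> 5
```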
The researchers found high exact agreement between occurrence and nonoccurrence (called Trial-by-Trial by Suhrheinrich et al., 2020) and the five-point Likert scale. In addition to a general rating for overall integrity, rating scales can be used to estimate integrity of individual components. That is, the observer

Fig. 5  Example of a data sheet of a rating scale to assess treatment integrity. Note. Sample data sheet for monitoring treatment integrity with a rating scale with hypothetical data for DTI. N/A = not applicable. Rate the therapist's implementation of each component as described by the protocol: always (1), usually (2), sometimes (3), rarely (4), or never (5). Report the rating for each component individually (e.g., Securing Learner's Attention was a 1) or compute the average rating across components. Calculate the average rating by summing the scores and dividing by the number of behaviors.

rates implementation of each component in an intervention. If the observer provides a rating for each component, the treatment-integrity estimate can be reported as either an individual component rating (e.g., Securing Learner’s Attention was a 1; Fig.  5) or an average across components. To compute the average rating across components, sum the component ratings and divide by the number of components (see Fig. 5). Additional research needs to evaluate the use of rating scales as an indirect assessment of treatment integrity, but rating scales may be easier to use in practice than direct assessments like occurrence and nonoccurrence (Suhrheinrich et al., 2020). If there is high correspondence between occurrence and nonoccurrence
and rating-scale data, there is the potential that observers may find this format more feasible and be more likely to collect treatment-integrity data.

Permanent Products. Permanent-product review involves using physical materials generated by the intervention to determine the extent to which the intervention was implemented with integrity (Sanetti & Collier-Meek, 2014). Permanent-product review is possible when intervention components create an observable, lasting change in the environment. If the intervention generates permanent products, this indirect assessment method can be advantageous. These advantages include collecting data on a more flexible schedule and for a greater proportion of sessions. Collecting permanent products can occur any time after the intervention is implemented; therefore, specific observation periods do not need to be scheduled, and the observer does not need to be present for long periods. Permanent-product review might be less resource intensive and less time consuming than direct assessment methods (Noell & Gansle, 2014). Moreover, if permanent products are generated each time the intervention is implemented, it could be possible to collect treatment-integrity data for a greater proportion of sessions compared to direct assessment (Collier-Meek, Sanetti, & Fallon, 2021; Gresham et al., 2000). To compute integrity via permanent products, one sums the permanent products generated, divides by the total possible permanent products, and multiplies by 100 to obtain a percentage. For example, Gresham, Dart, and Collins (2017) assessed teachers' treatment integrity during application of the Good Behavior Game using permanent products. Three of the steps generated a product: recording the reinforcer available to winning teams, making tally marks following inappropriate behaviors, and recording group scores and the winning team.
Teachers used a record form during implementation, and researchers viewed the record form at the end of the day to determine whether steps were implemented correctly. If all three products were present on the record form, the session was implemented with 100% integrity; if only two of the products were present, the session was implemented with 67% integrity. See Fig. 6 for an example data sheet of a teacher's implementation of the Good Behavior Game. Despite these potential advantages, there are limitations to permanent-product review. Some interventions may generate permanent products for all or nearly all of the steps, so permanent-product review may be an accurate representation of integrity. In contrast, some interventions may generate no or very few permanent products, so permanent-product review

Fig. 6  Example of a data sheet for measuring permanent products to assess treatment integrity. Note. Sample data sheet for monitoring treatment integrity by measuring permanent products generated by a teacher implementing the Good Behavior Game. Mark a tally when the permanent product is present during the observation. Divide the number of permanent products present by the total number of permanent products and convert to a percentage.

would be a poor representation of integrity (Sanetti & Collier-Meek, 2014). For example, delivery of a backup reinforcer in a token economy will generate a permanent product if the implementer or client writes down which item or activity was selected, but delivery of the tokens before the exchange does not generate a permanent product if the tokens are removed. Another limitation is that permanent-product review does not guarantee that the implementer was the person who provided the intervention or that the intervention was implemented at the correct time. Depending on the components of an intervention, it is possible that permanent-product review has limited applicability in treatment-integrity research. Permanent-product review could supplement direct observation if the two measures correlate highly (Gresham et al., 2017), but there are data to suggest that permanent products overestimate treatment integrity (Collier-Meek, Sanetti, & Fallon, 2021). More research should be conducted on whether permanent-product review generates valid and reliable treatment-integrity estimates and for which interventions.
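The Good Behavior Game example above reduces to a simple proportion of expected products found on the record form. A sketch with hypothetical product names (the three products follow the steps described in the example, but the labels are illustrative):

```python
# Three steps of the Good Behavior Game generate permanent products on the
# teacher's record form (labels here are illustrative, not from the study).
EXPECTED_PRODUCTS = ("reinforcer_recorded", "tallies_recorded", "scores_recorded")

def permanent_product_integrity(products_present):
    """Percentage of expected permanent products found on the record form."""
    found = sum(p in products_present for p in EXPECTED_PRODUCTS)
    return round(100 * found / len(EXPECTED_PRODUCTS))

# Two of three expected products present -> 67, matching the example above.
print(permanent_product_integrity({"reinforcer_recorded", "scores_recorded"}))
```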

Interpreting treatment integrity

Researchers and practitioners may select measures of treatment integrity based on the intervention, the level of analysis they seek to achieve, and the available resources to collect and analyze treatment-integrity data. Multiple measures of treatment integrity may be collected due to differences in the outcomes generated by measurement methods (Table 1). However, no matter the measurement method used in research and practice, there are no agreed-upon standards for what constitutes adequate levels of treatment integrity. This hinders the evaluation of the sufficiency of treatment-integrity data by researchers and practitioners and will likely continue to do so until standards are established (though, e.g., Ledford & Gast, 2018, recommend a minimum of 90% treatment integrity). Without guidance on acceptable levels of treatment integrity, researchers and practitioners may fail to modify an intervention in a timely manner or provide additional training to ensure that the intervention is implemented accurately and consistently. In addition, without treatment-integrity standards that can be applied to manuscripts, editorial board members may use individual standards to differentially recommend publication of manuscripts with varying levels of reported treatment integrity, which can cause inconsistent application of publication practices across manuscripts and journals. Establishing minimum standards for the sufficiency of treatment integrity may be difficult due to differences in the effects of treatment-integrity errors on outcomes in research. Research on skill acquisition shows that relatively small decrements in integrity can hinder or prevent learning (e.g., Bergmann et al., 2021), whereas research on the treatment of problem behavior shows that larger decrements in integrity can occur before the efficacy of intervention is reduced (e.g., St. Peter Pipkin et al., 2010).
The type of integrity error (errors of omission versus commission) may also produce differential outcomes (e.g., Bergmann et al., 2017; St. Peter Pipkin et al., 2010). In the absence of minimum standards for treatment integrity, researchers and practitioners should report the treatment-integrity measurement and calculation methods, the proportion of the intervention with treatment-integrity data, treatment-integrity estimates, and the types of errors that occurred. These data, in combination with data on the DVs, will help establish standards for monitoring and reporting treatment integrity. Although treatment-integrity errors may have differential effects on targeted behavior, determinations of the sufficiency of treatment integrity for behavior-analytic interventions
will not occur until more researchers and practitioners collect and report treatment-integrity data in research and practice.

Reporting treatment integrity

Treatment integrity, which can be collected via the multiple methods described above, is crucial to internal and external validity in research and practice and guides intervention planning, training, and data-based decisions. Yet there are few requirements for collecting and reporting treatment-integrity data in research and practice. A double standard exists in collecting and reporting treatment-integrity data relative to interobserver-agreement data in behavior-analytic research (McIntyre, Gresham, DiGennaro, & Reed, 2007; Peterson et al., 1982). Interobserver agreement (IOA; i.e., reliability) measures provide evidence of the believability of data because they indicate the extent to which two independent observers consistently scored the same behavior in a designated time period using the same operational definition of a behavior. The field has established a mean of no less than 80% agreement between observers as a minimum acceptable value in studies (Kazdin, 2011), although a mean of 90% or greater is preferred (Miltenberger, 2016). Further, established standards for calculating IOA indicate that a minimum of 20% of sessions should be scored, although calculating IOA for 25% to 33% of sessions is preferred (Ledford & Gast, 2009). Nevertheless, there are currently no established standards for the collection and analysis of treatment-integrity data, which is problematic for several reasons. Failure to report treatment-integrity data in research prevents a carefully controlled analysis of the effects of an IV on DVs. The absence of treatment-integrity data limits the conclusions that can be drawn from efficacy and effectiveness studies; specifically, it should temper conclusions that the IV was responsible for changes in the DV.
Further, when multiple studies that evaluate the efficacy or effectiveness of an IV do not include treatment-integrity data, comparisons between the studies should be approached cautiously because there are no data demonstrating the IV was consistently and accurately implemented. Only when treatment-integrity data are reported across studies with varying levels of treatment efficacy can researchers analyze other variables that may contribute to differences in outcomes (e.g., participant population, frequency of intervention).

Minimum standards for reporting treatment integrity could be established for published research by scientific groups and journals. The Quality Indicators for Group Experimental and Quasi-Experimental Research in Special Education (Gersten et al., 2005) encourage reporting of treatment-integrity data but do not include guidelines on the proportion of intervention sessions that should be included in treatment-integrity calculations. Journals could require authors to report treatment-integrity data for 20% to 33% of sessions across conditions, similar to the standards established for reporting IOA data in behavioral research (Ledford & Gast, 2009). Journals also could require authors to report IOA data on measures of treatment integrity. Establishing reporting standards would increase confidence in interpretations of the effects of treatment integrity on demonstrations of functional relationships between IVs and DVs in studies. In addition, the adoption of reporting standards might increase the likelihood that treatment-integrity data are collected in behavior-analytic practice, in related fields (e.g., speech therapy, occupational therapy), and with participant populations for whom treatment-integrity data may be nearly nonexistent (e.g., owners implementing treatment with companion animals). Future research could identify barriers to collecting primary and reliability treatment-integrity data, provide suggestions to reduce such barriers, and promote data collection on treatment integrity.

Responding to integrity errors in practice

Because the field has not agreed upon a standard for adequate treatment integrity (in contrast to the generally agreed-upon minimum standard of 80% for reliability data; Kazdin, 2011), practitioners must develop their own criteria for evaluating treatment-integrity data. A practitioner needs to determine criteria for sufficient treatment integrity when training staff and stakeholders to implement behavior-analytic interventions and when determining whether performance necessitates re-training. Research on staff and caregiver training typically includes stringent mastery criteria for initial training. For example, Madzharova, Sturmey, and Helen Yoo (2018) taught classroom staff members to implement behavior-intervention plans using modeling and feedback. Classroom staff were considered trained when they conducted components of the behavior-intervention plan with at least 90% integrity across three consecutive implementations. Gerencser, Higbee, Contreras, Pellegrino, and Gunn (2018) also used a criterion of 90% correct across two consecutive sessions when teaching paraprofessionals to implement
errorless DTI with students with developmental disabilities. These authors also included a re-training criterion whereby two consecutive sessions with integrity below 80% resulted in re-training with progressively more intensive feedback. Re-training methods will likely match those provided in initial training. Behavior skills training (BST) is a common method for teaching staff and caregivers to implement components of intervention with high integrity. The four components of BST are instructions, modeling, rehearsal, and feedback (Vladescu & Marano, 2022), although some research shows that training with only a portion of these components (e.g., instructions and video modeling only) leads to high treatment integrity during training (e.g., Delli Bovi, Vladescu, DeBar, Carroll, & Sarokoff, 2017). Following the completion of initial training, practitioners should collect and analyze treatment-integrity data to identify whether re-training is necessary and, if so, which components must be re-trained. Once intervention is underway, performance feedback can be provided when integrity errors occur. Performance feedback could include brief praise for components that are implemented correctly and corrective feedback on intervention components that are not implemented as described in the intervention protocol. For example, if the practitioner fails to deliver a reinforcer following correct responses during a skill-acquisition program, performance feedback could include a vocal reminder to deliver reinforcers following each correct response. If integrity errors are not resolved following performance feedback, establishing a re-training criterion can be beneficial to prevent staff and stakeholders from making consistent integrity errors that can reduce the efficacy of intervention.
Although additional research is needed to guide the selection of criteria for re-training on behavior-reduction and skill-acquisition programs, relatively stringent criteria can be used by practitioners to prevent barriers to client progress. However, the specific criteria established for re-training will likely be based on the expected duration of re-training, the availability of supervisory staff to collect integrity data, and the type of measurement system used to collect integrity data. If practitioners are collecting direct-assessment data using global ratings of performance (Table 2), re-training could occur if integrity is below 85% for two consecutive observations. We recommend using 85% as a re-training criterion for skill acquisition based on the parametric analyses conducted by Bergmann et al. (2021), which show that at least 85% integrity in reinforcement delivery is needed to produce skill acquisition for at least 80% of individuals. Re-training based on a global
rating of integrity will likely lead to repeated practice and feedback on all components of intervention until a re-training mastery criterion is achieved (e.g., two consecutive intervention sessions implemented with at least 90% integrity). If a practitioner collects data on integrity across all components of intervention (Fig. 1), re-training on specific components of intervention could occur if integrity falls below 85% for two consecutive observations. Re-training on the specific components associated with integrity below 85% (e.g., securing attending, implementing error correction, and ignoring/blocking problem behavior; Fig. 1) might be less time consuming than conducting re-training on all components of the intervention. Practitioners who use checklists or rating scales to measure the integrity of intervention in practice may need to identify re-training criteria that align with the type of measure and the goal of data collection. For example, use of a rating scale similar to Fig. 5 could result in re-training for all intervention components if one or more components receive a rating worse than 2 (i.e., sometimes, rarely, or never) for two consecutive observations (a stringent re-training criterion) or involve re-training of only the specific components rated worse than 2. Criteria for re-training on components of behavior-reduction programs may differ from those for skill-acquisition programs based on research showing the robust effects of behavior-reduction procedures implemented with lower levels of integrity. For example, research on behavior-reduction procedures shows that integrity levels near 50% to 60% may be sufficient to reduce problem behavior and maintain levels of appropriate behavior (Colón & Ahearn, 2019; Foreman et al., 2021; St. Peter Pipkin et al., 2010).
Nevertheless, these studies evaluated integrity errors in a single intervention (e.g., DRA) rather than an intervention package or behavior intervention plan comprising numerous interventions and components. In addition, researchers typically initiated intervention at 100% integrity prior to manipulating integrity errors. It is possible that intervention implemented with high integrity had protective effects on behavior subsequently exposed to intervention with integrity errors. As such, we recommend that practitioners use a more stringent re-training criterion than the percentages shown to maintain behavior reduction in research; practitioners could conduct re-training on behavior-reduction interventions and components if integrity falls below 75% (Vollmer et al., 2008).
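For practitioners who track integrity scores electronically, the decision rules above reduce to a simple check over the most recent observations. The sketch below is illustrative only; the function name, data format, and two-observation window are assumptions, while the 85% and 75% thresholds reflect the criteria discussed above.

```python
# Hypothetical sketch of the re-training decision rules discussed above.
# Thresholds follow the text: 85% for skill-acquisition programs (per
# Bergmann et al., 2021) and a more stringent 75% for behavior reduction.
RETRAINING_THRESHOLDS = {
    "skill_acquisition": 85.0,
    "behavior_reduction": 75.0,
}

def needs_retraining(integrity_scores, program_type, consecutive=2):
    """Return True if the last `consecutive` integrity observations
    (percentages) all fall below the threshold for this program type."""
    threshold = RETRAINING_THRESHOLDS[program_type]
    if len(integrity_scores) < consecutive:
        return False  # not enough observations yet to apply the rule
    recent = integrity_scores[-consecutive:]
    return all(score < threshold for score in recent)

# Example: two consecutive observations below 85% trigger re-training
# for a skill-acquisition program but not a behavior-reduction program.
print(needs_retraining([92.0, 84.0, 80.0], "skill_acquisition"))  # True
print(needs_retraining([92.0, 84.0, 80.0], "behavior_reduction"))  # False
```

In practice, a check like this would supplement, not replace, clinical judgment and the measurement considerations described earlier (e.g., global vs. component-level integrity data).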



Conclusions

Treatment integrity is a critical component of any study or practice in behavior analysis. Research shows that treatment-integrity errors reduce the efficacy and efficiency of interventions, although the detrimental effects of integrity errors differ based on the type of error (i.e., omission, commission, or combined), the number of errors, and whether intervention targets behavior reduction or skill acquisition. Given the effects of treatment-integrity errors on intervention outcomes, practitioners must develop and use measurement systems to collect data on treatment integrity in practice. Many methods of measurement exist for use in practice, and additional research is needed to identify accurate and feasible methods to guide practitioners' decisions. Treatment-integrity data can be used to determine when to provide performance feedback and re-train implementers to improve treatment integrity. Collecting data on treatment integrity in research and practice will permit more frequent reporting of these data in manuscripts submitted to journals and in treatment summaries submitted to funding sources for behavior-analysis services. The identification of minimum standards for treatment integrity will help establish guidelines that practitioners can use to interpret intervention outcomes. However, research on minimum levels and critical components of treatment integrity is still in its infancy and requires further investigation to support the development of practice standards.

References

Bergmann, S., Kodak, T., & Harman, M. J. (2021). When do errors in reinforcer delivery affect learning? A parametric analysis of treatment integrity. Journal of the Experimental Analysis of Behavior, 115(2), 561–577. https://doi.org/10.1002/jeab.670
Bergmann, S., Kodak, T., & LeBlanc, B. (2017). Effects of programmed errors of omission and commission during auditory-visual conditional discrimination training with typically developing children. Psychological Record, 67(1), 109–119. https://doi.org/10.1007/s40732-016-0211-2
Brand, D., Henley, A. J., DiGennaro Reed, F. D., Gray, E., & Crabbs, B. (2019). A review of published studies involving parametric manipulations of treatment integrity. Journal of Behavioral Education, 28, 1–26. https://doi.org/10.1007/s10864-018-09311-8
Breeman, S., Vladescu, J. C., DeBar, R. M., Grow, L. L., & Marano, K. E. (2020). The effects of procedural integrity errors during auditory–visual conditional discrimination training: A preliminary investigation. Behavioral Interventions, 35(2), 203–216. https://doi.org/10.1002/bin.1710
Carroll, R. A., Kodak, T., & Adolf, K. J. (2016). Effect of delayed reinforcement on skill acquisition during discrete-trial instruction: Implications for treatment-integrity errors in academic settings. Journal of Applied Behavior Analysis, 49(1), 176–181. https://doi.org/10.1002/jaba.268
Carroll, R. A., Kodak, T., & Fisher, W. W. (2013). An examination of treatment integrity errors on skill acquisition during discrete trial instruction. Journal of Applied Behavior Analysis, 46(2), 379–394. https://doi.org/10.1002/jaba.49
Codding, R. S., Livanis, A., Pace, G. M., & Vaca, L. (2008). Using performance feedback to improve treatment integrity of classwide behavior plans: An investigation of observer reactivity. Journal of Applied Behavior Analysis, 41, 417–422. https://doi.org/10.1901/jaba.2008.41-417
Collier-Meek, M. A., Fallon, L. M., & Gould, K. (2018). How are treatment integrity data assessed? Reviewing the performance feedback literature. School Psychology Quarterly, 33, 517–526. https://doi.org/10.1037/spq0000239
Collier-Meek, M. A., Sanetti, L. M., & Fallon, L. (2021). Exploring the influences of assessment method, intervention steps, intervention sessions, and observation timing on treatment fidelity estimates. Assessment for Effective Intervention, 46(1), 3–13. https://doi.org/10.1177/1534508419857228
Collier-Meek, M. A., Sanetti, L. M., Gould, K., & Pereira, B. (2021). An exploratory comparison of three treatment fidelity assessment methods: Time sampling, event recording, and post-observation checklist. Journal of Educational and Psychological Consultation, 31(3), 334–359. https://doi.org/10.1080/10474412.2020.1777874
Colón, C. L., & Ahearn, W. H. (2019). An analysis of treatment integrity of response interruption and redirection. Journal of Applied Behavior Analysis, 52(2), 337–354. https://doi.org/10.1002/jaba.537
Cook, J. E., Subramaniam, S., Brunson, L. Y., Larson, N. A., Poe, S. G., & St. Peter, C. C. (2015). Global measures of treatment integrity may mask important errors in discrete-trial training. Behavior Analysis in Practice, 8(1), 37–47. https://doi.org/10.1007/s40617-014-0039-7
Delli Bovi, G. M., Vladescu, J. C., DeBar, R. M., Carroll, R. A., & Sarokoff, R. A. (2017). Using video modeling with voice-over instruction to train public school staff to implement a preference assessment. Behavior Analysis in Practice, 10(1), 72–76. https://doi.org/10.1007/s40617-016-0135-y
DiGennaro Reed, F. D., Reed, D. D., Baez, C. N., & Maguire, H. (2011). A parametric analysis of errors of commission during discrete-trial training. Journal of Applied Behavior Analysis, 44(3), 611–615. https://doi.org/10.1901/jaba.2011.44-611
Falakfarsa, G., Brand, D., Jones, L., Godinez, E. S., Richardson, D. C., Hanson, R. L., et al. (2021). Treatment integrity reporting in Behavior Analysis in Practice 2008–2019. Behavior Analysis in Practice. https://doi.org/10.1007/s40617-021-00573-9
Foreman, A. P., St. Peter, C. C., Mesches, G. A., Robinson, N., & Romano, L. M. (2021). Treatment integrity failures during timeout from play. Behavior Modification, 45(6), 988–1010. https://doi.org/10.1177/0145445520935392
Gerencser, K. R., Higbee, T. S., Contreras, B. P., Pellegrino, A. J., & Gunn, S. L. (2018). Evaluation of interactive computerized training to teach paraprofessionals to implement errorless discrete trial instruction. Journal of Behavioral Education, 27(4), 461–487. https://doi.org/10.1007/s10864-018-9308-9
Gersten, R., Fuchs, L. S., Compton, D., Coyne, M., Greenwood, C., & Innocenti, M. S. (2005). Quality indicators for group experimental and quasi-experimental research in special education. Exceptional Children, 71(2), 149–164. https://doi.org/10.1177/001440290507100202
Gresham, F. M. (2014). Measuring and analyzing treatment integrity data in research. In L. M. H. Sanetti & T. R. Kratochwill (Eds.), Treatment integrity: Conceptual, methodological, and applied considerations for practitioners (pp. 109–130). Washington, DC: American Psychological Association. https://doi.org/10.1037/14275-007
Gresham, F. M., Dart, E. H., & Collins, T. A. (2017). Generalizability of multiple measures of treatment integrity: Comparisons among direct observation, permanent products, and self-report. School Psychology Review, 46, 108–121. https://doi.org/10.17105/SPR46-1.108-121
Gresham, F. M., MacMillan, D. L., Beebe-Frankenberger, M. E., & Bocian, K. M. (2000). Treatment integrity in learning disabilities intervention research: Do we really know how treatments are implemented? Learning Disabilities Research & Practice, 15(4), 198–205. https://doi.org/10.1207/SLDRP1504_4
Johnston, J., & Pennypacker, H. (1993). Strategies and tactics of human behavioral research (2nd ed.). Lawrence Erlbaum Associates, Inc.
Kazdin, A. E. (2011). Single-case research designs: Methods for clinical and applied settings (2nd ed.). Oxford University Press.
Kodak, T., Cariveau, T., LeBlanc, B., Mahon, J., & Carroll, R. A. (2018). Selection and implementation of skill acquisition programs by special education teachers and staff for students with autism spectrum disorder. Behavior Modification, 42(1), 58–83. https://doi.org/10.1177/0145445517692081
Ledford, J. R., & Gast, D. L. (2009). Single subject research methodology in behavioral sciences (1st ed.). Routledge. https://doi.org/10.4324/9780203877937
Ledford, J. R., & Gast, D. L. (2018). Single case research methodology: Application in special education and behavioral sciences (3rd ed.). Routledge.
Madzharova, M. S., Sturmey, P., & Helen Yoo, J. (2018). Using in-vivo modeling and feedback to teach classroom staff to implement a complex behavior intervention plan. Journal of Developmental and Physical Disabilities, 30(3), 329–337. https://doi.org/10.1007/s10882-018-9588-y
McIntyre, L. L., Gresham, F. M., DiGennaro, F. D., & Reed, D. D. (2007). Treatment integrity of school-based interventions with children in the Journal of Applied Behavior Analysis 1991–2005. Journal of Applied Behavior Analysis, 40(4), 659–672. https://doi.org/10.1901/jaba.2007.659-672
Miltenberger, R. G. (2016). Behavior modification: Principles and procedures (6th ed.). Cengage Learning.
Noell, G. H. (2008). Appraising and praising systematic work to support systems change: Where we might be and where we might go. School Psychology Review, 37(3), 333–336.
Noell, G. H., & Gansle, K. A. (2014). The use of performance feedback to improve intervention implementation in schools. In L. M. Hagermoser Sanetti & T. R. Kratochwill (Eds.), Treatment integrity: A foundation for evidence-based practice in applied psychology (pp. 161–183). American Psychological Association. https://doi.org/10.1037/14275-009
Noell, G. H., Witt, J. C., Gilbertson, D. N., Ranier, D. D., & Freeland, J. T. (1997). Increasing teacher intervention implementation in general education settings through consultation and performance feedback. School Psychology Quarterly, 12(1), 77–88. https://doi.org/10.1037/h0088949
Peterson, L., Homer, A., & Wonderlich, S. (1982). The integrity of independent variables in behavior analysis. Journal of Applied Behavior Analysis, 15(4), 477–492.
Sanetti, L. M. H., & Collier-Meek, M. A. (2014). Increasing the rigor of treatment integrity assessment: A comparison of direct observation and permanent product methods. Journal of Behavioral Education, 23, 60–88. https://doi.org/10.1007/s10864-013-9179-z
Sanetti, L. M. H., Luiselli, J. K., & Handler, M. W. (2007). Effects of verbal and graphic performance feedback on behavior support plan implementation in a public elementary school. Behavior Modification, 31(4), 454–465. https://doi.org/10.1177/0145445506297583
St. Peter Pipkin, C. C., Vollmer, T. R., & Sloman, K. N. (2010). Effects of treatment integrity failures during differential reinforcement of alternative behavior: A translational model. Journal of Applied Behavior Analysis, 43(1), 47–70. https://doi.org/10.1901/jaba.2010.43-47
Suhrheinrich, J., Dickson, K. S., Chan, N., Chan, J. C., Wang, T., & Stahmer, A. C. (2020). Fidelity assessment in community programs: An approach to validating simplified methodology. Behavior Analysis in Practice, 13, 29–39. https://doi.org/10.1007/s40617-019-00337-6
Vladescu, J. C., & Marano, K. E. (2022). Behavioral skills training. In J. K. Luiselli, R. M. Gardner, F. L. Bird, & H. Maguire (Eds.), Organizational behavior management approaches for intellectual and developmental disabilities (pp. 69–99). Routledge/Taylor & Francis Group. https://doi.org/10.4324/9780429324840-8
Vollmer, T. R., Sloman, K. N., & St. Peter Pipkin, C. C. (2008). Practical implications of data reliability and treatment integrity monitoring. Behavior Analysis in Practice, 1(2), 4–11. https://doi.org/10.1007/BF03391722
Witt, J. C., Noell, G. H., LaFleur, L. H., & Mortenson, B. P. (1997). Teacher use of interventions in general education settings: Measurement and analysis of the independent variable. Journal of Applied Behavior Analysis, 30(4), 693–696. https://doi.org/10.1901/jaba.1997.30-693

Further reading

Journal of Applied Behavior Analysis. (2021). Author guidelines. Retrieved from https://onlinelibrary.wiley.com/page/journal/19383703/homepage/forauthors.html (Accessed 2 November 2021).

CHAPTER 3

Functional analysis: Contemporary methods and applications

John Michael Falligant (a,b), Brianna Laureano (a,b), Emily Chesbrough (a), and Samantha Hardesty (a,b)

a Kennedy Krieger Institute, Baltimore, MD, United States
b Johns Hopkins University School of Medicine, Baltimore, MD, United States

Problem behavior (e.g., aggression, self-injury, disruptive behavior) is a heterogeneous phenomenon with respect to its onset, presentation, and causal/maintaining variables. Problem behavior can emerge at any time in the developmental period and, in some cases, into adulthood. Individuals with neurodevelopmental disabilities including autism spectrum disorder (ASD), intellectual and developmental disabilities (IDD), or other co-occurring psychiatric, medical, or neurodevelopmental conditions (e.g., Danforth, 2016) are at increased risk for engaging in problem behavior (e.g., aggression, property destruction, self-injury; Strand, Vister, Eldevik, & Eikeseth, 2021). Individuals referred for behavioral assessment and treatment services may present with a wide range of challenging behaviors that differ along the dimensions of topography, frequency, and severity. That is, individuals may present with a single form of problem behavior (e.g., aggression) or multiple forms (e.g., aggression, tantrums, elopement). These behaviors may occur episodically, for example, only once per day or week, or up to hundreds or thousands of times in a day. When problem behavior occurs regularly and/or with high intensity, it is likely to produce injuries to self and others, greatly impair functioning, and lead to placement in less-inclusive settings. The consequences of these behaviors may range from relatively minor to highly significant and even life-threatening (Kalb et al., 2016). Problem behavior in these populations is thought to be the product of interactions between deficits related to the primary disability, such as limited communication repertoires and adaptive skills, affective dysregulation, and diminished inhibitory control, and learning experiences that reinforce and
establish problem behavior over time. In other words, deficits in functioning and adaptive behavior may increase the frequency and intensity of frustrative experiences (e.g., not being able to adequately express wants and needs) that set the stage for the occurrence of problem behavior (Kurtz, Leoni, & Hagopian, 2020). As the occurrence of problem behavior is generally concerning, socially unacceptable, and often disruptive or dangerous, caregivers and other individuals within the individual's environment often react to problem behavior in an attempt to interrupt it or prevent it from occurring in the future. Common socially mediated reactions include but are not limited to redirecting the individual to another activity, providing attention or consolation, removing the individual from the event thought to occasion the behavior, or providing access to items or activities as a means of redirecting their behavior to some alternative, incompatible activity. Although well-intended, these reactions also have the potential to inadvertently reinforce the behavior via operant conditioning processes, thereby increasing the probability of problem behavior in the future and creating an interlocking transactional contingency for problem behavior among the individual, caregivers, and other people in their environment. As problem behavior has multiple dimensions, its assessment and management must consider a variety of factors including caregiver-child interactions that occasion and reinforce the behavior, other skills deficits, developmental variables, the environmental context, psychiatric comorbidities, familial variables, and the broader socio-cultural context in which the child lives.
Multidimensional assessment of these factors has been described extensively in the literature (e.g., McGuire et al., 2016), so the present discussion will center on functional behavioral assessments, which are broadly designed to identify the environmental variables that occasion and reinforce problem behavior (i.e., the operant reinforcing function of problem behavior). Although interviews, behavioral observations, and rating scales (e.g., Aberrant Behavior Checklist; Aman, Singh, Stewart, & Field, 1985) may have some utility for gathering background information regarding the antecedent and consequent events surrounding problem behavior and characterizing the type and severity of problem behavior, they are not designed to directly identify the events that occasion and reinforce the behavior. Rather than rely on the report or accounts of others (which can be informative but also unreliable; e.g., Pence, Roscoe, Bourret, & Ahearn, 2009), it is better to experimentally identify the different types of consequences that serve to strengthen and maintain problem behavior, because this yields information about the function of the problem behavior. That is, it
allows clinicians to understand why problem behavior is occurring to better inform the development of function-based behavioral interventions.

Functional analysis

Functional analysis (FA) refers both to the process of identifying the controlling variables of problem behavior and to a methodology for experimentally examining these variables in a systematic manner. FAs are the most valid and scientifically rigorous method of assessment (Beavers, Iwata, & Lerman, 2013; Didden, Korzilius, van Oorsouw, & Sturmey, 2006) and have become the nonpareil assessment procedure for identifying the maintaining variable(s) of problem behavior (Beavers et al., 2013; Hanley, Iwata, & McCord, 2003; Iwata & Dozier, 2008). Broadly, the FA is a controlled assessment in which environmental conditions are systematically manipulated to determine their relation to problem behavior (Iwata, Dorsey, Slifer, Bauman, & Richman, 1994; Iwata & Dozier, 2008). In other words, FAs involve examining how problem behavior changes as a function of specific antecedent conditions and consequences for problem behavior arranged by a trained professional. Thus, this methodology involves creating different analog contexts designed to simulate situations that may occasion and reinforce an individual's problem behavior in the real world. For example, if a child's problem behavior is hypothesized to be maintained by attention from a caregiver, the "test condition" involves arranging a situation in which the caregiver initially provides minimal attention to the child (e.g., she pretends to be busy or occupied with some task) but provides a form of attention (e.g., telling the child to stop being disruptive, physically touching the individual) contingent on each instance of the problem behavior. Similarly, if a child's problem behavior is thought to be maintained by termination of some aversive task or demand, the test condition may involve the clinician delivering academic demands using a three-step prompting procedure (i.e., verbal, gestural, full physical prompt) and waiting 3 to 5 s between each prompt.
Contingent on problem behavior following the presentation of an academic demand, the clinician provides a brief break by removing the academic materials for a period of time (e.g., 30 s). After 30 s elapses, the clinician issues another demand and the session continues. To determine whether problem behavior is maintained by sensory reinforcement (i.e., whether problem behavior persists in the absence of social consequences), a no-interaction test condition may be conducted in which the child does not have access to toys or any other materials to engage with and the experimenter leaves
the room or remains in the room but does not interact with the child even if problem behavior occurs. In the control condition, which serves as a basis for comparison, the caregiver interacts with the child continuously and does not provide any attention contingent upon the occurrence of problem behavior. The difference in the amount or degree of problem behavior occurring in the test and control conditions determines whether or not the specific function is indicated. There are many variations of test conditions that have been described in the extant literature (e.g., Beavers et al., 2013; Hanley, 2012; Iwata & Dozier, 2008), but the above illustrate the general premise underlying traditional FA methodology.
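The comparative logic described above (responding in each test condition vs. the control condition) can be sketched as a simple summary computation. This is a hypothetical illustration rather than a clinical decision tool; the function name, data format, and the twofold-difference criterion are assumptions, as differentiated responding is typically judged through structured visual inspection of session-by-session data rather than a fixed numeric rule.

```python
# Illustrative sketch (not a clinical standard): flag an FA test condition
# as "indicated" when its mean response rate meaningfully exceeds the
# control condition's mean. The 2x criterion and 0.1 floor are arbitrary
# placeholders chosen only for this example.
from statistics import mean

def indicated_functions(test_rates, control_rates, ratio=2.0):
    """test_rates: dict mapping condition name -> list of per-session
    response rates; control_rates: list of control-condition rates."""
    control_mean = mean(control_rates)
    indicated = []
    for condition, rates in test_rates.items():
        # Floor the control mean to avoid division-like blowups at zero.
        if mean(rates) > ratio * max(control_mean, 0.1):
            indicated.append(condition)
    return indicated

rates = {
    "attention": [4.0, 5.5, 6.0],
    "demand": [0.5, 0.0, 0.5],
    "no_interaction": [0.0, 0.5, 0.0],
}
print(indicated_functions(rates, control_rates=[0.5, 0.0, 0.5]))
# ['attention'] under these illustrative numbers
```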

Challenges, safeguards, and modifications

To reiterate, FA procedures represent the gold standard of behavioral assessment in the clinical practice of applied behavior analysis, as they are highly controlled and yield objective data based on the frequency or duration of problem behavior. However, because this assessment method can require considerable time, trained staff, and other resources, it may only be available in specialized clinical and school settings. Yet, there are several modifications to the FA that make it more practical and feasible for implementation in community settings by individuals in the community under the supervision of an appropriate behavioral healthcare provider (i.e., a Board Certified Behavior Analyst [BCBA] or licensed clinical psychologist with appropriate training). Several of these FA variations are outlined below, but readers are encouraged to review work by Iwata and Dozier (2008) or others (e.g., Lydon, Healy, O'Reilly, & Lang, 2012) for a more detailed overview of possible FA modifications. There are many circumstances in which a standard functional analysis would not be feasible or relevant. One challenge to standard FA methodology is the probability or management of high-risk or unsafe behavior. Roscoe, Phillips, Kelly, Farber, and Dube (2015) found that 11.7% of BCBAs who responded to a survey did not conduct FAs because they felt they were not ethically appropriate or safe. There are, of course, circumstances in which that may be true. Often this concern is raised because an FA is designed to exacerbate or increase problem behavior to better understand the circumstances that give rise to and maintain problem behavior in the individual's natural environment. What cannot be overlooked here, though, is the fact that an FA is only conducted for individuals who have been referred because problem behavior is occurring in the natural environment to an extent that warrants intervention.
Therefore, individuals who display high-risk behavior in the FA, an analog environment, are also engaging in high-risk behavior outside of the FA in their daily natural settings. Notably, Kahng et al. (2015) compared the rates and severity of injuries during versus outside of an FA for individuals admitted to an inpatient hospital unit for severe problem behavior and found that the total number of injuries reported outside the FA was higher than during the FA. In addition, when injuries did occur, their severity was similar outside of and during the FA. Therefore, for individuals being treated for severe problem behavior, the probability of injury did not increase during the FA, and when injuries did occur, they were no more severe than those observed outside the FA. Thus, when treating individuals with severe problem behavior, it is worth considering the benefits that come with conducting an FA. First, although the FA is designed to increase problem behavior within very specific observational windows, it is more accurate than indirect and direct assessments (e.g., Paclawskyj, Matson, Rush, Smalls, & Vollmer, 2001; Pence et al., 2009; Piazza et al., 2003; Thompson & Iwata, 2007). It is therefore possible that relying on informant report or observations alone may lead to an incorrect functional hypothesis, which in turn may lead to the development of an ineffective treatment. Accordingly, there may be instances in which the benefits of conducting an FA outweigh the potential risks of doing so. When those circumstances arise, there are many ways clinicians can manage the risk that comes with conducting FAs. Iwata et al. (1994) described safeguards that can and should be in place for high-risk circumstances, including first conducting a medical exam to rule out biological factors or consulting a physician to determine whether the potential for injury poses too high a risk to the well-being of the individual.
In addition, a healthcare provider can establish a risk criterion for the patient that specifies the level of responding or degree of injury at which an FA session should be terminated. If a session is terminated because of safety concerns, then a medical professional should conduct a physical exam to approve any continuation. With these medical safeguards, along with frequent case reviews, it is possible to monitor and prevent injuries simply by terminating sessions. It is also possible to modify FA procedures to reduce the likelihood of injury. For example, if behavior is high risk, one can employ (a) protective procedures, (b) a precursor FA, (c) a latency-based FA, or (d) a trial-based FA (e.g., Bloom, Iwata, Fritz, Roscoe, & Carreau, 2011; Fritz, Iwata, Hammond, & Bloom, 2013; Le & Smith, 2002; Smith & Churchill, 2002; Thomason-Sassi,
Iwata, Neidert, & Roscoe, 2011). For example, Le and Smith (2002) compared FAs of self-injurious behavior (SIB) under two different arrangements: conditions similar to Iwata et al. (1994) and those same conditions plus protective equipment placed on the participant. They found that for some of the participants, the addition of protective equipment suppressed SIB completely during the FA. However, for one of the participants, responding continued to occur at a low rate. Although it may be more difficult to identify a functional relation while doing so, the use of protective equipment during an FA should be considered for high-risk behavior. Precursor FAs. Smith and Churchill (2002) compared FAs of SIB under conditions similar to Iwata et al. (1994), in which the contingency was in place for the target problem behavior, versus a precursor FA in which the same contingencies were placed on behaviors that reliably preceded the target behavior. They found that the precursor FA reliably identified the same functional relation identified with the standard FA without producing an increase in the target response. Specifically, the high-risk behavior occurred only at low rates throughout the precursor FA, suggesting that if a client reliably engages in precurrent behavior before high-risk behavior, a precursor FA may be a useful modification. Note that, absent an experimental analysis such as a precursor FA, it is not possible to identify a precurrent relation without at least conducting a background and conditional probability analysis for the antecedent behavior (e.g., Fritz et al., 2013). Doing so allows the clinician to make statements about the relative probability of the candidate precursor given the occurrence of severe problem behavior and other potential relations.
It may be inaccurate to assume antecedent behaviors are precursors without conducting the probability analyses described above, given the discordance within the literature regarding the true prevalence of precursors (estimates range from 6% to 10% of cases [Fahmie & Iwata, 2011] to as high as 88% of cases [Fritz et al., 2013]). Thus, unless a precurrent relation is confirmed experimentally, as these authors attempted to do, it is probably not safe to assume that less severe behaviors are precursors or belong to the same response class (Asmus, Franzese, Conroy, & Dozier, 2003) as more severe topographies of problem behavior. Latency-Based FAs. Another modification is a latency-based FA, in which the duration between the start of the trial and the first occurrence of the target behavior is recorded. A latency FA reduces risk by allowing the behavior to occur only a single time in a given session. Thomason-Sassi et al. (2011) compared the outcomes of a
standard FA, in which sessions were 10 min and conclusions were drawn from the rate of responding across the entire session, versus a latency FA, in which sessions were terminated after the first response and conclusions were drawn from the number of seconds that elapsed before the first response. They found correspondence for 33 of the 38 pairs of FA comparisons, demonstrating that a latency FA may be a safe and effective method for identifying functional relations for behaviors that are too high risk to be allowed to occur at a high rate. Latency-based FAs may also be useful for assessing behavior that is less dangerous but poses measurement difficulties because there are limited opportunities to engage in the behavior within an observation period (e.g., disrobing; Falligant, Carver, Zarcone, & Schmidt, 2021). Trial-Based FAs. Finally, it also may be beneficial to conduct a trial-based FA for high-risk behavior. A trial-based FA involves repeatedly creating opportunities to test FA conditions in the natural environment. For example, if a clinician is providing attention to a client in a clinic, they could begin withholding attention and, contingent upon the first instance of the target behavior, begin providing attention again. These trials, and trials from other conditions, are interspersed throughout the day based on the natural opportunities that arise. Conclusions are then drawn based on the proportion of trials in which target behaviors occurred in each condition. Bloom et al. (2011) demonstrated that trial-based FAs are effective at identifying the same conditions as standard FAs when a therapist or teacher implemented the trials. Trial-based FAs can be implemented in classroom settings or other environments with relative ease by teachers (Rispoli, Neely, Healy, & Gregori, 2016), paraprofessionals (Lloyd et al., 2015), and graduate students (Falligant, Pence, Nuhu, Bedell, & Luna, 2021).
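The trial-based summary measure, the proportion of trials per condition with the target behavior, is straightforward to compute. The sketch below is illustrative only; the function name, condition labels, and data format are assumptions for this example.

```python
# Hypothetical sketch: summarize trial-based FA data as the proportion of
# trials in each condition during which the target behavior occurred.
def trial_proportions(trials):
    """trials: list of (condition, behavior_occurred) tuples collected as
    natural opportunities arise across the day."""
    counts, occurrences = {}, {}
    for condition, occurred in trials:
        counts[condition] = counts.get(condition, 0) + 1
        occurrences[condition] = occurrences.get(condition, 0) + int(occurred)
    # Proportion of trials with the target behavior, per condition.
    return {c: occurrences[c] / counts[c] for c in counts}

trials = [
    ("attention", True), ("attention", True), ("attention", False),
    ("escape", False), ("escape", False),
    ("control", False), ("control", False),
]
print(trial_proportions(trials))
# attention is about 0.67; escape and control are 0.0 in this toy data
```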
Again, the trial-based FA may be easily embedded within ongoing classroom activities and may be viewed by school staff and other professionals as more naturalistic than traditional FAs conducted in analog settings (Rispoli et al., 2016). Abbreviated FAs. Lack of time is a barrier that clinicians sometimes report for not conducting FAs (Roscoe et al., 2015). There are multiple methods for shortening FAs, including the latency-based and trial-based FAs described above. In addition, clinicians can conduct a brief FA (e.g., Gerow et al., 2021; Kahng & Iwata, 1999; Northup et al., 1991), which shortens the duration of assessment by either decreasing the number of sessions (Gerow et al., 2021; Northup et al., 1991) or decreasing the length
of sessions (Kahng & Iwata, 1999). Clinicians may also use behavioral screeners to shorten FAs (e.g., Querim et al., 2013; Slanzi et al., 2022). For example, Querim et al. (2013) screened for problem behavior maintained by automatic reinforcement by conducting repeated no-interaction conditions, and Slanzi et al. (2022) screened for problem behavior maintained by socially mediated reinforcement by conducting a within-session analysis of extended no-interaction sessions.

Interview-Informed Synthesized Contingency Analysis. Hanley, Jin, Vanselow, and Hanratty (2014) introduced the interview-informed, synthesized contingency analysis (IISCA) in an attempt to improve the efficiency of FA procedures given the aforementioned perceived constraints surrounding the practicality of traditional (Iwata et al., 1994) methods. Note that this method departs considerably from other FA formats, both in its procedures and in the logic underlying the FA approach. Broadly, after conducting a caregiver interview and behavioral observations, the clinician synthesizes multiple reinforcement contingencies within a single test condition (e.g., problem behavior produces attention and escape and access to preferred stimuli/activities) and a control condition in which these same reinforcers are provided response-independently. This approach represents a departure from FAs with isolated contingencies, in which a single (a) establishing operation, (b) discriminative stimulus, and (c) consequence are manipulated within each test condition (Holehan et al., 2020). There may be some advantages to this method, as there could be situations in which isolating contingencies poses practical challenges or otherwise requires substantial time, effort, and resources (note, though, that the FA modifications listed above address these concerns).
However, this method does not allow the clinician to identify the specific controlling variables underlying problem behavior, which could result in more complex and resource-intensive interventions than would otherwise be necessary. For example, problem behavior maintained only by attention could be erroneously identified as being maintained by attention, escape, and access to tangible items, warranting three function-based treatments instead of one (e.g., Fisher, Greer, Romani, Zangrillo, & Owen, 2016; Holehan et al., 2020). This method could also create habilitative challenges, such as providing escape when it is not necessary and thereby reducing the instructional time an individual receives in the classroom (Holehan et al., 2020), and may have iatrogenic effects with respect to establishing new functional classes of problem behavior (Retzlaff, Fisher, Akers, & Greer, 2020).



Functional analysis: Contemporary methods and applications

71

Inconclusive outcomes

Decades of research support the utility of FAs, and several large-scale analyses and reviews have found the FA to be highly effective at identifying the function(s) of problem behavior (Iwata et al., 1994; Davis, Kahng, Schmidt, Bowman, & Boelter, 2012; Hagopian et al., 2013; Hanley, Iwata, & McCord, 2003; Kurtz et al., 2003; Mueller, Nkosi, & Hine, 2011). Collectively, these analyses converge on the conclusion that FA procedures identify the functions of problem behavior for 90% or more of the individuals assessed. This body of research also acknowledges that achieving such outcomes sometimes requires modifications to the standard procedures described in Iwata et al. (1994). These modifications may be necessary when results are inconclusive or ambiguous.

Inconclusive results can occur in different ways. FAs may yield inconclusive data when (a) high-rate behavior is observed across all conditions (including the control condition), (b) low-rate (or zero) behavior is observed across conditions, or (c) behavior is so variable across conditions that no clear pattern of results emerges. Ambiguous or undifferentiated results are problematic because they do not allow practitioners to draw conclusions about the variables controlling problem behavior and thereby impede identification of function-based treatments. Failure to achieve clear differentiation between test and control conditions may be due to many factors, such as a lack of appropriate discriminative stimuli, limited exposure to reinforcement contingencies, novelty associated with the analog context or clinicians, response-specific characteristics of the problem behavior (e.g., low-frequency behavior), or the FA design (e.g., multielement, pairwise, reversal).
These issues may be further complicated when working with individuals with multiply maintained problem behavior, behavior under the control of idiosyncratic variables, or behavior maintained by automatic reinforcement (see “Assessment of Automatically Maintained Self-Injurious Behavior” below).

Strategies for achieving differentiated results

There are several antecedent strategies that clinicians can employ in pursuit of differentiated FA results. First, at the onset of the FA, indirect assessments such as interviews and surveys may be used to acquire important information about the client and to generate hypotheses about causal relationships between the environment and sources of behavior
dysfunction. This information can then be used to guide individualized modifications to the antecedent and consequent events programmed in relevant FA conditions. Descriptive assessment can also be conducted through direct observation of the client and their behavior in the natural environment. By observing the behavior in the natural environment, clinicians can draw conclusions about potential maintaining variables and identify qualitative features of potential reinforcing events (e.g., the type and duration of attention delivered for problem behavior) to individualize FA conditions. Conducting preference assessments prior to the FA can also help ensure that appropriate stimuli are incorporated into the control, social attention, and tangible conditions. In addition, demand analyses can be conducted to determine whether variable responding is evident during academic contexts or activities of daily living and to identify low-probability tasks for use within the demand condition (e.g., Roscoe, Rooker, Pence, Longworth, & Zarcone, 2009).

Modifications to FA procedures typically involve changing programmed antecedents (e.g., environmental stimuli, task demands, therapist behavior; Kennedy & Souza, 1995; Kuhn, Hardesty, & Luczynski, 2009; Mueller et al., 2011), consequences (e.g., extinction to determine response class hierarchy, type of attention; Kodak, Northup, & Kelley, 2007; Richman, Wacker, Asmus, Casey, & Andelman, 1999), and procedural features such as session duration or experimental design. Common design changes involve moving from multielement to reversal or pairwise designs (Hagopian et al., 2013; Iwata et al., 1994; Vollmer, Iwata, Duncan, & Lerman, 1993). These modifications can be particularly useful when practitioners are concerned about discrimination or rapid alternation between multiple test conditions.
The reversal design allows repeated exposure to one condition with limited alternation between conditions (e.g., Falligant, Pence, Sullivan, & Luna, 2021), whereas the pairwise design removes extraneous variables by allowing one test condition to be isolated and directly compared with a control condition. Another procedural variation is extending the duration of FA sessions. Several studies have demonstrated the utility of this approach, extending the time that FA contingencies are in effect from 5 min to 10 min, 30 min, or even several hours (e.g., one session conducted per day; Kahng, Abt, & Schonbachler, 2001; Wallace & Iwata, 1999; Rolider, 2007).

Many sources of ambiguity can also be avoided by adhering to best-practice guidelines during the initial assessment. For example, Rooker, DeLeon, Borrero, Frank-Crawford, and Roscoe (2015) outline strategies for clinicians to avoid or manage ambiguous outcomes, including (a) ruling
out biological/medical events, (b) limiting presession access to reinforcers, (c) standardizing the reinforcement intervals across conditions, (d) using standardized or fixed sequences of FA conditions so that each condition arranges the establishing operation for the next (see Hammond, Iwata, Rooker, Fritz, & Bloom, 2013), (e) programming unique discriminative stimuli for each FA condition, (f) ensuring problem behavior has fully subsided before beginning each session, (g) incorporating task and leisure stimuli that are contextually appropriate, and (h) using low-preferred toys (or no toys) in the attention and alone conditions (Roscoe, Carreau, MacDonald, & Pence, 2008).

With respect to data analysis, it may be helpful to graph and analyze responding when an EO is present versus absent (Roane, Lerman, Kelley, & Van Camp, 1999). It may also be helpful to graph each topography of problem behavior separately. In at least half of published FAs, multiple forms of problem behavior (e.g., aggression, self-injury, and disruption) were assessed simultaneously, and this practice has increased over the past decade (Beavers et al., 2013; Hanley et al., 2003). In the test conditions of these simultaneous assessments, the programmed consequent event (e.g., brief physical and vocal attention in the attention condition) is delivered contingent upon the occurrence of any one of the targeted forms of behavior. For the present discussion, we refer to this type of FA, which simultaneously assesses the function of multiple problem behaviors, as a multiple-behavior FA (MBFA). The outcomes of an MBFA can be depicted with all forms of behavior aggregated (combined and displayed within a single data path) or with each form of behavior graphed separately. For example, Derby et al. (1994) conducted MBFAs for four individuals and compared outcomes using individual versus aggregated data-presentation methods.
Relative to the interpretations of the aggregated results, graphing the data separately for each form of behavior revealed additional functional relations for two individuals that were not identified in the aggregate FA. For example, the aggregate data analysis identified an automatic function, but the individual data analyses identified both an automatic and a social function. The authors concluded that reliance on aggregated data obscured these additional functions and that both aggregate and separate data paths should be used when interpreting FA results. Derby et al. (2000) subsequently replicated these procedures with 48 participants and found additional functions of specific forms of problem behavior in 25% of cases. Thus, findings from Derby et al. (1994, 2000) suggest that MBFA data should be examined both in the aggregate and separately for each target behavior.
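The aggregation problem can be made concrete: when one topography occurs at a high rate in every condition, summing topographies into a single data path can mask a second topography's social function. The session rates below (responses per minute) are invented for illustration and are not data from Derby et al. (1994, 2000):

```python
# Hypothetical per-session rates from an MBFA, keyed by condition,
# with one list of session rates per topography.
rates = {
    "alone":     {"sib": [8.0, 7.5, 8.2], "aggression": [0.0, 0.0, 0.1]},
    "attention": {"sib": [7.8, 8.1, 7.9], "aggression": [1.2, 1.0, 1.4]},
    "play":      {"sib": [7.9, 8.0, 7.6], "aggression": [0.0, 0.1, 0.0]},
}

def mean(xs):
    return sum(xs) / len(xs)

# Aggregate data path: sum topographies within each session, then average.
aggregate = {cond: mean([sum(pair) for pair in zip(*topos.values())])
             for cond, topos in rates.items()}

# Separate data paths: average each topography on its own.
separate = {cond: {topo: mean(vals) for topo, vals in topos.items()}
            for cond, topos in rates.items()}

# The aggregate path is high in every condition, and between-condition
# differences are small relative to its overall level (suggesting an
# automatic function only). The separate aggression path, by contrast,
# is elevated only in the attention condition -- an additional social
# function that the aggregate view obscures.
print(aggregate)
print(separate)
```

This mirrors the Derby et al. recommendation: inspect both the aggregate path and each topography's path before drawing conclusions.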


Although interpreting each form of behavior separately can reveal functions obscured in the aggregate display, it does not address all potential limitations of providing consequences for multiple forms of behavior simultaneously. MBFAs may produce inconclusive or false-negative findings when target behaviors are members of the same response class (e.g., Richman et al., 1999) or when one response occurs at a high frequency across FA conditions (e.g., if one response is maintained by automatic reinforcement; Asmus et al., 2003). In both scenarios, the relevant EO for some forms of behavior may be weakened, resulting in the occurrence of only one form of behavior and preventing identification of function for the remaining target behaviors. Thus, it may be advantageous to reinforce a single topography of problem behavior at a time rather than reinforcing multiple topographies during the FA.

As described above, inconclusive FAs may be characterized by high-rate behavior across all conditions, including the control condition, or by very low-rate (or zero) behavior across conditions. Although these response patterns could be the product of uncontrolled variation and procedural issues, such as poor discrimination between conditions, weak EOs, or an inappropriate FA design, they could also be indicative of automatically maintained problem behavior (e.g., SIB, stereotypy, pica). Below, we briefly highlight a few considerations pertaining to the functional assessment of automatically maintained problem behavior, emphasizing SIB due to its potential for bodily harm and pronounced interest to clinicians and researchers alike.

Assessment of Automatically Maintained Self-Injurious Behavior. The DSM-5 delineates SIB as a distinct psychiatric disorder (Stereotypic Movement Disorder with Self-Injurious Behavior) most commonly associated with individuals who have IDD.
Many forms of SIB, such as striking the head with the hands, head banging, and self-biting, impose direct and immediate risks to the individual, including concussions, lacerations, disfigurement, permanent sensory impairment (e.g., loss of vision), or even death. Additionally, SIB may result in placement in restrictive social, educational, and residential settings and/or the use of psychotropic medications and physical restraint. SIB is the product of interactions between biological and environmental variables. Reviews of the published literature on functional analysis (FA) of SIB suggest that, in most cases, SIB is maintained by social consequences, typically attention, escape from instructional demands, and access to preferred items. However, in approximately 25% of cases, SIB occurs independent of
social consequences; in these cases, SIB is automatically reinforced, as the products of the behavior itself maintain its occurrence through unspecified mechanisms (e.g., sensory stimulation; see Kurtz et al., 2020).

There are a number of distinct response patterns during FAs that characterize different subtypes of automatically reinforced self-injurious behavior (ASIB). Broadly, these response patterns refer to the level of differentiation of ASIB in alone/no-interaction conditions relative to control (toy play) conditions (Subtype 1 and Subtype 2) or the presence of self-restraint across conditions (Subtype 3). Whereas Subtype 1 ASIB occurs at relatively higher rates in the alone/no-interaction condition than in the control condition, Subtype 2 ASIB occurs at similar levels across both the alone/no-interaction condition and the control condition (Hagopian, Rooker, & Zarcone, 2015; Hagopian, Rooker, Zarcone, Bonner, & Arevalo, 2017). Subtype 3 ASIB is characterized by high levels of self-restraint (at least 25% of 10-s intervals) that preclude the occurrence of SIB but also limit the functional use of the hands. These subtypes have important treatment-related implications, as reinforcement-based procedures (e.g., noncontingent reinforcement) are significantly less effective for Subtype 2 ASIB than for Subtype 1 (Hagopian et al., 2015, 2017; Hagopian, Rooker, & Yenokyan, 2018). As described above, the heterogeneity of ASIB in terms of level of differentiation and the presence of self-restraint has provided the basis for a subtyping model of ASIB that has been replicated across a number of studies (e.g., Hagopian et al., 2018). Prior to identifying subtypes of ASIB, one must first determine whether SIB is automatically maintained using the established structured criteria described by Roane, Fisher, Kelley, Mevers, and Bouxsein (2013) or comparable methods (Hagopian et al., 2015).
Broadly, this involves calculating the mean and standard deviation (SD) of self-injury in the play condition and drawing upper and lower criterion lines at 1 SD above and below the mean. If the number of no-interaction data points above the upper criterion line, relative to the number below the lower criterion line, exceeds half of the data points, an automatic function is indicated. The second step determines the specific subtype. Subtype 3 is identified if self-restraint occurs in at least 25% of 10-s intervals for at least three series of no-interaction sessions. Subtype 1 or 2 is identified based on the degree to which self-injury is differentiated across the play and no-interaction conditions. This is determined by applying structured visual-analysis criteria (Hagopian et al., 1997; Roane et al., 2013), leading to the calculation of a quotient score based on
the proportion of data points above and below the criterion lines. Note that additional criteria, which take into account aberrant data points or additional data from extended no-interaction series (see Hagopian et al., 2015), should also be considered. Although these criteria provide a principled, objective, and replicable subtyping method, they can be laborious to apply, which poses a barrier to their use in clinical practice (cf. Hagopian, Falligant, Frank-Crawford, & Yenokyan, n.d.).
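As a rough sketch, the first screening step (the criterion-line logic) might be coded as follows. This is a simplification of the full structured criteria in Roane et al. (2013) and Hagopian et al. (2015), which include additional rules for aberrant data points and extended series, and the session rates are hypothetical:

```python
from statistics import mean, stdev

def automatic_function_indicated(play_rates, no_interaction_rates):
    """Simplified criterion-line check: draw lines at 1 SD above/below the
    mean of the play (control) condition, then ask whether no-interaction
    points above the upper line outnumber those below the lower line and
    constitute a majority of the series. Illustrative only -- the published
    criteria contain additional rules not implemented here."""
    m, sd = mean(play_rates), stdev(play_rates)
    upper, lower = m + sd, m - sd
    above = sum(1 for r in no_interaction_rates if r > upper)
    below = sum(1 for r in no_interaction_rates if r < lower)
    return above > len(no_interaction_rates) / 2 and above > below

# Hypothetical self-injury rates (responses/min) per session.
play = [0.5, 0.8, 0.6, 0.7, 0.4]
no_interaction = [2.5, 3.0, 0.6, 2.8, 3.2]
print(automatic_function_indicated(play, no_interaction))  # True here
```

With these invented data, self-injury in the no-interaction sessions mostly exceeds the upper criterion line, so an automatic function would be indicated; subtyping (Subtype 1 vs. 2 vs. 3) would then proceed via the structured visual-analysis criteria cited above.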

Summary

For individuals with intellectual disabilities, problem behavior can greatly impair functioning, produce injury, and lead to placement in less inclusive settings. Although the form of behavior (e.g., hitting, self-biting) is an important descriptive feature, it is simply a clinical endpoint that represents the final product of gene-environment interactions. Examining problem behavior at deeper levels of analysis has great potential to better inform treatment. One underlying dimension of problem behavior that warrants study is its function: the events that occasion and maintain it. Knowledge of the function of problem behavior has revolutionized behavioral treatment in recent years. A distinguishing feature of behavior analysis, relative to behavior modification, is the primacy of discovering and understanding the variables that control behavior under natural conditions (Mace, 1994). The development of many behavior-analytic technologies, such as the FA, illustrates this emphasis on understanding (and changing) the contingencies that maintain problem behavior by designing interventions that match its operant functions; this approach represents a dramatic shift away from simply applying default technologies to override existing (and unknown) contingencies.

While the FA methodology described by Iwata et al. (1994) may be considered a standard or default approach (cf. Jessel, Hanley, & Ghaemmaghami, 2020; Jessel, Hanley, Ghaemmaghami, & Carbone, 2021), there are many procedural modifications that clinicians can use to address concerns that arise when assessing and treating severe problem behavior. For example, if safety is a concern, a brief FA or latency FA may be useful. If time or training is a concern, a trial-based FA conducted by teachers or staff may be advantageous.
If a standard FA yields undifferentiated results, modifications to the experimental design, order of conditions, or duration of sessions may be worth considering. While the procedure of
the FA can be modified, the process of experimentally identifying variables that give rise to and maintain behavior remains the same—and remains the most effective form of behavioral assessment for severe problem behavior.

References

Aman MG, Singh NN, Stewart AW, Field CJ. The aberrant behavior checklist: A behavior rating scale for the assessment of treatment effects. American Journal of Mental Deficiency. 1985;89(5):485–491. https://doi.org/10.1037/t10453-000.
Asmus JM, Franzese JC, Conroy MA, Dozier CL. Clarifying functional analysis outcomes for disruptive behaviors by controlling consequence delivery for stereotypy. School Psychology Review. 2003;32:624–631. https://doi.org/10.1080/02796015.2003.12086225.
Beavers GA, Iwata BA, Lerman DC. Thirty years of research on the functional analysis of problem behavior. Journal of Applied Behavior Analysis. 2013;46(1):1–21. https://doi.org/10.1002/jaba.30.
Bloom SE, Iwata BA, Fritz JN, Roscoe EM, Carreau AB. Classroom application of a trial-based functional analysis. Journal of Applied Behavior Analysis. 2011;44:19–31. https://doi.org/10.1901/jaba.2011.44-19.
Danforth JS. A flow chart of behavior management strategies for families of children with co-occurring attention-deficit hyperactivity disorder and conduct problem behavior. Behavior Analysis in Practice. 2016;9(1):64–76. https://doi.org/10.1007/s40617-016-0103-6.
Davis BJ, Kahng S, Schmidt J, Bowman LG, Boelter EW. Alterations to functional analysis methodology to clarify the functions of low rate, high intensity problem behavior. Behavior Analysis in Practice. 2012;5(1):27–39. https://doi.org/10.1007/BF03391815.
Derby KM, Wacker DP, Peck S, Sasso G, DeRaad A, Berg W, … Ulrich S. Functional analysis of separate topographies of aberrant behavior. Journal of Applied Behavior Analysis. 1994;27:267–278. https://doi.org/10.1901/jaba.1994.27-267.
Derby KM, Hagopian L, Fisher WW, Richman D, Augustine M, Fahs A, Thompson R. Functional analysis of aberrant behavior through measurement of separate response topographies. Journal of Applied Behavior Analysis. 2000;33:113–117. https://doi.org/10.1901/jaba.2000.33-113.
Didden R, Korzilius H, van Oorsouw W, Sturmey P. Behavioral treatment of challenging behaviors in individuals with mild mental retardation: Meta-analysis of single-subject research. American Journal on Mental Retardation. 2006;111(4):290–298. https://doi.org/10.1352/0895-8017(2006)111[290:BTOCBI]2.0.CO;2.
Fahmie TA, Iwata BA. Topographical and functional properties of precursors to severe problem behavior. Journal of Applied Behavior Analysis. 2011;44(4):993–997. https://doi.org/10.1901/jaba.2011.44-993.
Falligant JM, Carver A, Zarcone J, Schmidt JD. Assessment and treatment of public disrobing using noncontingent reinforcement and competing stimuli. Behavior Analysis: Research and Practice. 2021;21(1):75–83. https://doi.org/10.1037/bar0000179.
Falligant JM, Pence ST, Nuhu NN, Bedell S, Luna O. Effects of feedback specificity on acquisition of trial-based functional analysis skills. Behavioral Interventions. 2021;36(3):697–707. https://doi.org/10.1002/bin.1784.
Falligant JM, Pence ST, Sullivan C, Luna O. Functional analysis and treatment of multiply maintained operant vomiting. Journal of Developmental and Physical Disabilities. 2021;33(1):153–161. https://doi.org/10.1007/s10882-020-09740-2.
Fisher WW, Greer BD, Romani PW, Zangrillo AN, Owen TM. Comparisons of synthesized and individual reinforcement contingencies during functional analysis. Journal of Applied Behavior Analysis. 2016;49(3):596–616. https://doi.org/10.1002/jaba.31.


Fritz JN, Iwata BA, Hammond JL, Bloom SE. Experimental analysis of precursors to severe problem behavior. Journal of Applied Behavior Analysis. 2013;46:101–129. https://doi.org/10.1002/jaba.27.
Gerow S, Radhakrishnan S, Davis TN, Zambrana J, Avery S, Cosottile DW, Exline E. Parent-implemented brief functional analysis and treatment with coaching via telehealth. Journal of Applied Behavior Analysis. 2021;54(1):54–69. https://doi.org/10.1002/jaba.801.
Hagopian LP, Falligant JM, Frank-Crawford MA, Yenokyan G. A simplified method for identifying subtypes of automatically maintained self-injury. Journal of Applied Behavior Analysis. (n.d.; accepted with revisions).
Hagopian LP, Rooker GW, Jessel J, DeLeon IG. Initial functional analysis outcomes and modifications in pursuit of differentiation: A summary of 176 inpatient cases. Journal of Applied Behavior Analysis. 2013;46(1):88–100. https://doi.org/10.1002/jaba.25.
Hagopian LP, Fisher WW, Thompson RH, Owen-DeSchryver J, Iwata BA, Wacker DP. Toward the development of structured criteria for interpretation of functional analysis data. Journal of Applied Behavior Analysis. 1997;30(2):313–326. https://doi.org/10.1901/jaba.1997.30-313.
Hagopian LP, Rooker GW, Yenokyan G. Identifying predictive behavioral markers: A demonstration using automatically reinforced self-injurious behavior. Journal of Applied Behavior Analysis. 2018;51(3):443–465. https://doi.org/10.1002/jaba.477.
Hagopian LP, Rooker GW, Zarcone JR. Delineating subtypes of self-injurious behavior maintained by automatic reinforcement. Journal of Applied Behavior Analysis. 2015;48(3):523–543. https://doi.org/10.1002/jaba.236.
Hagopian LP, Rooker GW, Zarcone JR, Bonner AC, Arevalo AR. Further analysis of subtypes of automatically reinforced SIB: A replication and quantitative analysis of published datasets. Journal of Applied Behavior Analysis. 2017;50(1):48–66. https://doi.org/10.1002/jaba.368.
Hammond JL, Iwata BA, Rooker GW, Fritz JN, Bloom SE. Effects of fixed versus random condition sequencing during multielement functional analyses. Journal of Applied Behavior Analysis. 2013;46(1):22–30. https://doi.org/10.1002/jaba.7.
Hanley GP. Functional assessment of problem behavior: Dispelling myths, overcoming implementation obstacles, and developing new lore. Behavior Analysis in Practice. 2012;5(1):54–72. https://doi.org/10.1007/BF03391818.
Hanley GP, Iwata BA, McCord BE. Functional analysis of problem behavior: A review. Journal of Applied Behavior Analysis. 2003;36(2):147–185. https://doi.org/10.1901/jaba.2003.36-147.
Hanley GP, Jin CS, Vanselow NR, Hanratty LA. Producing meaningful improvements in problem behavior of children with autism via synthesized analyses and treatments. Journal of Applied Behavior Analysis. 2014;47(1):16–36. https://doi.org/10.1002/jaba.384.
Holehan KM, Dozier CL, Diaz de Villegas SC, Jess RL, Goddard KS, Foley EA. A comparison of isolated and synthesized contingencies in functional analyses. Journal of Applied Behavior Analysis. 2020;53(3):1559–1578. https://doi.org/10.1002/jaba.700.
Iwata BA, Dozier CL. Clinical application of functional analysis methodology. Behavior Analysis in Practice. 2008;1(1):3–9. https://doi.org/10.1007/BF03391714.
Iwata BA, Dorsey MF, Slifer KJ, Bauman KE, Richman GS. Toward a functional analysis of self-injury. Journal of Applied Behavior Analysis. 1994;27:197–209 [Reprinted from Analysis and Intervention in Developmental Disabilities, 2, 3-20, 1982]. https://doi.org/10.1901/jaba.1994.27-197.
Jessel J, Hanley GP, Ghaemmaghami M. On the standardization of the functional analysis. Behavior Analysis in Practice. 2020;13(1):205–216. https://doi.org/10.1007/s40617-019-00366-1.
Jessel J, Hanley GP, Ghaemmaghami M, Carbone MJ. On the efficiency and control of different functional analysis formats. Education and Treatment of Children. 2021;1–16. https://doi.org/10.1007/s43494-021-00059-x.


Kahng S, Abt KA, Schonbachler HE. Assessment and treatment of low-rate high-intensity problem behavior. Journal of Applied Behavior Analysis. 2001;34(2):225–228. https://doi.org/10.1901/jaba.2001.34-225.
Kahng S, Iwata BA. Correspondence between outcomes of brief and extended functional analyses. Journal of Applied Behavior Analysis. 1999;32:149–159. https://doi.org/10.1901/jaba.1999.32-149.
Kahng S, Hausman NL, Fisher AB, Donaldson JM, Cox JR, Lugo M, Wiskow KM. The safety of functional analyses of self-injurious behavior. Journal of Applied Behavior Analysis. 2015;48:107–114. https://doi.org/10.1002/jaba.168.
Kalb LG, Vasa RA, Ballard ED, Woods S, Goldstein M, Wilcox HC. Epidemiology of injury-related emergency department visits in the US among youth with autism spectrum disorder. Journal of Autism and Developmental Disorders. 2016;46(8):2756–2763. https://doi.org/10.1007/s10803-016-2820-7.
Kennedy CH, Souza G. Functional analysis and treatment of eye poking. Journal of Applied Behavior Analysis. 1995;28(1):27–37. https://doi.org/10.1901/jaba.1995.28-27.
Kodak T, Northup J, Kelley ME. An evaluation of the types of attention that maintain problem behavior. Journal of Applied Behavior Analysis. 2007;40(1):167–171. https://doi.org/10.1901/jaba.2007.43-06.
Kuhn DE, Hardesty SL, Luczynski K. Further evaluation of antecedent social events during functional analysis. Journal of Applied Behavior Analysis. 2009;42(2):349–353. https://doi.org/10.1901/jaba.2009.42-349.
Kurtz PF, Chin MD, Huete JM, Tarbox RS, O'Connor JT, Paclawskyj TR, Rush KS. Functional analysis and treatment of self-injurious behavior in young children: A summary of 30 cases. Journal of Applied Behavior Analysis. 2003;36(2):205–219. https://doi.org/10.1901/jaba.2003.36-205.
Kurtz PF, Leoni M, Hagopian LP. Behavioral approaches to assessment and early intervention for severe problem behavior in intellectual and developmental disabilities. Pediatric Clinics. 2020;67(3):499–511.
https://doi.org/10.1016/j.pcl.2020.02.005.
Le DD, Smith RG. Functional analysis of self-injury with and without protective equipment. Journal of Developmental and Physical Disabilities. 2002;14:277–290. https://doi.org/10.1023/A:1016028522569.
Lloyd BP, Wehby JH, Weaver ES, Goldman SE, Harvey MN, Sherlock DR. Implementation and validation of trial-based functional analyses in public elementary school settings. Journal of Behavioral Education. 2015;24(2):167–195. https://doi.org/10.1007/s10864-014-9217-5.
Lydon S, Healy O, O'Reilly MF, Lang R. Variations in functional analysis methodology: A systematic review. Journal of Developmental and Physical Disabilities. 2012;24(3):301–326. https://doi.org/10.1007/s10882-012-9267-3.
Mace FC. The significance and future of functional analysis methodologies. Journal of Applied Behavior Analysis. 1994;27:385–392. https://doi.org/10.1901/jaba.1994.27-385.
McGuire K, Fung LK, Hagopian L, Vasa RA, Mahajan R, Bernal P, … Whitaker AH. Irritability and problem behavior in autism spectrum disorder: A practice pathway for pediatric primary care. Pediatrics. 2016;137(Suppl. 2):S136–S148. https://doi.org/10.1542/peds.2015-2851L.
Mueller MM, Nkosi A, Hine JF. Functional analysis in public schools: A summary of 90 functional analyses. Journal of Applied Behavior Analysis. 2011;44(4):807–818. https://doi.org/10.1901/jaba.2011.44-807.
Northup J, Wacker D, Sasso G, Steege M, Cigrand K, Cook J, DeRaad A. A brief functional analysis of aggressive and alternative behavior in an outclinic setting. Journal of Applied Behavior Analysis. 1991;24(3):509–522. https://doi.org/10.1901/jaba.1991.24-509.
Paclawskyj TR, Matson JL, Rush KS, Smalls Y, Vollmer TR. Assessment of the convergent validity of the questions about behavioral function scale with analogue functional
analysis and the motivation assessment scale. Journal of Intellectual Disability Research. 2001;45:484–494. https://doi.org/10.1046/j.1365-2788.2001.00364.x.
Pence ST, Roscoe EM, Bourret JC, Ahearn WH. Relative contributions of three descriptive methods: Implications for behavioral assessment. Journal of Applied Behavior Analysis. 2009;42(2):425–446. https://doi.org/10.1901/jaba.2009.42-425.
Piazza CC, Fisher WW, Brown KA, Shore BA, Patel MR, Katz RM, … Blakely-Smith A. Functional analysis of inappropriate mealtime behaviors. Journal of Applied Behavior Analysis. 2003;36:187–204. https://doi.org/10.1901/jaba.2003.36-187.
Querim AC, Iwata BA, Roscoe EM, Schlichenmeyer KJ, Ortega JV, Hurl KE. Functional analysis screening for problem behavior maintained by automatic reinforcement. Journal of Applied Behavior Analysis. 2013;46:47–60. https://doi.org/10.1002/jaba.26.
Retzlaff BJ, Fisher WW, Akers JS, Greer BD. A translational evaluation of potential iatrogenic effects of single and combined contingencies during functional analysis. Journal of Applied Behavior Analysis. 2020;53(1):67–81. https://doi.org/10.1002/jaba.595.
Richman DM, Wacker DP, Asmus JM, Casey SD, Andelman M. Further analysis of problem behavior in response class hierarchies. Journal of Applied Behavior Analysis. 1999;32:269–283. https://doi.org/10.1901/jaba.1999.32-269.
Rispoli M, Neely L, Healy O, Gregori E. Training public school special educators to implement two functional analysis models. Journal of Behavioral Education. 2016;25(3):249–274. https://doi.org/10.1007/s10864-016-9247-2.
Roane HS, Fisher WW, Kelley ME, Mevers JL, Bouxsein KJ. Using modified visual-inspection criteria to interpret functional analysis outcomes. Journal of Applied Behavior Analysis. 2013;46(1):130–146. https://doi.org/10.1002/jaba.1.
Roane HS, Lerman DC, Kelley ME, Van Camp CM. Within-session patterns of responding during functional analyses: The role of establishing operations in clarifying behavioral function.
Research in Developmental Disabilities. 1999;20(1):73–89. https://doi. org/10.1016/S0891-4222(98)00033-X. Rolider, N. (2007). Functional analysis of low-rate problem behavior (Doctoral dissertation, University of Florida). Retrieved from: http://gradworks.umi.com/32/81/3281592.htm. Rooker GW, DeLeon IG, Borrero CS, Frank‐Crawford MA, Roscoe EM. Reducing ambiguity in the functional assessment of problem behavior. Behavioral Interventions. 2015;30(1):1–35. https://doi.org/10.1002/bin.1400. Roscoe EM, Rooker GW, Pence ST, Longworth LJ, Zarcone J. Assessing the utility of a demand assessment for functional analysis. Journal of Applied Behavior Analysis. 2009;42(4):819–825. https://doi.org/10.1901/jaba.2009.42-819. Roscoe EM, Carreau A, MacDonald J, Pence ST. Further evaluation of leisure items in the attention condition of functional analyses. Journal of Applied Behavior Analysis. 2008;41(3):351–364. https://doi.org/10.1901/jaba.2008.41-351. Roscoe EM, Phillips KM, Kelly MA, Farber R, Dube WV. A statewide survey assessing practitioners' use and perceived utility of functional assessment. Journal of Applied Behavior Analysis. 2015;48:830–844. https://doi.org/10.1002/jaba.259. Slanzi CM, Vollmer TR, Iwata BA, Kronfli FR, Williams LP, Perez BC. Further evaluation of functional analysis screening methods in early autism intervention. Journal of Applied Behavior Analysis. 2022;9999:1–20. https://doi.org/10.1002/jaba.925. Smith RG, Churchill RM. Identification of environmental determinants of behavior disorders through functional analysis of precursor behaviors. Journal of Applied Behavior Analysis. 2002;35:125–136. https://doi.org/10.1901/jaba.2002.35-125. Strand RC,Vister OM, Eldevik S, Eikeseth S. Nature, prevalence, and characteristics of challenging behaviors in functional assessment. In: Functional assessment for challenging behaviors and mental health disorders. Cham: Springer; 2021:153–181. Thomason-Sassi JL, Iwata BA, Neidert PL, Roscoe EM. 
Response latency as an index of response strength during functional analyses of problem behavior. Journal of Applied Behavior Analysis. 2011;44:51–67. https://doi.org/10.1901/jaba.2011.44-51.



Functional analysis: Contemporary methods and applications

81

Thompson RH, Iwata BA. A comparison of outcomes from descriptive and functional analyses of problem behavior. Journal of Applied Behavior Analysis. 2007;40:333–338. https:// doi.org/10.1901/jaba.2007.56-06. Vollmer TR, Iwata BA, Duncan BA, Lerman DC. Extensions of multielement functional analyses using reversal-type designs. Journal of Developmental & Physical Disabilities. 1993;5(4):311–325. https://doi.org/10.1007/BF01046388. Wallace MD, Iwata BA. Effects of session duration on functional analysis outcomes. Journal of Applied Behavior Analysis. 1999;32(2):175–183. https://doi.org/10.1901/ jaba.1999.32-175.


CHAPTER 4

Video modeling
Florence D. DiGennaro Reeda, Sandra A. Rubya, Matthew M. Laskea, and Jason C. Vladescub

a University of Kansas, Department of Applied Behavioral Science, Lawrence, KS, United States
b Caldwell University, Department of Applied Behavior Analysis, Caldwell, NJ, United States

Video modeling

Decades of research support the use of video modeling (VM) across a wide range of settings, populations, and target behaviors. For example, interventions involving VM have been used effectively with children with and without disabilities (e.g., Charlop & Milstein, 1989; Godish, Miltenberger, & Sanchez, 2017), parents (e.g., Bagaiolo et al., 2017; Spiegel, Kisamore, Vladescu, & Karsten, 2016), athletes (e.g., Quinn, Narozanick, Miltenberger, Greenberg, & Schenk, 2020), unpaid volunteers and paid employees (e.g., Howard & DiGennaro Reed, 2014; Vladescu, Carroll, Paden, & Kodak, 2012), and other professionals (Mery et al., in press). In fact, millions of people every day practice and learn skills—such as how to prepare recipes, complete home repairs, and apply make-up—by viewing videos and video models hosted on various social media sites (Warren, 2021). VM is an approach to training in which a learner views a video depicting behavior (i.e., modeling) that should be imitated (DiGennaro Reed, Blackman, Erath, Brand, & Novak, 2018). This procedure can be used in isolation or as a component of video- or computer-based instruction. Video-based instruction (VBI)a refers to an intervention package that may include the use of slides and written instructions, voiceover narration, diagrams or pictures, VM, and/or video prompting (Park, Bouck, & Duenas, 2018). These components may be embedded into a training and exported as a video file for viewing. Computer-based instruction (CBI) refers to a training approach in which material is presented via a computer or website and

a The term video modeling has been used interchangeably with video-based instruction in published papers. In the present chapter, video modeling/video models will refer only to the presentation of a video displaying behavior the viewer should imitate, not other intervention components.

Applied Behavior Analysis Advanced Guidebook. https://doi.org/10.1016/B978-0-323-99594-8.00004-0
Copyright © 2023 Elsevier Inc. All rights reserved.


learners are required to actively respond to content (Campanaro, Vladescu, DeBar, Deshais, & Manente, in press; Geiger, LeBlanc, Hubik, Jenkins, & Carr, 2018; Williams & Zahed, 1996). VM, either alone or as part of a package of procedures, has generally been shown to improve the performance of staff working in human service settings (Erath & DiGennaro Reed, 2020; Marano, Vladescu, Reeve, Sidener, & Cox, 2020). Moreover, VM can be adopted at various stages of training and for a wide range of procedures and settings. This chapter will briefly summarize the applications of VM to staff training and their associated benefits. Next, we provide more detailed information about variations of VM, their strengths and limitations, and considerations for incorporating these variations into training. Additionally, we include a step-by-step guide for creating video models.

Applications in staff training

Training at various stages of employment

Training that incorporates VM has been used across many different settings, skills, and stages of employment, including preservice, in-service, and professional development training. Catania, Almeida, Liu-Constant, and DiGennaro Reed (2009) used VM with voice-over instruction (VMVO) as part of preservice training to teach new direct-care staff to implement discrete trial teaching (DTT) to children with autism. As part of training, the trainees watched a video model that showed a teacher and a confederate (an adult playing the role of a student) in a DTT session. Voice-over instruction accompanied the video model and described each of the modeled steps. Within 10 min of viewing the VMVO, the experimenters provided trainees with the opportunity to implement DTT with a confederate. Overall, DTT accuracy increased following VMVO, with only one trainee requiring additional feedback to meet criterion. Moreover, responding generalized to (a) single-session probes during which the trainees implemented DTT with students with autism and (b) two different tasks (i.e., match-to-auditory sample and expressive object identification). In addition, responding maintained during a one-week follow-up session. These results demonstrated that VMVO can be an effective training technique to use during initial preservice training.

As part of ongoing in-service training, Collins, Higbee, Salzberg, and Carr (2009) used VM to train staff to implement a seven-step problem-solving intervention to teach adults with disabilities how to problem solve difficult




situations such as a roommate stealing property or having a request denied. Before baseline, staff received workplace training that included one or more of the following: verbal and written instructions, modeling, role-play, and an opportunity to ask their trainer questions. Staff also completed a written competency test. During baseline, staff had access to the written instructions and role-played the intervention with the researcher. Intervention accuracy was low during baseline. As part of ongoing training, staff viewed a video model of how to implement the intervention. The video model contained performers (i.e., models) and other actors simulating an interaction between a staff member and a client, during which the staff member modeled the problem-solving intervention. After watching the video model, staff implemented the problem-solving intervention with a confederate. The VM procedures effectively improved intervention accuracy. Responding generalized to probes with an actual client and during a novel problem. In addition, correct responding maintained during 1-week and 2- to 4-week follow-ups.

After staff have experienced preservice and in-service training, VM can also be used to train advanced professional development skills. For example, Walker and Sellers (2021) used CBI with VMVO to train four customer-service specialists and one manager how to appropriately receive feedback. The VMVO materials within the CBI program included descriptions of and a rationale for each of the feedback reception skills, with video models demonstrating 100% accuracy in feedback reception performance. The CBI program included additional features to promote discrimination of the appropriate skills. For example, a total of 12 video models demonstrated different levels of accuracy for the feedback reception skills. Staff were then required to identify accurate and inaccurate examples within the program.
The CBI program provided feedback based on correct or incorrect discrimination of the target skills. Training continued until staff could correctly score all video models with at least 80% accuracy. The CBI program with VMVO was effective at teaching staff feedback reception skills. No additional feedback was required for staff to engage in the feedback reception skills. Last, reception skills maintained during 2- and 4-week follow-up sessions. Walker and Sellers demonstrated that VM packages can be effective at teaching more complex and advanced professional development skills.

A variation of staff training procedures that can be used in the preservice, in-service, or professional development stages of employment is pyramidal training. Pyramidal training involves the use of an expert professional who trains a group of staff how to train others. Erath, DiGennaro Reed,


and Blackman (2021) used VBI, which included VM, to train direct support professionals to use behavioral skills training (BST). These employees then served as peer trainers for other direct support professionals in the organization. The VBI included VMVO, guided notes,b and a quiz for staff to complete following the video model. Within the VMVO, each component of BST was first defined and then modeled with 100% accuracy. The training also incorporated two video models demonstrating accurate BST for two different skills. Training accuracy improved to 100% following VBI for two participants; the remaining two required supplemental feedback to reach 100% accuracy. Moreover, generalization was demonstrated across two novel training scenarios. These results demonstrated that VM procedures can be used in a pyramidal approach to train staff who will then train other staff.

Standalone training or packaged intervention

VM can be used in isolation or as part of a packaged intervention. Often, VM is integrated into VBI or CBI packages that include additional intervention components. These additional components can be added to the video model itself, prior to performing the behavior, or after performance. A common addition to the video model is the inclusion of voice-over instruction (i.e., VMVO). In fact, most applications of VM we have reviewed so far have included VMVO. To increase the likelihood that viewers attend to relevant features of the video model, guided notes can be added for the trainee to complete while watching the video. Intermittent quiz questions can also be presented to the trainee during or at the conclusion of the VM training. Following VM and prior to performance, supplemental written instruction can be added, in which trainees receive written information about the procedure they are expected to perform. Written instructions may include detailed task analyses, diagrams or pictures, or lengthier written text (DiGennaro Reed et al., 2018; Graff & Karsten, 2012). Similar to written instructions, job aids can be used to provide succinct information about the target behavior demonstrated in the video model. A job aid could be a checklist of the steps in a task to complete (Austin, 2000) or a flow chart outlining steps and decisions within a task (Lipschultz, Vladescu, Reeve, Reeve, & Dipsey, 2015). Following VM and after performance, consequences can be delivered to ensure the efficacy of the VM training.

b Guided notes contain an outline of the presentation with prompts for users to write relevant content.




Performance feedback is a commonly used consequence to increase and sustain performance. These additions attempt to increase the efficacy of VM. Without these enhancements, VM as a standalone intervention may be insufficient. Research has demonstrated that VM alone may be an ineffective and inefficient training procedure for some staff. For example, Lipschultz et al. (2015) used VMVO to train staff how to select, implement, and interpret the data for three different stimulus preference assessments (SPAs). The training consisted of staff watching a 19-min video that showed a simulated staff member conducting three SPAs with a simulated consumer. Staff then role-played selecting, implementing, and interpreting the results of an SPA within 5 min of watching the video model. If the staff member did not complete the SPAs with at least 90% accuracy, they were required to rewatch the video model. Although VMVO immediately increased performance, several staff needed to rewatch the video model multiple times before meeting the mastery criterion. Two of the staff watched the video model six times before meeting mastery (i.e., 116 min 48 s of viewing time). In addition, one of those staff required performance feedback to meet mastery. The other two staff needed to rewatch the video model two and three times, respectively. These results demonstrate that VM alone may not be an effective or efficient process for training staff. VM likely requires additional components, such as role-play/practice with feedback, to be effective. Therefore, when creating a VM training, a trainer should also consider other intervention components to strengthen the effects of the video model.
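The rewatch contingency in Lipschultz et al. (2015) can be sketched as a simple mastery-criterion loop. This is an illustrative sketch, not the authors' procedure: the function, variable names, and role-play scores are hypothetical; the 90% criterion is taken from the study as summarized above; and the per-viewing duration is inferred from the reported totals (116 min 48 s across six viewings, about 19 min 28 s each).

```python
# Hypothetical sketch of a rewatch-until-mastery training loop.
# The 0.90 criterion is from the study described above; the viewing
# duration is inferred from its reported totals; scores are invented.

VIDEO_MIN = 19 + 28 / 60  # one full viewing, ~19 min 28 s
MASTERY = 0.90            # proportion of SPA steps implemented correctly

def train_to_mastery(roleplay_scores):
    """Return (viewings, total minutes of video watched) for a trainee
    whose successive role-play probes yield the given accuracy scores."""
    viewings = 0
    for score in roleplay_scores:
        viewings += 1          # trainee (re)watches the full video model
        if score >= MASTERY:   # probe meets the mastery criterion
            break              # stop rewatching
    return viewings, viewings * VIDEO_MIN

# A trainee who needs six viewings spends nearly 2 h just watching video:
views, minutes = train_to_mastery([0.55, 0.70, 0.78, 0.82, 0.86, 0.93])
print(views, round(minutes, 1))  # 6 viewings, ~116.8 min
```

Framing the procedure this way makes the efficiency problem visible: every failed probe adds a full viewing, so supplemental components (e.g., feedback after the probe) shorten the loop rather than lengthen the video.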

Benefits of VM

VM has many benefits that make it a desirable training component, particularly compared to in vivo/live modeling. Once a video model has been recorded with 100% integrity (i.e., accuracy), VM ensures each trainee will receive the modeled demonstration in the exact same way. In contrast, in vivo modeling requires the trainer to demonstrate the target skill perfectly every time the model is provided. Live demonstrations could increase the likelihood that the trainer makes a mistake in the model, thereby reducing the training's procedural integrity. VM also affords the opportunity to prepare, record, and depict multiple exemplars that capture a range of scenarios the trainee will likely experience on the job. On the other hand, programming multiple exemplars within the context of in vivo modeling may be cumbersome if not impossible. For example, video models can be recorded in the environments in which the employee will work, whereas in vivo modeling may only occur in


a context different from the one where the trainee is expected to perform, such as a training room. Video models can be viewed multiple times (Vladescu et al., 2012) and can be maintained for future trainings. If additional demonstrations are required, a video can easily be rewatched, whereas in vivo modeling requires the trainer to schedule another demonstration, which may be challenging given other trainer responsibilities. Another benefit of VM is that video files can be distributed to trainees regardless of geographical location through an organization's internal systems (e.g., server, portal, email). In contrast, in vivo modeling requires the trainer and trainee to be in the same location. We feel it is particularly important to highlight the value of these benefits for maximizing trainer resources. Because a video model can be easily distributed and viewed, a trainer is not required to travel to various organizational locations to provide modeling. Furthermore, once a video model is made, it can be reused, whereas in vivo modeling requires the trainer to model the skill every time the skill needs to be taught. The time saved by eliminating travel and in vivo demonstrations could allow a trainer to attend to other job responsibilities.

Despite these benefits, organizations may still rely on in vivo modeling because of the upfront investment of resources required to create video models. To create video models, the trainer must define the critical features of the behavior(s) to be modeled, usually through a task analysis. The setting in which the models should be recorded also needs to be identified. In addition, if VMVO is adopted, a narration script must be prepared. Next, performers and other actors to be used in the video model must be recruited and trained; then the model can be recorded and edited. Although the initial cost to develop video models is greater than that of in vivo modeling, the long-term cost benefits of VM make it a justifiable training method.
Because in vivo modeling requires the ongoing presence of a trainer, over time VM may be a more efficient training approach. In fact, Geiger et al. (2018) demonstrated that CBI with VMVO became a more cost-effective approach than in-person BST (with in vivo modeling) once 62 trainees completed training. Thus, the up-front investment of resources is worthwhile for situations in which the video model can be used repeatedly, such as during preservice or in-service training. It may not be beneficial to invest in VM development in situations where the need for the model is infrequent. In the latter situations, in vivo modeling would be more cost efficient. Similarly, if the desired skill is difficult to simulate and capture in a video, then in vivo modeling is preferred. For




example, on-the-job modeling of how to respond to severe aggression or self-injury may be difficult to simulate because some dimensions of these behaviors (e.g., magnitude) cannot be captured. However, the critical behaviors to perform in those situations could likely be included in a video model. Thus, our recommendation is to use VM for trainings that will be delivered frequently, thereby making up for the cost of initial development.
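The trade-off described above (a one-time development cost recovered through a lower recurring cost per trainee) reduces to a simple break-even calculation. The dollar figures below are hypothetical placeholders for illustration, not values reported by Geiger et al. (2018); only the structure of the comparison reflects the text.

```python
# Break-even sketch: fixed VM development cost vs. recurring trainer cost.
# All costs are invented placeholders, not data from the cited study.
import math

def breakeven_trainees(vm_dev_cost, vm_per_trainee, invivo_per_trainee):
    """Smallest trainee count at which total VM cost drops below total
    in vivo cost, or None if VM never saves money per trainee."""
    saving = invivo_per_trainee - vm_per_trainee
    if saving <= 0:
        return None  # no per-trainee saving; the upfront cost never pays off
    return math.floor(vm_dev_cost / saving) + 1

# E.g., $3,000 to script, record, and edit the video model; $10 of staff
# time per VM viewing; $60 of trainer time per in vivo demonstration:
print(breakeven_trainees(3000, 10, 60))  # 61: VM is cheaper from the 61st trainee on
```

Running such a calculation with local cost estimates before committing to VM development operationalizes the recommendation above: frequent trainings cross the break-even point quickly, infrequent ones may never reach it.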

Variations of VM

As already noted, VM has numerous variations that must be considered before developing training. According to Marano, Vladescu, Reeve, Sidener, and Cox (2020), researchers frequently combine variations to meet the training needs of the setting or trainees. Below we describe common variations, briefly summarize illustrative research that incorporates the variations, and identify their benefits and limitations. To supplement this information, Table 1 presents a step-by-step guide with relevant steps, processes, and questions trainers must answer to create video models.

Selection of the performer

Identifying the performer (i.e., the model) and other actors is an important first step in developing training. Video models used for staff training purposes often include confederates—individuals simulating a role, such as a trainee (e.g., Erath et al., 2021) or learner (e.g., Vladescu et al., 2012). Marano, Vladescu, Reeve, Sidener, and Cox (2020) documented that nearly 63% of the reviewed VM studies included confederates. An advantage of using confederates is being able to script the responses of the model and actors, which allows for the opportunity to capture the full range of stimulus conditions that could occur in the work setting and relevant responses to these conditions (for a detailed example, see pp. 9–21 of the Online Supporting Information that accompanies Romer et al., 2021). For example, a video model demonstrating how a trainee should respond to the various types of learner responses during DTT must include content on how to respond to a correct response (e.g., praise, token), an incorrect response (e.g., error correction), and no response (e.g., error correction, prompt hierarchy). Incorporating a confederate learner ensures that all types of learner responses are captured, which may promote generalization. On the other hand, including a service recipient in the creation of the video may introduce errors and disrupt quality if that individual is unable to perform one or more responses given the target skill. As a related example, a trainee

Table 1  Step-by-step guide for creating video models.

Step 1: Secure necessary time and resources
Process:
- Plan the content (setting, performers, behavior, number of examples and nonexamples)
- Identify performers (i.e., models) and other actors
- Solicit volunteers to serve as models and actors
- Obtain consent, if necessary
- Gather materials (video camera, video-editing software, microphone, materials)
- Use behavioral skills training to train performers
Considerations: In what setting will the model be recorded? Who will be captured in the background of the recording? Are there risks to privacy? Is there a task analysis of the target behavior? What activities are taking place during the recorded interaction? How many modeled exemplars will be displayed and across what contexts? Will nonexamples be shown? If so, how many? How will you determine common errors? Who will model the target behavior? Who else will interact with the model during the recording? Are the interactions simulated? Have you obtained commitments from volunteers? Has written consent been obtained for video recording, if necessary? What equipment needs to be purchased and how will it be stored for safekeeping? Have sufficient resources been arranged to adopt behavioral skills training to teach models and other actors the behaviors they will perform in the video? When will training occur and in what location? What criterion will be used to ensure perfect demonstrations of the target skills?

Step 2: Record the video footage
Process:
- Determine if first- or third-person point-of-view is preferable
- Ensure the footage captures demonstrations of each component of the target skill individually and together in a synthesized format
- Record multiple exemplars of the target skill; record common nonexamples, if necessary
Considerations: Given the target skill, is first- or third-person point-of-view most appropriate? If first-person, what video recording modifications are necessary to capture the best video model (e.g., GoPro, screen capture)? If third-person, where will you position the camera to capture the relevant skill with minimal extraneous distractions? Is it necessary to record each step separately or splice a recording of the entire procedure? To foster generalization, how many examples of the target skill should be modeled? Will these models occur with individual components of the skill or the synthesized format? What and how many nonexamples will be recorded?

Step 3: Compose on-screen text
Process:
- Determine the necessity of on-screen text
- Write the on-screen text
- Compose and embed text within the training using editing software
Considerations: Is on-screen text necessary (or do other training components serve the same function)? Is the reading level of the text at or below the level expected of trainees (i.e., will trainees comprehend the text)? Is the text presented on a separate slide or atop the video model? Is the text clear and concise? Does the text follow the same order as other materials (e.g., guided notes, user guide)? Are critical overt and covert behaviors described? Does the text match narration, if used? Can trainees clearly see the text? Does the video pause while lengthy text is presented? Is the font light or dark enough to read even as the image in the video model changes? Is the text large enough? Does the text allow trainees to view critical aspects of the video model?

Step 4: Record narration (i.e., voiceover instruction)
Process:
- Determine the necessity of narration
- Write the narration script
- Record and embed the narration script
Considerations: Is narration necessary (or do other training components serve the same function)? Does the video model platform have audio capabilities? Is the script clear and concise? Does the script follow the same order as other materials (e.g., guided notes, user guide)? Are critical overt and covert behaviors described? Does the script match on-screen text, if used? Is a microphone available? Does the microphone have a pop filter (optional)? Is there silence (i.e., absence of white noise) in the background? Does the speed of the narration match the speed of the video model? Do other sounds in the video model need to be adjusted (e.g., someone talking)? Is the narration sound balanced (e.g., is the narration loud enough)?

Step 5: Assemble the video model
Process:
- Identify software to edit and assemble the video model
- Add video footage, on-screen text, and narration to the software
- Export the completed video model as a video file
Considerations: Does the video model require significant editing? (If so, consider editing software [e.g., Camtasia, DaVinci Resolve]; if not, consider PowerPoint or Keynote.) Will the size of the video and on-screen text be visible on the viewing device (e.g., laptop, iPad, cell phone)? Is narration audible while using headphones? Is narration audible without headphones? Will the video need to be played across a variety of devices (e.g., Windows, Mac)? If so, consider exporting as an MP4 file type.

Step 6: Distribute the video model
Process:
- Determine where to store the video model file
- Share the video model with required staff
- Arrange for staff to view the video model
Considerations: Will the video model be stored on the company server or another location (e.g., Microsoft Teams), individual devices (e.g., laptops), or online (e.g., YouTube)? Do staff need training on how to access the video model? Will staff need training on how to view the video model? Have staff viewed the video model prior to implementing the procedure modeled? In what ways can you ensure staff can perform the modeled procedure correctly?




in Catania et al. (2009) showed elevated levels of DTT accuracy during baseline generalization probes because the child did not make errors when responding, and the trainee was not required to implement the error-correction procedure (which they could not do accurately). A potential limitation of using confederates is that the video model depicts a simulation rather than real-world interactions, which may not fully capture the range of conditions a trainee will experience outside of training. If this is the case, a trainee might develop skills that would be less likely to generalize to the work setting.

Point-of-view/perspective

The point of view (POV) used in the video model should be selected to reduce irrelevant and distracting stimuli, provide a more realistic view of the natural setting, and increase the saliency of discriminative stimuli (Lee, 2015; Mason, Davis, Boles, & Goodwyn, 2013). Trainers can increase the saliency of discriminative stimuli by angling the camera in such a way that antecedents signaling trainee behavior are highlighted. For instance, if a video model demonstrates how staff should respond to child behavior during DTT, the video would only depict the child engaging in a response without other environmental stimuli such as the performer, other individuals, or items. Showing only the child's response may increase the likelihood that this relevant aspect evokes the desired trainee behavior. POV options include first person, third person, or a combination of these perspectives. Considering which perspective to capture may enhance the training experience and maximize trainee performance.

In first-person POV,c the trainer films the video from the performer's perspective; that is, the trainee observes the perspective of the performer engaged in the desired responses. For instance, Tyner and Fienup (2015) and Berkman, Roscoe, and Bourret (2019) used screen capture software to record a computer screen and audio to teach participants how to graph data using Microsoft Excel and GraphPad Prism. A strength of using first-person POV is that trainees view the demonstration of the skill from the perspective in which they will engage in the behavior. A limitation of this perspective is that recording may require special consideration of camera angles so that trainers keep critical aspects of the environment within the frame. In addition, first-person POV can be used with only a limited number of target behaviors.

c Sometimes referred to as point-of-view only, with no reference to first person (Lee, 2015; Mason et al., 2013).


In third-person POV, the trainer films the video from a viewer's perspective (i.e., the trainee observes the performer). For example, Erath et al. (2021) used a video model that involved actors performing the target behavior in front of a camera so the trainee could observe the body and behaviors of the person they would imitate. A strength of third-person POV is that the video captures the performer's body (i.e., positioning) as well as the environment. Relative to first-person POV, third-person POV presents a more holistic view of the environment with limited bias toward an individual's perspective. That is, the performer may be as salient as other environmental stimuli instead of only the stimuli the performer views. A potential limitation of using third-person POV is that the trainee may view the skill opposite to how the action should be performed (Quinn et al., 2020). For example, a performer who rotates stimuli from left to right during an SPA will appear to rotate them from right to left on the video model. This issue can easily be addressed when creating the video model, as the performer can adjust the modeled behavior so it is performed correctly from the perspective of the viewer (e.g., the performer rotates preference assessment stimuli from right to left so the rotation appears as left to right on the model). Third-person POV poses additional limitations compared to first-person POV, such as increased production effort, because the trainer must consider the performer's body positioning as well as other stimuli in the environment that may be distracting (Mason et al., 2013). Mason et al. (2013) reported third-person POV as more common than first-person POV for teaching independent living and social skills to individuals with intellectual and developmental disabilities. We were unable to locate information about which form is most common in staff training applications. Based on our collective experience in research and practice, we suspect third-person POV is most common.
Regardless of which POV is adopted, trainers should strive to adopt a perspective that best guides trainees to correctly implement procedures. Training can also incorporate video models using both perspectives. For example, Delli Bovi, Vladescu, DeBar, Carroll, and Sarokoff (2017) used a combination of first- and third-person POV to teach staff trainees how to conduct an SPA. The training began using the third-person POV during (a) VMVO in which trainees viewed each step of the preference assessment, and (b) a video model (without voice-over) of the entire assessment. The next portion of the training used first-person POV via screen recording. During first-person VMVO, trainees watched the trainer model the steps to calculate selection percentages, rank item preference, and select relevant



Video modeling

95

teaching stimuli. Staff trainees reached mastery after two training sessions and performance near 100% during generalization probes with an actual consumer using toys and edibles. A strength of this study is that VM incorporated both first- and third-person POV based on training relevance (i.e., skills to be learned guided the development of training). Thus, we recommend trainers consider POV to increase the saliency of relevant discriminative stimuli.
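The scoring steps modeled in that first-person segment (calculating selection percentages and ranking item preference) amount to simple arithmetic. The following is a minimal sketch of that calculation, not taken from any of the cited studies; the item names and trial data are invented for illustration.

```python
# Hypothetical sketch (not from the chapter): scoring a paired-stimulus
# preference assessment (SPA). Each trial presents two items and records
# which one the learner selects; an item's selection percentage is the
# number of times it was selected divided by the number of times it was
# presented. Item names and data are invented for illustration.
from collections import Counter

def score_spa(trials):
    """trials: list of (item_a, item_b, selected_item) tuples."""
    presented = Counter()
    selected = Counter()
    for a, b, choice in trials:
        presented[a] += 1
        presented[b] += 1
        selected[choice] += 1
    pct = {item: 100 * selected[item] / presented[item] for item in presented}
    # Rank items from most- to least-preferred
    return sorted(pct.items(), key=lambda kv: kv[1], reverse=True)

trials = [
    ("toy car", "bubbles", "bubbles"),
    ("toy car", "puzzle", "toy car"),
    ("bubbles", "puzzle", "bubbles"),
]
print(score_spa(trials))  # [('bubbles', 100.0), ('toy car', 50.0), ('puzzle', 0.0)]
```

In practice these percentages are computed on a paper or spreadsheet data sheet, but the logic a trainee must master is exactly this: tally presentations, tally selections, divide, and rank.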

Number of exemplars

When developing video models, a critical decision concerns the number of exemplars demonstrated. Variation exists among the published literature (Marano, Vladescu, Reeve, Sidener, & Cox, 2020). In some applications, video models contained exemplars of each step of a procedure (e.g., Mitteer, Greer, Fisher, & Cohrs, 2018), whereas other researchers provided exemplars of the full procedure (e.g., DiGennaro Reed, Codding, Catania, & Maguire, 2010). Another approach is to combine these features and include one or more exemplars of each step of a procedure followed by one or more exemplars of the full procedure (e.g., Deliperi, Vladescu, Reeve, Reeve, & DeBar, 2015; Erath et al., 2021). As an illustration, Deliperi et al. (2015) used VMVO that incorporated one exemplar of each step of an SPA: identify stimuli to use in the SPA, administer the SPA, calculate selection percentages, and select a stimulus to use for teaching. Trainees then watched a video model that depicted a single administration of the entire SPA. Similarly, Erath et al. (2021) showed one exemplar of each component of BST followed by two exemplars of the full training package. Unfortunately, Marano, Vladescu, Reeve, Sidener, and Cox (2020) documented that relatively few VM studies report the number of exemplars they incorporate, which could negatively impact replication for research purposes and adoption by trainers. Presumably, multiple exemplars of the modeled skills (for each step or the full procedure) facilitate generalization across settings, learners, and instructional materials (Stokes & Baer, 1977). The current state of the research prevents us from stating with certainty that multiple exemplar training within VM promotes generalization; after all, multiple exemplars may be insufficient to produce generalized responding (Holth, 2017).
Moreover, the lack of detail about the number of exemplars in published research and the absence of parametric evaluations make it difficult to offer concrete guidance to trainers. Based on our experience in both research and practice, we recommend trainers use the approach described by Deliperi et al. (2015) and Erath et al. (2021), both of which incorporated at least one exemplar for each step of the procedure as well as at least one demonstration of the full procedure. In addition to the above-mentioned concerns about the state of the research, another limitation involves the increased upfront resources needed to produce video models containing multiple exemplars. Trainers should expect to devote more time to planning and recording the video model, training the performers, and showing the video model to trainees because the depiction of multiple exemplars will produce a lengthier video. Despite this limitation, preparing a high-quality video model with multiple exemplars may yield a return on the investment if effective.

Use of nonexamples

Nonexamples refer to irrelevant or incorrect behaviors incorporated into video models as a guide for behaviors that trainees should not perform. Showing nonexamples may help the trainee generate rules about what not to do. Explanations or rationales that accompany nonexamples could assist with rule generation about why trainees should not engage in the modeled behaviors, which could help them avoid undesirable or unsafe situations. In their review of staff training strategies that minimize trainer involvement, Marano, Vladescu, Reeve, Sidener, and Cox (2020) reported that only 17.2% of trainings included nonexamples. Unfortunately, the extent to which nonexamples help trainees avoid common mistakes is unknown because research has not experimentally compared models with and without nonexamples. Researchers may exclude nonexamples for several reasons: nonexamples take more time to develop and record; there is a lack of consensus about common errors with a given procedure; it may be unclear which behavior to prioritize as a nonexample; or trainees arguably should not view incorrect models because they may inadvertently imitate them. Although researchers have not directly evaluated the influence of video models depicting nonexamples on trainee performance, nonexamples have been incorporated into training for various skills. For example, several researchers have asked participants to evaluate correct and incorrect performances depicted in video scenarios (e.g., Campanaro & Vladescu, in press; Eldevik et al., 2013; Howard & DiGennaro Reed, 2014; Mailey et al., 2021; Marano, Vladescu, Reeve, & DiGennaro Reed, 2020; Romer et al., 2021). Although these studies showed performance improvements after training, the researchers did not experimentally evaluate the effects of nonexamples on performance. Thus, future research should address this area.




To ensure video models containing nonexamples are relevant for on-the-job performance, trainers should determine common mistakes made with the procedure being taught. This task may be accomplished by reviewing the literature for information about steps trainees commonly implement incorrectly, discussing common errors with people who regularly perform the procedure, and conducting a retrospective or descriptive analysis. Retrospective analysis involves reanalyzing existing data to determine common errors made when performing a procedure. Descriptive analysis involves collecting new data to identify common errors emitted by staff implementing a procedure (e.g., Breeman, Vladescu, DeBar, Grow, & Marano, 2020; Carroll, Kodak, & Fisher, 2013). Based on our experience, the advantages of seeking guidance from relevant stakeholders and reviewing previous or new data are that trainers can collect information about the specific setting, barriers to performance, rationales for why the skill should be implemented as planned, and ways to avoid common mistakes. Although a complete discussion of the potential importance of nonexamples in instruction is beyond the scope of this chapter (interested readers should review, among other sources, Critchfield & Twyman, 2014, and Engelmann & Carnine, 1991), video models that incorporate nonexamples may offer several potential advantages. Nonexamples may aid in the trainee’s generation of rules to avoid dangerous situations and increase the likelihood of obtaining high-quality outcomes as trainees avoid making procedural-integrity errors. Despite these potential advantages, experimental data are lacking on the direct benefits of nonexamples, and there is a risk that trainees may inadvertently imitate the nonexample.
However, trainings that incorporated nonexamples as part of a training package have produced high levels of staff performance, suggesting trainees may not imitate errors (e.g., Campanaro & Vladescu, in press; Eldevik et al., 2013; Howard & DiGennaro Reed, 2014; Mailey et al., 2021; Marano, Vladescu, Reeve, & DiGennaro Reed, 2020; Romer et al., 2021).

Number of video viewings

Trainers may permit trainees to watch a video model once or multiple times. Because some tasks are complex, it may be beneficial to allow trainees to watch a video model more than once. Doing so may reduce the total duration of training or produce higher levels of performance than a single viewing would produce. Trainers have arranged for multiple viewings of a video model in several ways. Delli Bovi et al. (2017) produced a video model that first depicted individual videos of each component of an SPA being implemented. Trainees then viewed a second portion of the video model that depicted the entire SPA implemented without interruption. Another option is to teach trainees how to re-watch the video models. For example, Marano, Vladescu, Reeve, and DiGennaro Reed (2020) taught trainees how to start, stop, rewind, and advance the videos, which allowed participants to view the videos at their own pace. If trainees cannot control the video model (e.g., the video is only available for a limited amount of time or training is in a group format), trainees could ask to re-watch the training. For instance, Quinn et al. (2020) allowed participants to watch a brief video model multiple times, but only if requested. In a group format, it may be beneficial for trainers to ensure trainees know when they can ask to view video models again, to offer to show the video model multiple times, or to arrange for trainees to have access to the video models outside of training so they can view them on their own. One advantage of viewing a video model more than once is that trainees are repeatedly exposed to correct models, which may increase maintenance and generalization of the target skill while allowing trainees to view steps they may have missed. A limitation is that viewing videos multiple times may increase the duration of training. For example, some researchers have demonstrated that less training time was required when feedback was provided than when trainees were required to re-watch a video model (Giannakakos, Vladescu, Kisamore, & Reeve, 2016; Nottingham, Vladescu, Giannakakos, Schnell, & Lipschultz, 2017). A cost-benefit analysis could determine whether the additional duration of training produces a return on investment as evidenced by higher treatment integrity, better client outcomes, reduced staff turnover, and fewer injuries.

On-screen text

A video model can include on-screen text, in which words are presented on the screen to highlight relevant steps of a procedure, specific actions taken by the performer, or environmental stimuli to which the trainee should attend. On-screen text may be helpful when a trainer is not present or additional materials, such as guided notes or trainer manuals, are not provided. The purpose of on-screen text is to highlight relevant discriminative stimuli and describe covert behavior (e.g., when a therapist counts down the number of trials until the next token can be delivered) to guide the trainee’s attention. On-screen text can take multiple forms. Trainers can present one or two words atop the screen of a video model to help the trainee identify relevant environmental stimuli or the behavior of the performer as it occurs. Additionally, on-screen text can include instructions, which are brief descriptions of the performer’s behavior. Instructions could be positioned atop the screen of a video model or on a PowerPoint or Keynote slide embedded into the video. Slides may also include a list, flowchart, diagram, table, or figure. They can also be beneficial for providing an overview of the training process or steps in a behavior chain. Although there are several variations of on-screen text, relatively few authors have described its format. Marano, Vladescu, Reeve, Sidener, and Cox (2020) reported that 11% of studies included on-screen text. Nottingham et al. (2017) included a table in their manuscript describing their use of on-screen text across several SPAs. The text was associated with key features of the video model, such as the type of preference assessment shown (e.g., “Single stimulus” or “Paired stimulus”) or where to place an item when starting a trial (e.g., “Place the item 1 foot from student”). The number of words in Nottingham et al. ranged from 1 to 11. Presenting words atop the screen of a video model may work best when the text is brief; trainers can incorporate slides for lengthier text. Trainers can present on-screen text in several ways, which is an advantage for trainings that need flexibility. Additionally, on-screen text can highlight discriminative stimuli by guiding trainee attention to relevant antecedents, and it can describe covert and overt behavior that is and is not shown. For example, during DTT, text could describe the passage of time during the intertrial interval (i.e., counting the number of seconds silently is a covert behavior) or describe overt behaviors depicted on the video (i.e., implementation of other components of a procedure).
However, a limitation of on-screen text is that it requires a certain level of reading comprehension, and the text may obstruct or distract from images on the screen. Trainers can address these issues in several ways, including using diagrams or pictures, providing opportunities for practice and feedback, and recording voiceover instruction.

Voiceover instruction

Like on-screen text, voiceover instruction can increase the saliency of discriminative stimuli and describe covert and overt behavior. Trainers can incorporate voiceover by recording a narrative that describes the behaviors displayed on the screen or provides instructions, then overlaying that narration on the video. Marano, Vladescu, Reeve, Sidener, and Cox (2020) reported that 55.5% of studies incorporated voiceover instruction and that 22.2% incorporated voiceover instruction and on-screen text. Trainers may find voiceover instruction particularly useful relative to on-screen text when concerns exist about the trainee’s reading repertoire, obstruction of the video model, or competing visual stimuli. Deliperi et al. (2015) noted that instruction manuals and on-screen text require some minimum level of reading comprehension or may be presented at a reading level trainees have not acquired. Trainers can mitigate this issue by incorporating voiceover instruction. Another issue with on-screen text is that words may obstruct or compete with viewing of the video model. Incorporating voiceover instruction may eliminate these issues while providing the same information. Voiceover instruction is commonly used in research. Lipschultz et al. (2015) and Nottingham et al. (2017) included voiceover instruction and on-screen text with a video model to teach trainees how to implement SPAs. Day-Watkins, Pallathra, Connell, and Brodkin (2018) used VMVO as part of a BST package consisting of instructions, models (i.e., VMVO), role-play, and feedback to teach staff how to implement a video modeling intervention for teaching social skills to adults with autism spectrum disorder. Results of these studies suggested the interventions improved performance. Although voiceover was used in tandem with additional components, staff performance improved after training, which suggests voiceover instruction is beneficial. A potential barrier to voiceover instruction is that it may require additional resources, such as a microphone to record the instruction, editing software to incorporate it, and speakers so trainees can hear the voiceover. These barriers can be mitigated using publicly or commonly available resources, notably libraries, smartphones, headphones, and free voice- and screen-recording software.

Other training considerations

Previously we described variations of video models that trainers may adopt to enhance the effectiveness of VM. Next, we summarize supplemental training components often used with and without VM. These considerations include the use of instructional materials and activities that require active responding.

Instructional materials

Aside from on-screen text, trainers may adopt other forms of instructions. Common examples of instructional material include enhanced written instructions (EWI), user guides, lectures, presentation slides, and employee manuals; some of these were described previously but differ from on-screen text in that the materials are not embedded into the video model. For example, Graff and Karsten (2012) evaluated written instructions and EWI on trainee implementation of SPAs. Written instructions included steps from the methods section of a research article. EWI included a detailed data sheet, pictures, and step-by-step instructions written without technical jargon. Trainee performance improved when EWI was used compared to written instructions. Thus, instructional materials may improve performance, but trainers should be aware that not all forms of instructional material produce the same level of trainee performance. Other research using these procedures has shown mixed effectiveness. For example, Berkman et al. (2019) showed that participants were unable to accurately create a single-subject graph in GraphPad Prism during baseline when they had access to the platform’s user guide. When EWI, rationales, and VMVO were introduced, participants achieved mastery-level performance. Results from these and other studies lead us to caution trainers that instructional materials should not be used alone; rather, trainers should supplement these materials with other training components, such as video modeling and active responding. Notably, Shapiro and Kazemi (2017) reported that in approximately 70% of studies included in their staff training literature review, participants were directed to implement the target behavioral technology after access to instructions was provided, but none of the participants demonstrated the desired level of integrity.

Training activities that require active responding

Active responding requires trainees to engage in an overt behavior that demonstrates the extent to which they can perform the skills taught (States, Detrich, & Keyworth, 2019; Twyman & Heward, 2018).
Common active-responding components include quizzes, guided notes, ratings, checklists, and role-plays, and they can range from high- to low-tech (e.g., tablets, paper-and-pen activities). Based on trainees’ performance during active responding, trainers can identify portions of a skill that are implemented well and behaviors that need improvement. Marano, Vladescu, Reeve, Sidener, and Cox (2020) reported that 58.6% of trainings used active responding, but researchers have not systematically evaluated its isolated effects in the staff training literature. Vladescu et al. (2022) compared VM to computer-based instruction (CBI) on trainees’ SPA integrity. Participants assigned to the VM group were not required to engage in active responding, whereas participants assigned to the CBI group were required to answer questions throughout the training. Results showed VM and CBI were similarly effective and both procedures were associated with favorable social validity ratings, but trainees in the CBI group made fewer errors and reported feeling more confident than trainees in the VM group. These results suggest trainers should incorporate active responding in their training. A main advantage of active responding is that trainers can address incorrect responses before trainees work independently. Trainers should be aware, however, that some forms of active responding (e.g., quizzes) test knowledge about a behavioral technology rather than the degree to which the trainee can implement it. Ensuring trainees can accurately perform the skill is critical for optimizing client outcomes and avoiding potential liabilities. To that end, trainers should incorporate role-play/practice and feedback as active-responding components. The inclusion of active-responding components can increase performance outcomes and can supplement information shown in the video models. For example, Hansard and Kazemi (2018) evaluated the effectiveness of video models with voiceover instruction and on-screen text and images on trainees’ implementation of SPAs. Trainees were provided with the materials needed to implement the SPA and were directed to rehearse the SPA steps while viewing the video. Trainees implemented the SPA with high integrity after a single viewing and rehearsal.

Summary

Researchers have increasingly sought to evaluate the usefulness of VM for training human service staff. The benefits of VM are many, as perhaps exemplified in the extant literature by the number of VM variations and applications. Yet practitioner adoption of VM may be somewhat limited given potential barriers and the lack of guidance for conceptualizing and creating video models. As such, we approached this chapter from that perspective. In doing so, we provided an overview of steps that bear consideration when creating video models, discussed video model variations, identified strengths and limitations of these variations, and described processes for optimizing the creation and impact of video models. We hope this chapter provides information that sufficiently guides the behavior of those wishing to incorporate video models in their staff training programs.




References

Austin, J. (2000). Performance analysis and performance diagnostics. In J. Austin, & J. E. Carr (Eds.), Handbook of applied behavior analysis (pp. 321–349). Context Press.
Bagaiolo, L. F., Mari, J. J., Bordini, D., Ribeiro, T. C., Martone, M. C. C., Caetano, S. C., et al. (2017). Procedures and compliance of a video modeling applied behavior analysis intervention for Brazilian parents of children with autism spectrum disorders. Autism, 21(5), 603–610. https://doi.org/10.1177/1362361316677718
Berkman, S. J., Roscoe, E. M., & Bourret, J. C. (2019). Comparing self-directed methods for training staff to create graphs using GraphPad Prism. Journal of Applied Behavior Analysis, 52(1), 188–204. https://doi.org/10.1002/jaba.522
Breeman, S. L., Vladescu, J. C., DeBar, R. M., Grow, L. L., & Marano, K. E. (2020). The effects of procedural-integrity errors during auditory-visual conditional discrimination training: A preliminary investigation. Behavioral Interventions, 35(2), 203–216. https://doi.org/10.1002/bin.1710
Campanaro, A. M., & Vladescu, J. C. (2022). Using computer-based instruction to teach implementation of discrete-trial instruction: A replication and extension. Behavior Analysis in Practice. https://doi.org/10.1007/s40617-022-00731-7
Campanaro, A. M., Vladescu, J. C., DeBar, R. M., Deshais, M. A., & Manente, C. J. (2022). Using computer-based instruction to teach implementation of behavioral skills training. Journal of Applied Behavior Analysis. In press.
Carroll, R. A., Kodak, T., & Fisher, W. W. (2013). An evaluation of programmed treatment-integrity errors during discrete-trial instruction. Journal of Applied Behavior Analysis, 46(2), 379–394. https://doi.org/10.1002/jaba.49
Catania, C. N., Almeida, D., Liu-Constant, B., & DiGennaro Reed, F. D. (2009). Video modeling to train staff to implement discrete-trial instruction. Journal of Applied Behavior Analysis, 42(2), 387–392. https://doi.org/10.1901/jaba.2009.42-387
Charlop, M. H., & Milstein, J. P. (1989). Teaching autistic children conversational speech using video modeling. Journal of Applied Behavior Analysis, 22(3), 275–285. https://doi.org/10.1901/jaba.1989.22-275
Collins, S., Higbee, T. S., Salzberg, C. L., & Carr, J. (2009). The effects of video modeling on staff implementation of a problem-solving intervention with adults with developmental disabilities. Journal of Applied Behavior Analysis, 42(4), 849–854. https://doi.org/10.1901/jaba.2009.42-849
Critchfield, T. S., & Twyman, J. S. (2014). Prospective instructional design: Establishing conditions for emergent learning. Journal of Cognitive Education and Psychology, 13(2), 201–217. https://doi.org/10.1891/1945-8959.13.2.201
Day-Watkins, J., Pallathra, A. A., Connell, J. E., & Brodkin, E. S. (2018). Behavior skills training with voice-over video modeling. Journal of Organizational Behavior Management, 38(2–3), 258–273. https://doi.org/10.1080/01608061.2018.1454871
Deliperi, P., Vladescu, J. C., Reeve, K. F., Reeve, S. A., & DeBar, R. M. (2015). Training staff to implement a paired-stimulus preference assessment using video modeling with voiceover instruction. Behavioral Interventions, 30(4), 314–332. https://doi.org/10.1002/bin.1421
Delli Bovi, G. M., Vladescu, J. C., DeBar, R. M., Carroll, R. A., & Sarokoff, R. A. (2017). Using video modeling with voice-over instruction to train public school staff to implement a preference assessment. Behavior Analysis in Practice, 10(1), 72–76. https://doi.org/10.1007/s40617-016-0135-y
DiGennaro Reed, F. D., Blackman, A. L., Erath, T. G., Brand, D., & Novak, M. D. (2018). Guidelines for using behavioral skills training to provide teacher support. Teaching Exceptional Children, 50(6), 373–380. https://doi.org/10.1177/0040059918777241
DiGennaro Reed, F. D., Codding, R., Catania, C. N., & Maguire, H. (2010). Effects of video modeling on treatment integrity of behavioral interventions. Journal of Applied Behavior Analysis, 43(2), 291–295. https://doi.org/10.1901/jaba.2010.43-291
Eldevik, S., Ondire, I., Hughes, J. C., Grindle, C. F., Randell, T., & Remington, B. (2013). Effects of computer simulation training on in vivo discrete trial teaching. Journal of Autism and Developmental Disorders, 43(3), 569–578. https://doi.org/10.1007/s10803-012-1593-x
Engelmann, S., & Carnine, D. (1991). Theory of instruction: Principles and applications (Rev. ed.). ADI Press.
Erath, T. G., & DiGennaro Reed, F. D. (2020). A brief review of technology-based antecedent training procedures. Journal of Applied Behavior Analysis, 53(2), 1162–1169. https://doi.org/10.1002/jaba.633
Erath, T. G., DiGennaro Reed, F. D., & Blackman, A. L. (2021). Training human service staff to implement behavioral skills training using a video-based intervention. Journal of Applied Behavior Analysis, 54(3), 1251–1264. https://doi.org/10.1002/jaba.827
Geiger, K. B., LeBlanc, L. A., Hubik, K., Jenkins, S. R., & Carr, J. E. (2018). Live training versus e-learning to teach implementation of listener response programs. Journal of Applied Behavior Analysis, 51(2), 220–235. https://doi.org/10.1002/jaba.444
Giannakakos, A. R., Vladescu, J. C., Kisamore, A. N., & Reeve, S. (2016). Using video modeling with voiceover instruction plus feedback to train staff to implement direct teaching procedures. Behavior Analysis in Practice, 9(2), 126–134. https://doi.org/10.1007/s40617-015-0097-5
Godish, D., Miltenberger, R., & Sanchez, S. (2017). Evaluation of video modeling for teaching abduction prevention skills to children with autism spectrum disorder. Advances in Neurodevelopmental Disorders, 1(3), 168–175. https://doi.org/10.1007/s41252-017-0026-4
Graff, R. B., & Karsten, A. M. (2012). Evaluation of a self-instruction package for conducting stimulus preference assessments. Journal of Applied Behavior Analysis, 45(1), 69–82. https://doi.org/10.1901/jaba.2012.45-69
Hansard, C., & Kazemi, E. (2018). Evaluation of video self-instruction for implementing paired-stimulus preference assessments. Journal of Applied Behavior Analysis, 51(3), 675–680. https://doi.org/10.1002/jaba.476
Holth, P. (2017). Multiple exemplar training: Some strengths and limitations. The Behavior Analyst, 40(1), 225–241. https://doi.org/10.1007/s40614-017-0083-z
Howard, V. J., & DiGennaro Reed, F. D. (2014). Training shelter volunteers to teach dog compliance. Journal of Applied Behavior Analysis, 47(2), 344–359. https://doi.org/10.1002/jaba.120
Lee, J. N. (2015). The effectiveness of point-of-view video modeling as a social skills intervention for children with autism spectrum disorders. Review Journal of Autism and Developmental Disorders, 2(4), 414–428. https://doi.org/10.1007/s40489-015-0061-x
Lipschultz, J. L., Vladescu, J. C., Reeve, K. F., Reeve, S. A., & Dipsey, C. R. (2015). Using video modeling with voiceover instruction to train staff to conduct stimulus preference assessments. Journal of Developmental and Physical Disabilities, 27(4), 505–532. https://doi.org/10.1007/s10882-015-9434-4
Mailey, C., Day-Watkins, J., Pallathra, A. A., Eckerman, D. A., Brodkin, E. S., & Connell, J. E. (2021). Using adaptive computer-based instruction to teach staff to implement a social skills intervention. Journal of Organizational Behavior Management, 41(1), 2–15. https://doi.org/10.1080/01608061.2020.1776807
Marano, K. E., Vladescu, J. C., Reeve, K. F., & DiGennaro Reed, F. D. (2020). Effect of conducting behavioral observations and ratings on staff implementation of a paired-stimulus preference assessment. Journal of Applied Behavior Analysis, 53(1), 296–304. https://doi.org/10.1002/jaba.584
Marano, K. E., Vladescu, J. C., Reeve, K. F., Sidener, T. M., & Cox, D. J. (2020). A review of the literature on staff training strategies that minimize trainer involvement. Behavioral Interventions, 35(4), 604–641. https://doi.org/10.1002/bin.1727
Mason, R. A., Davis, H. S., Boles, M. B., & Goodwyn, F. (2013). Efficacy of point-of-view video modeling: A meta-analysis. Remedial and Special Education, 34(6), 333–345. https://doi.org/10.1177/0741932513486298
Mery, J. N., Vladescu, J. C., Day-Watkins, J., Sidener, T. M., Reeve, K. J., & Schnell, L. K. (2022). Training medical students to teach safe infant sleep environments using pyramidal behavioral skills training. Journal of Applied Behavior Analysis. In press.
Mitteer, D. R., Greer, B. D., Fisher, W. W., & Cohrs, V. L. (2018). Teaching behavior technicians to create publication-quality, single-case design graphs in GraphPad Prism 7. Journal of Applied Behavior Analysis, 51(4), 998–1010. https://doi.org/10.1002/jaba.483
Nottingham, C. L., Vladescu, J. C., Giannakakos, A. R., Schnell, L. K., & Lipschultz, J. L. (2017). Using video modeling with voiceover instruction plus feedback to train implementation of stimulus preference assessments. Learning and Motivation, 58, 37–47. https://doi.org/10.1016/j.lmot.2017.01.008
Park, J., Bouck, E., & Duenas, A. (2018). The effect of video modeling and video prompting interventions on individuals with intellectual disability: A systematic literature review. Journal of Special Education Technology, 34(1), 3–16. https://doi.org/10.1177/0162643418780464
Quinn, M., Narozanick, T., Miltenberger, R., Greenberg, L., & Schenk, M. (2020). Evaluating video modeling and video modeling with video feedback to enhance the performance of competitive dancers. Behavioral Interventions, 35(1), 76–83. https://doi.org/10.1002/bin.1691
Romer, K., Vladescu, J. C., Marano, K. E., Reeve, S. A., Sidener, T. M., & Campanaro, A. M. (2021). The influence of observations and ratings on the implementation of discrete trial instruction. Journal of Applied Behavior Analysis, 54(4), 1639–1651. https://doi.org/10.1002/jaba.868
Shapiro, M., & Kazemi, E. (2017). A review of training strategies to teach individuals implementation of behavioral interventions. Journal of Organizational Behavior Management, 37(1), 32–62. https://doi.org/10.1080/01608061.2016.1267066
Spiegel, H. J., Kisamore, A. N., Vladescu, J. C., & Karsten, A. M. (2016). The effects of video modeling with voice-over instruction on parent implementation of guided compliance. Child & Family Behavior Therapy, 38(4), 299–317. https://doi.org/10.1080/07317107.2016.1238690
States, J., Detrich, R., & Keyworth, R. (2019). Active student responding (ASR) overview. The Wing Institute. https://www.winginstitute.org/instructional-delivery-student-respond
Stokes, T. F., & Baer, D. M. (1977). An implicit technology of generalization. Journal of Applied Behavior Analysis, 10(2), 349–367. https://doi.org/10.1901/jaba.1977.10-349
Twyman, J. S., & Heward, W. L. (2018). How to improve student learning in every classroom now. International Journal of Educational Research, 87, 78–90. https://doi.org/10.1016/j.ijer.2016.05.007
Tyner, B. C., & Fienup, D. M. (2015). A comparison of video modeling, text-based instruction, and no instruction for creating multiple baseline graphs in Microsoft Excel. Journal of Applied Behavior Analysis, 48(3), 701–706. https://doi.org/10.1002/jaba.223
Vladescu, J. C., Carroll, R. A., Paden, A., & Kodak, T. M. (2012). The effects of video modeling with voiceover instruction on accurate implementation of discrete-trial instruction. Journal of Applied Behavior Analysis, 45(2), 419–423. https://doi.org/10.1901/jaba.2012.45-419
Vladescu, J. C., Mery, J. N., Marano-Frezza, K., Breeman, S. L., Campanaro, A. M., & Naudé, G. P. (2022). Comparing video modeling and computer-based instruction to teach preference assessment implementation. Journal of Organizational Behavior Management, 42(1), 56–74. https://doi.org/10.1080/01608061.2021.1965940
Walker, S., & Sellers, T. (2021). Teaching appropriate feedback reception skills using computer-based instruction: A systematic replication. Journal of Organizational Behavior Management, 41(3), 236–254. https://doi.org/10.1080/01608061.2021.1903647
Warren, J. (2021). Why video is hottest growth hack right now. Later Blog, July 21. https://later.com/blog/video-on-social-media/
Williams, T. C., & Zahed, H. (1996). Computer-based training versus traditional lecture: Effect on learning and retention. Journal of Business and Psychology, 11(2), 297–310. https://doi.org/10.1007/BF02193865


CHAPTER 5

Creating graphs and visual data displays☆

Daniel R. Mitteer (a,b), Michael P. Kranak (c,d), Ashley M. Fuhrman (a,b), and Brian D. Greer (a,b,e)

(a) Severe Behavior Program, Children’s Specialized Hospital–Rutgers University Center for Autism Research, Education, and Services (CSH–RUCARES), Somerset, NJ, United States
(b) Department of Pediatrics, Rutgers Robert Wood Johnson Medical School, New Brunswick, NJ, United States
(c) Department of Human Development and Child Studies, Oakland University, Rochester, MI, United States
(d) Oakland University Center for Autism, Rochester, MI, United States
(e) Rutgers Brain Health Institute, Rutgers University, Piscataway, NJ, United States

Creating graphs and visual data displays

In his seminal book on visual displays, Cleveland (1994) stated that “Graphs allow us to explore data to see overall patterns and to see detailed behavior; no other approach can compete in revealing the structure of data so thoroughly” (p. 5). Indeed, it is difficult to overstate the importance of graphs and other visual data displays (e.g., data tables) in behavior analysis. A behavior-analytic data display, like an electronic graph, serves two functions: (a) as a data-management system to organize variables and (b) as a platform for evaluating the effects of independent variables on dependent variables. Graphs are the primary means by which behavior analysts inspect their data and adjust interventions; graphs permit us to be effective scientists and clinicians. As such, it is unsurprising to see entire chapters devoted to graphing in introductory textbooks (e.g., Cooper, Heron, & Heward, 2020), publications focused on teaching graphing skills to behavior analysts (e.g., Carr & Burkholder, 1998; Dixon et al., 2009), and task-list items on graphing skills (e.g., “Graph data to communicate relevant quantitative relations”) created by the Behavior Analyst Certification Board (2017).

☆ Ashley Fuhrman is now at Trumpet Behavioral Health. Grants 2R01HD079113 and 5R01HD093734 from the National Institute of Child Health and Human Development provided partial support for this work. The authors wish to thank Halle Norris for her assistance in editing the initial draft of this chapter.

Applied Behavior Analysis Advanced Guidebook https://doi.org/10.1016/B978-0-323-99594-8.00005-2

Copyright © 2023 Elsevier Inc. All rights reserved.


Behavior analysts can use many types of data displays to present information, so they should consider whether graphing is best suited to their clinical or research question, as tables or summary statistics may be preferable in some situations (American Psychological Association [APA], 2020; Tufte, 2001). In many cases, behavior analysts will find that graphs represent the most efficient and effective visual display for their needs. In this chapter, we will focus on using and creating relational graphs. These graphs account for 40% of published graphs in modern scientific literature (Tufte, 2001) and are among the most common in behavior analysis. Relational graphs link at least two variables (e.g., rate of behavior, reinforcement contingency) and allow the reader to assess potential causal relations between the variables (Tufte, 2001). Thus, they are ideal for analyzing the effects of behavior-analytic independent variables on target behavior, particularly when depicting data from single-case experimental designs (SCEDs). In this chapter, we first review the common types of relational graphs used in behavior analysis. Although these graphs can be used for non-SCED data, we will refer to the relational graphs in this chapter broadly as “SCED graphs” for brevity. Second, we describe the essential and quality features of SCED graphs. Third, we detail frequently used graphing software in behavior analysis. Finally, encompassing the preceding topics, we describe empirically supported trainings and published tutorials for how readers can use graphing software to generate SCED graphs that contain both essential and quality features.

Common graph types for SCEDs

It is important for behavior analysts to consider the level of analysis relevant to the research or clinical question at hand. As Cleveland (1994) wrote, “Contained within the data of any investigation is information that can yield conclusions to questions not even originally asked” (p. 8). The type of graph directly impacts visual analysis and the interpretation of results. Thus, behavior analysts should understand the relative strengths and weaknesses of different graph types to ensure they choose an appropriate format to best inform decisions about ongoing assessment and treatment. Although there are numerous types and combinations of SCED graphs, we will review the types used most by behavior analysts (i.e., line graphs, bar graphs, and cumulative records). We encourage readers interested in other valuable, albeit less commonly used, graph types (e.g., scatterplots, standard celeration charts, heat maps) to consult other resources that provide a more thorough




overview of those graphing conventions (e.g., Cooper et al., 2020; Kazdin, 2021; Mitteer & Greer, 2022). Additionally, we recommend that, irrespective of graph type, behavior analysts determine which level and unit of analysis is most fruitful for identifying patterns of responding, such as smoothing data to detect cyclical responding (Retzlaff et al., 2022) and depicting exponential functions as linear relations via logarithmic or semilogarithmic transformations or scalings (see Shahan & Greer, 2021, as an example).

Line graphs

The SCED graph type used most by behavior analysts is the line graph (Lane & Gast, 2014), which displays quantitative values of a dependent variable over a specified unit of analysis (e.g., time, session). For example, a distinct data point may be used for each session, and a line connects successive data points. Practitioners can use line graphs to display various measurement types, including frequency, duration, or latency (see Cooper et al., 2020, or Kazdin, 2021, for an overview of measurement). As shown in Fig. 1 (top panel), a behavior analyst might display the rate of destructive behavior for each 5-min session across baseline and intervention phases. Line graphs are beneficial because each unit of time is represented by a unique data point such that practitioners can easily analyze changes in level, trend, and variability across time and various experimental conditions. This detailed level of analysis allows behavior analysts to examine intervention and assessment results quickly and critically. The ability to efficiently analyze session-by-session data and make informed decisions can directly impact intervention success, which may be one reason line graphs are so widely used. Although line graphs are useful for most behavior-analytic purposes, practitioners may consider alternative graph types. For example, a behavior analyst might wish to display aggregate data as an outcome measure on intervention success for a caregiver or lay audience.

Bar graphs

A bar graph allows behavior analysts to summarize data from different experimental conditions without distinct data points representing different points in time. Bar graphs can be particularly useful if a practitioner needs to summarize a large amount of data efficiently or compare average performance across different conditions or groups. For example, behavior analysts may use bar graphs to display the results of a paired-stimulus preference assessment quantified as the percentage of opportunities a client chose each of multiple stimuli. The second panel of Fig. 1 displays the same data


Fig. 1  Identical data displayed across three common graph types. Note. This layout displays a line graph (top panel), bar graph (middle panel), and cumulative record (bottom panel) from the same data set. Please note the different axis ranges and units.




as the line graph in the top panel of the figure but with a bar graph summarizing the average performance across baseline and intervention phases. As seen in Fig. 1, a bar graph is a useful and visually appealing way to show average differences between conditions. Although a major limitation of bar graphs is the inability to analyze trends and variability in the data, they can be useful in several situations. Notably, bar graphs may be easier for stakeholders to understand and interpret. Additionally, they can be helpful as supplements to line graphs. As an example, Fisher, Fuhrman, Greer, Mitteer, and Piazza (2020) used a bar graph to show destructive behavior across participants in addition to the individual line graphs for each participant. The bar graph in this example helped to highlight the difference between resurgence in two different experimental conditions in an efficient and easily consumable manner. A related limitation of bar graphs is the lack of information on within-group variability, but this can be offset by adding error bars or by superimposing the individual data points aggregated by the bar (see Hagopian, 2020, for an example of the latter strategy).
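To make the summary step behind such a bar graph concrete, here is a minimal sketch (ours, not the authors' procedure; the session data are hypothetical) that computes each phase's mean for the bar height and a sample standard deviation for the error bar:

```python
# Hypothetical responses per minute across sessions in two phases.
baseline = [3.2, 2.8, 3.6, 3.0]
intervention = [0.8, 0.4, 0.6, 0.2]

def mean(values):
    return sum(values) / len(values)

def sample_sd(values):
    """Sample standard deviation (n - 1 denominator)."""
    m = mean(values)
    return (sum((v - m) ** 2 for v in values) / (len(values) - 1)) ** 0.5

# Bar heights summarize each phase; error bars convey within-phase variability
# that the bars alone would hide.
bar_heights = [mean(baseline), mean(intervention)]
error_bars = [sample_sd(baseline), sample_sd(intervention)]
```

Superimposing the raw session values over each bar, as in the strategy credited to Hagopian (2020), conveys the same variability without summary statistics.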

Cumulative records

Another type of SCED graph common in behavior analysis is the cumulative record, which displays the running sum of a dependent variable across units of time. Of critical significance when examining a cumulative record is the slope of the line depicting responding. Higher response rates produce steeper slopes in a cumulative record, whereas lower response rates produce shallower slopes. No responding produces a slope of zero (i.e., a flat line). The third panel of Fig. 1 displays the same data as the line and bar graphs in the top two panels but as a cumulative record. The cumulative record can help behavior analysts better detect subtle changes in response rate that may be obscured by line or bar graphs. For example, a line graph may show that a client engaged in destructive behavior twice per min during a 5-min session. However, the corresponding cumulative record may reveal that destructive behavior always occurred in a burst during the first minute or two of the session with no responding thereafter. Access to this finer level of analysis can help practitioners develop more effective protocol modifications for their clients.
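The burst example above can be sketched in a few lines of plain Python (the per-minute counts are hypothetical): the running total forms the cumulative record, while the session-level rate a line graph would plot hides the burst entirely.

```python
# Hypothetical per-minute counts of destructive behavior in one 5-min session.
per_minute = [6, 4, 0, 0, 0]  # a burst early in the session, nothing after

# Cumulative record: running total of responses at the end of each minute.
cumulative = []
total = 0
for count in per_minute:
    total += count
    cumulative.append(total)

print(cumulative)  # [6, 10, 10, 10, 10] -- flat segment = slope of zero

# The single session-level rate obscures when the responding occurred.
overall_rate = sum(per_minute) / len(per_minute)
print(overall_rate)  # 2.0 responses per minute
```

Plotting `cumulative` against minutes yields the steep-then-flat path described above, whereas the line graph would show only the 2-per-min aggregate.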

Combining graph types

In some scenarios, it may be beneficial for behavior analysts to use multiple SCED graph types for the same data or to combine two or more graph types within a single figure. For example, a practitioner may find it useful to create


a cumulative record as a supplement to a line graph. Fig. 1 reveals how information that we cannot detect in the line graph becomes identifiable when we present the data as a cumulative record. Without this supplemental graph, it may be difficult to identify important behavior–environment relations obscured by less-precise levels of analysis. The information that supplemental graph types provide may benefit the refinement of assessment and treatment protocols. Combining graph types can be beneficial for behavior analysts when it is necessary to analyze frequency and duration data concurrently. Interpreting the results of a paired-stimulus preference assessment or competing stimulus assessment is one example. Hagopian et al. (2020) illustrated a combination of graph types by displaying the percentage of each session with engagement and self-restraint via a bar graph and self-injurious behavior per min as a line graph overlaid on the bar graph. Such an approach can assist readers in visually analyzing multiple dependent variables simultaneously, which decreases the amount of time required to make important clinical decisions. As Tufte (2001) noted, “Time-series plots can be moved toward causal explanation by smuggling additional variables into the graphic design” (p. 38). The purpose of combining graphs should be to enhance the viewer’s understanding of the phenomena by displaying moderating variables or supplemental measures with the primary variables.

Essential features of SCED graphs

Whereas some graph components may be added for purely aesthetic reasons (e.g., using color instead of gray scale), all SCED graphs must contain certain features to permit the behavior analyst to make effective decisions regarding their data or for readers to detect suggested relations between variables. Although there are entire book series describing graphing features in detail (e.g., Tufte, 1990, 2001, 2006), we will focus on the features that behavior-analytic researchers (e.g., Kubina et al., 2021; Mitteer, Greer, Fisher, & Cohrs, 2018) have synthesized from the graphing literature and noted to be critical elements of SCED graphs. Fig. 2 provides a side-by-side comparison of a graph that includes the essential and quality features described in this chapter on the left-hand side and a graph omitting many of those desirable features on the right-hand side. To assist behavior analysts in identifying missing essential or quality components when generating their own graphs, we have included a graphing checklist in Table 1. We recommend that readers reference this checklist to improve the quality of their visual displays.




Fig. 2  A comparison of graphs that include or fail to include essential and quality features. Note. The graph on the right is missing the essential and quality features displayed in the graph on the left. Each panel in the right-side graph includes (a) a Session 0, (b) data points directly on the x-axis, (c) chartjunk like gridlines and excessive tick marks, and (d) unclear data paths. This graph lacks descriptive labels, a legend, and phase-change lines. The first two panels use an undesirable aspect ratio. The first panel includes a data point that exceeds 100%, indicating a data-entry error, and the second panel uses a truncated y-axis that could give the appearance of clinically significant increases in on-task behavior when the level is still moderate.

Accurate data entry and sourcing

A prerequisite to any graph is to ensure that the data being sourced to the graph represent the true values of the obtained data (Kranak & Mitteer, 2022; Mitteer, Greer, Randall, & Briggs, 2020). Whether reviewing raw data and then graphing them by hand or typing data into a spreadsheet for electronic graphing, the behavior analyst should always verify that the data are entered correctly. One issue that behavior analysts may encounter is an over-reliance on spreadsheet formulas, the outputs of which are then displayed graphically; for example, a rate measure that is computed by dividing the sum of various frequency columns by the session duration. Although such a formula produces an accurate computation most of the time, it may be easy to overlook an error, such as a formula that links to the wrong cell in a spreadsheet. These errors can lead to inaccurate interpretations of client performance and to inopportune changes to clinical programming, possibly progressing to treatment when the correct baseline data do not support such a change. Another related error is failing to source all relevant data to the graph. With Microsoft Excel, users select a range of data (e.g., Sessions 1–50) to depict on the graph. However, when the sessions continue beyond this range, Excel will not plot the data because they are outside the specified range, potentially leading to similar errors in interpretation. Behavior analysts should become acquainted with the nuances of their graphing programs to identify potential sources of errors and plan for them accordingly. Scheduled spot-checking of data spreadsheets during supervision meetings is one proactive strategy. In addition to manual data validation, most graphing programs described later in this chapter offer options for “cleaning” data by detecting and resolving data abnormalities such as outliers or duplicate values. There are also many freely available resources for this purpose, such as GraphPad’s outlier calculator (https://www.graphpad.com/quickcalcs/grubbs1).

Table 1  Graphing checklist for creating high-quality figures.
For each question below, respond Y, N, or N/A to examine whether essential and quality features are present.

1. Does the graph type and choice of measure align with the phenomenon you are intending to analyze?
2. Did you verify the data you entered are correct (e.g., visually scanning the values in the data table, conducting an outlier analysis)?
3. If you are graphing information computed using formulas or macros, did you verify that they computed correctly (e.g., visually scanning the calculated values in the data table, conducting an outlier analysis)?
4. Does your graph include all of the data that you hope to display?
5. Are your variables sourced to the correct axes (e.g., responding on the y-axis, sessions on the x-axis)?
6. Do the ranges of your axes make sense for the range of possible/obtained data (e.g., 0%–100% for percentages)?
7. Did you verify that your axis ranges do not inadvertently misconstrue your data (e.g., magnifying a small effect with a constricted range)?
8. Do your axis ranges match across linked graphs?
9. Do your axes have reasonably spaced tick marks with tick-mark labels?
10. Are your axes clearly labeled (e.g., responding per min, sessions)?
11. Do all of your data paths have clearly distinguishable features (e.g., white circles, black squares), with successive data points of a given condition connected with a line?
12. Does the appearance of your data points match that of established standards for a literature base (e.g., typical symbols used for the functional analysis)?
13. Did you include either an arrow or box legend to label the data paths?
14. Did you indicate phase changes with a distinguishable line and in the correct location?
15. Did you label phases with specific labels (e.g., “extinction” rather than “intervention”)?
16. Did you delete nonessential ink (e.g., extra tick marks, gridlines) from the graph?
17. Did you remove the “0” point from the x-axis range?
18. Did you float zero values that would normally fall on the x-axis?
19. Are the sizes of fonts and data points legible at 100% of the graph size?
20. Does the aspect ratio facilitate visual inspection (e.g., longer x-axis, shorter y-axis)?
21. Are graph panels aligned and labeled?
22. Did you export the graph with sufficient resolution that others can analyze the data easily?
23. If publishing, did you include an appropriate figure caption underneath the graph?
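As one concrete illustration of this verification step (a sketch of ours, not the authors' procedure, and a simple range-and-z-score screen rather than the formal Grubbs test the GraphPad calculator performs), a few lines of Python can flag suspect entries in a percentage data series before graphing:

```python
def find_entry_errors(values, low=0.0, high=100.0, z_threshold=3.0):
    """Return indices of suspect entries in a percentage data series.

    Flags values outside the plausible measurement range and, as a
    secondary screen, values unusually far from the series mean.
    """
    # Range check: percentages cannot fall outside [low, high].
    errors = [i for i, v in enumerate(values) if not (low <= v <= high)]

    # Z-score check: flag values more than z_threshold SDs from the mean.
    m = sum(values) / len(values)
    sd = (sum((v - m) ** 2 for v in values) / (len(values) - 1)) ** 0.5
    if sd > 0:
        errors += [i for i, v in enumerate(values)
                   if abs(v - m) / sd > z_threshold and i not in errors]
    return sorted(set(errors))

# A 140% entry signals a data-entry or formula error (cf. Fig. 2).
print(find_entry_errors([42, 38, 45, 140, 41]))  # [3]
```

A flagged index is only a prompt to re-check the raw datasheet; unusual but genuine values should, of course, stay in the graph.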

Axes and labels

SCED graphs tend to have two axes: (a) the x-axis, or abscissa, and (b) the y-axis, or ordinate. Commonly, SCED graphs are time-series graphs and, therefore, display the passage of time along the x-axis (e.g., sessions, units of time). However, the x-axis could depict the level of the independent variable (e.g., fixed-ratio versus variable-ratio reinforcement schedules) or groups of interest (e.g., children with autism spectrum disorder; children


with attention-deficit hyperactivity disorder). The dependent variable in applied behavior analysis is most likely an observable, measurable, and socially important behavior (Baer, Wolf, & Risley, 1968). For example, the y-axis might depict the rate of destructive behavior expressed as responses per min or the skills exhibited correctly in a discrete-trial-teaching arrangement as a percentage. Regardless of the specific variables, each axis should include (a) an axis title, (b) a reasonable number of tick marks between the minimum and maximum axis range, and (c) labels for those marks such that readers can easily understand the x- and y-axis values for a given data point. Behavior analysts must be mindful of how these elements could control the responding of viewers, such as a fellow clinician who oversees the behavior analyst’s case in their absence. The axis range and placement of tick marks should make sense for the data they represent. Thus, if the maximum latency for a response to occur is 300 s, the range of the y-axis for the latency measure ought not to exceed 300 s. If measuring a rate of behavior in a 5-min session yields values that increase by 0.2 responses per min (e.g., 0.2, 0.4, 0.6, 0.8), it will likely be more helpful to viewers of the graph to have tick intervals match those naturally occurring increments as opposed to implausible ones (e.g., 0.3, 0.75). Though the above examples may not disrupt visual inspection substantially, other decisions regarding graph construction can have considerable influence on the accuracy of visual analysis. In one study by Dart and Radley (2017), the experimenters depicted a percentage-of-intervals measure in an ABAB reversal-design graph. The authors constructed versions of the same graphs in which only the y-axis range changed (i.e., a full range of 0%–100%; constricted ranges of 0%–80%, 0%–60%, 0%–40%). Thus, a clinically negligible decrease in the percentage of intervals with destructive behavior (e.g., 40% to 30%) at full range might look like a robust improvement in the levels of destructive behavior when viewed with a constricted y-axis range (e.g., 0%–40%). To inspect the graph variations, Dart and Radley recruited 32 experts who had at least five years of experience judging single-case data and at least one peer-reviewed publication incorporating visual analysis. Overall, Dart and Radley found that experts made more Type I errors (false positives) and Type II errors (false negatives) as the y-axis became more restricted.
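The distortion Dart and Radley documented follows directly from arithmetic: the same change in level occupies a larger fraction of the plotting area as the axis range shrinks. A small sketch (our illustration; the numbers mirror the 40%-to-30% example above):

```python
def visual_span(change, y_min, y_max):
    """Fraction of the y-axis height occupied by a given change in level."""
    return change / (y_max - y_min)

# A 10-point drop (40% to 30% of intervals) under progressively
# constricted y-axis ranges, as in the Dart and Radley (2017) comparison.
drop = 40 - 30
for y_max in (100, 80, 60, 40):
    print(y_max, visual_span(drop, 0, y_max))
```

At the 0%–40% range the drop spans a quarter of the axis height, two and a half times its visual size at the full 0%–100% range, even though the data are identical.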

Data representation

Whether behavior analysts represent data with a series of data points in a line graph or a set of columns in a bar graph, they should portray data clearly enough to allow for accurate visual inspection, such as using easily




distinguishable features when presenting multiple experimental conditions or variables simultaneously (Mitteer et al., 2018). Depicting data with a salient contrast from the background (e.g., black circles or black columns on a white background) will undoubtedly be easier for a reader to inspect than light-gray circles or white columns with black borders on a white background. Indeed, certain journal author guidelines suggest the use of black circles when displaying only one measure on a graph (Journal of Applied Behavior Analysis, 2022). Data for multiple conditions will be easier to inspect when the symbols or data paths bear less resemblance to one another, as with black circles for the test condition and white triangles for the respective control condition. Additionally, behavior analysts should consider the data to which they want to draw the reader’s attention. Our clinical-research group tends to use visually prominent options like black data points for severe destructive behavior (e.g., aggression, self-injurious behavior) and more subdued options like white data points for secondary, albeit important, behavior (e.g., communication responses). In this way, for both our clinical and research graphs, even minor increases in behavior that could result in tissue damage or other significant side effects will be readily evident. Behavior analysts might also assign data-point features for logical reasons. For example, in a paper on teaching expressive identification of images with or without background stimuli, Mitteer, Luczynski, McKeown, and Cohrs (2020) used white circles to depict identification of images without backgrounds (i.e., a white backdrop) and black circles for responding to images with backgrounds. That is, the fill corresponded to the level of the independent variable.
Another consideration when selecting data points is whether there are graphing conventions that should be followed to easily convey data to readers familiar with a particular literature base. Since Iwata, Dorsey, Slifer, Bauman, and Richman’s (1994) seminal work on functional analysis, certain symbols have become associated with specific conditions: (a) white circles for the test for automatic reinforcement, (b) white squares for the attention test condition, (c) white triangles for the escape test condition, (d) black squares for the tangible test condition, and (e) black circles for the toy-play or control condition. Using standardized data points can facilitate data interpretation.
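For quick reference, the conventional symbol assignments just listed can be encoded as a simple lookup table (the dictionary keys are our shorthand for the condition names, not a published standard):

```python
# Conventional functional-analysis plotting symbols described in the text.
FA_SYMBOLS = {
    "automatic": "white circle",    # test for automatic reinforcement
    "attention": "white square",    # attention test condition
    "escape": "white triangle",     # escape test condition
    "tangible": "black square",     # tangible test condition
    "control": "black circle",      # toy-play or control condition
}

print(FA_SYMBOLS["escape"])  # white triangle
```

A table like this can serve as a style guide for a clinic's graph templates so that every functional-analysis graph produced in-house follows the same conventions.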

Legend

An arrow legend or box legend quickly informs the reader of what the data are when inspecting a graph. A separate legend is not needed when depicting a single dependent variable; instead, the y-axis label can be used.


It has been suggested to use an arrow legend with descriptive text pointing to data paths when labeling two or fewer data paths in a graph unless the data overlap considerably (JABA, 2022). If representing three or more data paths, or when arrow legends may deter efficient analysis, one can use a box legend formed as a rectangle containing depictions of data symbols and their respective labels. Box legends should not obscure any data depicted in the graph. In our research and practice, we tend to arrange the order of the legend entries by the order in which a reader would see the data symbols when scanning the x-axis horizontally.

Phase-change lines and labels

When a behavior analyst introduces new independent variables or procedural adjustments in a sequential design, the change must be denoted correctly in order to determine effects on the dependent variable. For example, if a registered behavior technician (RBT) misplaces the phase-change line between baseline and the intervention phase such that the latter phase incorporates baseline data, the behavior analyst inspecting the graph might assume that the independent variable had a delayed effect. If the RBT places the phase-change line after the intervention occurred, including intervention data in the baseline phase, the behavior analyst may conclude that threats to internal validity (e.g., history) may have produced the therapeutic change prior to the programmed independent variable. Once phase-change lines are placed correctly, the behavior analyst must consider solid, dashed, or dotted formats. In general, one should adjust the line type and weighting according to their relative importance to the dependent variable, such as solid lines for contingency changes (e.g., differential reinforcement) and dashed or dotted lines for minor procedural changes (e.g., schedule thinning from a fixed-ratio 1 to a variable-ratio 2). Finally, phase labels should be sufficiently informative that readers do not need to consult the procedural details to understand the experimental condition. Therefore, writing “differential reinforcement” rather than a generic “intervention” label above the phase is preferred.
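The placement errors described above can be avoided by computing the line's x-coordinate from the session data rather than eyeballing it. A minimal sketch (ours; the session numbers are hypothetical): drawing the line at the midpoint between the last baseline session and the first intervention session keeps every data point unambiguously inside its own phase.

```python
# Hypothetical session numbers for two consecutive phases.
baseline_sessions = [1, 2, 3, 4, 5]
intervention_sessions = [6, 7, 8, 9]

# Midpoint between the last baseline session and the first intervention
# session; neither phase's data fall on the wrong side of the line.
phase_line_x = (baseline_sessions[-1] + intervention_sessions[0]) / 2
print(phase_line_x)  # 5.5
```

Deriving the coordinate from the same records that populate the graph removes the chance of an RBT dragging the line one session too early or too late.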

Figure caption

Although not necessary for a working or clinical graph, behavior analysts disseminating graphs in manuscripts must adhere to the relevant publication standards of the field (APA, 2020) and journal (e.g., JABA, 2022), which means including a figure caption to accompany the graph. Figure-caption text should appear beneath the graph and briefly explain the figure, such as




describing the main purpose of the graph and elements not easily gleaned from the graph (e.g., defining acronyms used as phase labels in the graph).

Quality features of SCED graphs

Above, we described essential features that SCED graphs must have for behavior analysts to interpret graphs accurately and be able to detect relations between variables. In this section, we provide an overview of features that, according to authors of graphing papers and books, SCED graphs arguably should have to enhance the desirability of the visual displays. Said another way, behavior analysts can create graphs sufficient for clinical practice or answering research questions, but there are several graphical aspects that might increase how aesthetically pleasing the figures are for readers. Indeed, authors have sometimes distinguished between “analysis-altering” and “aesthetic-altering” graph components (e.g., Peltier, Muharib, Haas, & Dowdy, 2022). Nevertheless, these features also might facilitate improved detection of relations between variables.

Maximizing data-ink ratio and reducing chartjunk

In his influential book on graphic display, Tufte (2001) described the data-ink ratio, which is the proportion of the graph’s ink devoted to nonredundant display of data information. He goes on to summarize adages by architects Ludwig Mies van der Rohe and Robert Venturi: “For non-data-ink, less is more. For data-ink, less is a bore” (p. 175). From Tufte’s perspective, the superfluous ink in a graph, such as redundant data paths or excessive tick marks, is chartjunk. Chartjunk is a term describing unnecessary graphical features that distract from the importance of the relations depicted in the graph and do not present novel or meaningful information. Instead, the focus should be on maximizing the data-ink ratio and ensuring that the primary variables are salient and effectively consumed by the reader. Unfortunately, many graphing programs enable chartjunk by default. As an illustration, Microsoft Excel (described later in this chapter) tends to overlay gridlines on line graphs and also makes it challenging to remove select ink from graphs, such as needing to insert a white rectangle to cover up chartjunk. Another commonly generated element that must be removed from graphs is the 0 from the range of the x-axis (JABA, 2022; Mitteer et al., 2018), as values like Day 0 or Session 0 are not possible. Further, most graphing programs tend to connect data points across phase-change lines


without user adjustments, which authors have noted as undesirable (Carr & Burkholder, 1998; Kubina et al., 2021; Mitteer et al., 2018). Ultimately, the behavior analyst should review the function of each graph element displayed in their figure and ask, “Is this element unnecessary, or will it distract readers from interpreting the main findings?” If the answer is “yes,” then the analyst should attempt to remove such chartjunk.

Formatting considerations

Behavior analysts should consider whether subtleties of their graph formatting promote or deter efficient or accurate visual analysis (Cleveland, 1994). For example, data points that fall directly upon the x-axis could make detection of those points difficult. Thus, a common recommendation is to float data points representing a zero value slightly above the x-axis (Carr & Burkholder, 1998; JABA, 2022; Mitteer et al., 2018). Additionally, presentation of multiple graphs within a figure invites cross-graph comparisons. These graphs should be aligned such that readers can compare data points at the same observation period (Kubina et al., 2021; Mitteer et al., 2018), particularly when experimental control depends on accurate inspection of responding at those time points (e.g., verification logic in multiple-baseline designs; Kazdin, 2021). Behavior analysts should select appropriate axis ranges when comparing data across graphs or graph panels (Cleveland, 1994). In many situations, matching x-axes and y-axes across graphs is best practice. Comparable axes allow the visual inspector to better evaluate responding across graph panels. However, if matching axis ranges precludes inspectors from being able to easily identify an important relation in a particular data set, it may be best to forgo identical axes. In this situation, one should clearly denote marked differences in axis ranges to readers by writing “note that y-axis ranges differ across graph panels” in the figure caption. The size and proportion of graphical elements should maximize visual clarity. APA (2020) guidelines suggest sans serif fonts (e.g., Arial) between 8 and 14 points in size and title case for most labels (i.e., capitalizing the first letter of major words but using lowercase for minor words, such as “Percentage of Opportunities”). In general, our research group increases the font size of labels as they extend further out from the center of the figure (e.g., size 10 for legend entries, size 12 for phase labels, size 14 for tick labels and axis labels). We use 0.5-pt or 1-pt line weighting for most lines but may increase that weight (e.g., 2 pt) to clearly distinguish something of interest to the reader (e.g., a 2-pt line depicting mean responding across a phase or condition).



Graphing and visual data displays


Ultimately, behavior analysts should determine the size and line thickness of each graphical element based on its relative importance within the figure (APA, 2020; Mitteer et al., 2018).
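Several of these formatting recommendations, such as floating zero values above the x-axis, aligning panels for cross-graph comparison, and matching axis ranges, can be applied programmatically. The following Python/matplotlib sketch uses hypothetical data and illustrative settings, not values the chapter prescribes:

```python
import matplotlib
matplotlib.use("Agg")  # render without a display
import matplotlib.pyplot as plt

sessions = [1, 2, 3, 4, 5]
panel_a = [0, 0, 3, 5, 6]   # hypothetical data; zeros would sit on the x-axis
panel_b = [0, 1, 2, 8, 9]

# sharex aligns the panels so the same session number lines up vertically
fig, (ax1, ax2) = plt.subplots(2, 1, sharex=True)
for ax, data in ((ax1, panel_a), (ax2, panel_b)):
    ax.plot(sessions, data, "ko-")
    ax.set_ylim(-0.5, 10)  # a bottom limit below zero floats zero points above the axis
ax2.set_xlabel("Sessions", fontsize=14)  # larger font for labels farther from the center
```

Because both panels are drawn in one loop with identical limits, the matched-axis recommendation holds automatically whenever new data are added.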

Aspect ratio

A final component of the graph to consider is its aspect ratio, the relative length of the y-axis compared to that of the x-axis (Cleveland, 1994). There are aesthetic and analytical rationales for selecting smaller y-to-x-axis ratios so that graphs are wider than they are tall. Aesthetically, these aspect ratios allow one to fit several graph panels on a printed page, which can be helpful for displaying a multiple-baseline design. When many sessions or conditions are displayed along the x-axis, a wider graph may reduce clutter by increasing the amount of space between data points. Indeed, Ledford, Barton, Severini, Zimmerman, and Pokorski (2019) surveyed experts on the editorial boards of behavior-analytic journals about their preferences for aspect ratios and found that the experts generally preferred shorter y-axes relative to x-axes, particularly when many sessions were represented along the x-axis. From an analytic perspective, proper aspect ratios can bolster the accuracy of visual inspection (Cleveland, 1994; Peltier et al., 2022; Radley, Dart, & Wright, 2018). In one study, Radley et al. (2018) manipulated the aspect ratios of sample graphs and had experts in visual analysis indicate whether they detected a functional relation between the independent and dependent variables displayed in the graphs. The authors found that graphs with larger y-to-x-axis ratios increased rates of Type I errors among the experienced visual inspectors. For more information on research related to aspect ratio and its impact on visual analysis, we refer interested readers to the "Banking to 45°" sections of Cleveland's (1994) text.
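For readers who graph programmatically, a wider-than-tall panel amounts to choosing a figure whose width exceeds its height. A minimal matplotlib sketch (the 8 × 3 in. size is an arbitrary illustration, not a recommended standard):

```python
import matplotlib
matplotlib.use("Agg")  # render without a display
import matplotlib.pyplot as plt

# figsize is (width, height) in inches; 8 x 3 yields a small y-to-x ratio
fig, ax = plt.subplots(figsize=(8, 3))
sessions = list(range(1, 31))
rates = [min(s, 10) for s in sessions]  # hypothetical data with many sessions
ax.plot(sessions, rates, "ko-")
width, height = fig.get_size_inches()
```

The wide panel leaves more horizontal space between the 30 data points than a square figure of the same area would.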

Graphing software

Thus far, we have discussed common graph types and their features. The remaining portions of the chapter focus on how to create graphs with these essential and quality elements. To create such graphs electronically, one must first have software that is amenable to visually depicting SCEDs. Kranak and Mitteer (2022) identified four graphing programs with published support for their relevance in behavior-analytic research and practice: Microsoft Excel, GraphPad Prism, Systat SigmaPlot, and Google Sheets.


Applied behavior analysis advanced guidebook

Microsoft Excel

Microsoft Excel (hereafter Excel) is the most widely used graphing program in behavior-analytic research and practice (Haddock & Iwata, 2015) and is available as part of the Microsoft Office 365 Suite (Microsoft, 2022). The Microsoft Office 365 Suite has over 258 million paid user subscriptions overall, and over 750 million individuals use Excel daily (Lacey & Ashby, 2018; Microsoft, 2020). In other words, many companies and universities, including behavior-analytic service providers and training programs, provide individuals with subscriptions to Microsoft and, subsequently, access to Excel. Furthermore, individual subscriptions that include Excel currently cost as little as $70/year, which could presumably be reimbursed by one's employer. Taken together, these factors make Excel one of the two most accessible and affordable graphing programs. In addition to its accessibility and affordability, Excel is highly versatile. Indeed, Excel provides users with the capability to make numerous types of individual or combination graphs. Users can also edit cosmetic features of graphs, such as the size and color of data paths, markers, and fonts. These features matter to a behavior analyst who must share graphs and communicate with a variety of stakeholders (e.g., parents and school-based personnel). Furthermore, Excel can conduct in-cell calculations and analyses (e.g., calculating an average rate of behavior across several sessions), as well as perform more sophisticated macros or automate certain features of graph creation (e.g., automated phase-change lines; Deochand, 2017; Lacey & Ashby, 2018). However, in Excel it can be difficult to offset the x- and y-axes, split an axis, or create more complicated graphs without lengthy or cumbersome workarounds. Also, Excel may not produce publication-quality graphs as efficiently as other programs (see Mitteer et al., 2018).
Nevertheless, Excel is a viable option for the day-to-day graphing needs of practicing behavior analysts.

Google Sheets

Google Sheets (hereafter Sheets) shares many of the same qualities as Excel, is accessible and widely available, and is free through Google Workspace. This means that any individual can create graphs using Sheets without having to purchase a license. Like Excel, Sheets is highly versatile in that one can customize the cosmetic features of a graph with ease, as well as create the different types of graphs common in behavior analysis. Several individuals can also collaborate on the same graph at the same time within Sheets. It is unknown how many behavior analysts currently use Sheets for their graphing needs, as researchers have only recently started emphasizing and providing tutorials for Sheets (Blair & Mahoney, 2022). Nevertheless, Sheets could be an option for those who do not have access to another graphing program. Sheets does have a few limitations. First, Google Workspace (and therefore Sheets) is subject to regulations such as the Health Insurance Portability and Accountability Act of 1996 (HIPAA) and the Family Educational Rights and Privacy Act of 1974 (FERPA): any graphs created in Sheets are stored on Google's cloud servers, which do not offer easy encryption options. Of course, there may be instances in which HIPAA and FERPA do not apply (e.g., nonhuman data, teaching exercises in a graduate program), but privacy concerns are particularly germane to Sheets. Second, Sheets shares many of the same disadvantages as Excel, and no studies have evaluated the quality of graphs produced in Sheets.

GraphPad Prism and Systat SigmaPlot

GraphPad Prism (hereafter Prism) and Systat SigmaPlot (hereafter SigmaPlot) are two other graphing programs used by behavior analysts. Creating publication-quality graphs with either Prism or SigmaPlot is relatively easy compared to other programs (e.g., Excel). For example, Prism has features that align multiple graph panels automatically. Similarly, individuals using Prism or SigmaPlot can easily manipulate axes in ways that would take multiple workarounds in other programs, notably offsetting the x- and y-axes and splitting an axis. In addition to their cosmetic advantages, Prism and SigmaPlot enable both the creation of more sophisticated or complex graphs and graph features (Mitteer & Greer, 2022) and the computation of statistical analyses (e.g., ANOVA, regression). Moreover, Prism and SigmaPlot detect the type of data entered and analyses conducted and suggest graph types that might best depict the results. Some of these advanced features may be more relevant to behavior-analytic researchers than to practitioners, though practitioners would benefit from these programs as well. Cost and complexity are the two main drawbacks of these programs. Individual licenses for Prism and SigmaPlot are $185/year and $599/lifetime, respectively. These costs could be major barriers to individuals who do not have funding available through their organization or university for graphing programs. However, the cost might be worthwhile for individuals who have a continuous need for high-quality SCED graphs or a sophisticated statistical-analysis program. Unlike Excel and Sheets, Prism and SigmaPlot can be less user-friendly when an individual first begins using them (Mitteer et al., 2018). Individuals who wish to use Prism or SigmaPlot will likely need to learn additional skills, such as entering data into separate columns following a phase change so that graphs populate correctly. Fortunately, researchers have provided several video models and task analyses for using these programs (Cihon, Ferguson, Milne, & Leaf, 2021; Mitteer et al., 2018).

Alternative programs for graphing

There are a few other programs, more broadly considered "data-analysis programs," that can also be used to create SCED graphs and analyze behavior-analytic data. For example, several behavior-analytic researchers have used either R or Python to analyze their data and often to create corresponding graphs (e.g., Epstein, Mejia, & Robertson, 2017; Falligant, Cero, Kranak, & Kurtz, 2020; Kranak, Falligant, Bradtke, Hausman, & Rooker, 2020; Lanovaz & Hranchuk, 2021). Like Prism and SigmaPlot, these programs permit highly specialized data and statistical analysis as well as pristine graph creation. However, they require extensive training and expertise to use and are likely more relevant to research than to clinical practice. More closely related to graphing specifically, several programs enable automatic graphing or provide templates for various types of SCED graphs. Many of these programs may be highly relevant to student training or clinical practice. TherapyScience (therapy-science.com) is a program that automatically graphs data and can conduct some of the advanced analyses described above. Additionally, there are many data-collection and graphing programs that are seemingly widely used among companies; examples include CentralReach, Theralytics, Noteable, and ABAdesk. Consider also the Formative Grapher described by Cole and Witts (2015), with which individuals can simply input their data into a template that then creates a corresponding graph with many of the desirable features described in this chapter. This tool could be used by researchers and clinicians alike.
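To make the Python option concrete, the following sketch draws a simple A-B-A-B reversal graph with data paths disconnected at phase changes, dashed phase-change lines, and phase labels. The data and layout are entirely hypothetical, and matplotlib is our choice for illustration, not a package the cited authors prescribe:

```python
import matplotlib
matplotlib.use("Agg")  # render without a display
import matplotlib.pyplot as plt

sessions = list(range(1, 13))
rates = [8, 9, 7, 2, 1, 0, 7, 8, 9, 1, 0, 0]  # hypothetical responses per minute
phase_changes = [3.5, 6.5, 9.5]               # lines fall between sessions
phase_labels = ["Baseline", "Treatment", "Baseline", "Treatment"]

fig, ax = plt.subplots(figsize=(8, 3))
# plot each phase separately so data paths do not cross phase-change lines
bounds = [0, 3, 6, 9, 12]
for start, stop in zip(bounds, bounds[1:]):
    ax.plot(sessions[start:stop], rates[start:stop], "ko-")
for x in phase_changes:
    ax.axvline(x, color="black", linestyle="--", linewidth=0.5)
for label, start, stop in zip(phase_labels, bounds, bounds[1:]):
    mid = (sessions[start] + sessions[stop - 1]) / 2
    ax.text(mid, 10.3, label, ha="center")    # centered phase label above the data
ax.set_xlabel("Sessions")
ax.set_ylabel("Responses per Minute")
ax.set_ylim(-0.5, 11)  # floats zero values slightly above the x-axis
```

A few dozen lines like these can regenerate an identical, correctly formatted graph each time new session data arrive, which is the main practical advantage of scripted over point-and-click graphing.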

Graphing training

There are four main skillsets one must have to create a graph: (1) operating the program; (2) entering data accurately; (3) graphing accurately; and, when disseminating data to others, (4) exporting graphs correctly.




Operating the program may be the most straightforward of these skillsets. Here, operating the program means being able to correctly use program-specific features to create the desired graph, for example, navigating Excel, locating the graph generator, and selecting the correct graph type. Recall from the earlier section on Accurate Data Entry and Sourcing that entering data accurately refers to transferring the raw data from a paper datasheet or electronic output into the spreadsheet from which the graph will be created. Similarly, one must also be sure the correct data are selected as the source for the graph; note that this could also be considered part of the skillset of graphing accurately. Graphing accurately also refers to selecting the graph type that best conveys the data. The final skillset is being able to export a graph correctly. What defines "exporting a graph correctly" will depend not only on one's program but also on one's audience. For example, taking a screenshot of a graph (e.g., using Windows 11's Snipping Tool) might be quick, easy, and amenable to presentations or clinical meetings. However, screenshots of graphs might be insufficient for publication, as screenshots typically render a lower-quality image (e.g., 72 dpi) than what is displayed within the software (e.g., 1200 dpi). In those cases, exporting a TIF file from Prism would be a more appropriate method. Some of these skillsets would appear to generalize across platforms, whereas others likely vary across programs (e.g., creating a multiple-baseline-design graph in Excel versus in SigmaPlot). Fortunately, several training methods are available to teach individuals how to graph, along with all the skills related to graphing. Each of the following methods can be adapted and used for any of the skillsets mentioned above.
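The export skillset can also be scripted. The sketch below (Python/matplotlib; the file name and 600-dpi setting are arbitrary illustrations, not publisher requirements) saves a figure at a print-appropriate resolution rather than relying on a low-dpi screenshot:

```python
import os
import tempfile

import matplotlib
matplotlib.use("Agg")  # render without a display
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([1, 2, 3, 4], [0, 4, 2, 5], "ko-")  # hypothetical data

# a high-resolution export; PNG shown here, though venues often request TIF
out_path = os.path.join(tempfile.gettempdir(), "figure1.png")
fig.savefig(out_path, dpi=600)
```

The same figure object can be saved repeatedly at different resolutions, a low-dpi copy for a clinical meeting and a high-dpi copy for submission, without redrawing anything.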

Task analyses

Task analyses (TAs) are lists that break down larger composite skills or behavior chains into smaller component skills or teachable units of behavior (Cooper et al., 2020). TAs have been one of the primary methods for teaching individuals how to graph. Carr and Burkholder (1998) published the seminal TA related to graphing SCEDs, specifically for creating reversal, alternating-treatments, and multiple-baseline designs in Excel. Since this seminal paper, over a dozen articles providing a TA and/or experimentally validating one have been published (e.g., Berkman, Roscoe, & Bourret, 2019; Blair & Mahoney, 2022; Cihon et al., 2021; Dixon et al., 2009; Watts & Stenhoff, 2021). These TA-based articles are valuable in that they provide an easy and largely accessible method for training individuals to create graphs, so long as behavior analysts can obtain the journal article readily. Once obtained, individuals can acquire relevant graphing skills at their own pace without intensive involvement from a trainer. Traditional TAs have also been enhanced with picture prompts to better facilitate acquisition of graphing skills (see Berkman et al., 2019). There are two main limitations of using TAs for graphing instruction. First, unlike other methods such as behavioral skills training (BST), TAs usually do not include a trainer or expert to provide feedback. Thus, if an individual has difficulty with a certain step, there is no support beyond the TA itself. Second, as new programs and versions of programs are developed, TAs may become outdated and less applicable. This issue is not unique to TAs; any training materials would need to be updated as new programs emerge or companies release updated software.

Formative graphing templates

As mentioned in Alternative Programs for Graphing, Cole and Witts (2015) developed a template that creates an SCED graph after an individual simply inputs their data. Their Formative Grapher allows individuals to skip repetitive steps (e.g., deleting grid lines) and quickly create graphs. Although mentioned as an alternative graphing program, we would be remiss not to describe its use as a training method. Briefly, formative graphing templates can be used to teach individuals how to create an SCED graph with all the basic features in a very short amount of time. Unfortunately, because they automate so much of the process, formative graphing templates preclude teaching individuals more nuanced or complex skills, such as floating a zero value above the x-axis or inserting an axis break (see Deochand, 2017; Kranak, Shapiro, Sawyer, Deochand, & Neef, 2019). Nevertheless, as a training method, formative graphing templates have promise in that they can show individuals what a relatively high-quality graph looks like upon completion.
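For readers who want a home-grown analogue of such a template (this sketch is hypothetical and is not the Cole and Witts, 2015, tool), a single function can bundle the repetitive formatting steps so a trainee sees a reasonably formatted graph immediately:

```python
import matplotlib
matplotlib.use("Agg")  # render without a display
import matplotlib.pyplot as plt

def formative_graph(sessions, values, x_label="Sessions", y_label="Responses"):
    """Return a graph with the repetitive formatting steps already applied."""
    fig, ax = plt.subplots(figsize=(8, 3))
    ax.plot(sessions, values, "ko-")
    ax.grid(False)                          # templates typically pre-delete gridlines
    for side in ("top", "right"):
        ax.spines[side].set_visible(False)  # no box around the plotting area
    ax.set_xlabel(x_label)
    ax.set_ylabel(y_label)
    return fig, ax

fig, ax = formative_graph([1, 2, 3, 4], [0, 2, 5, 6])
```

As the chapter notes, the trade-off is real: the function hides exactly the steps (axis floats, breaks) a trainee would otherwise practice.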

Video models

Video modeling is a training method in which an individual watches a recording of the target skill being completed and then attempts to complete that same skill (e.g., Erath, DiGennaro Reed, & Blackman, 2021). Video models can be altered in a few ways, namely the point of view from which the skill is recorded (i.e., first- or third-person) and whether trainees watch the entire video for all relevant skills before attempting a given skill or watch short videos of each skill in succession with opportunities to practice on a step-by-step basis (i.e., video prompting; Cannella-Malone et al., 2011). In terms of teaching individuals to create graphs, video models are an effective training method; they have been used to teach individuals to create SCED graphs across a variety of graphing programs (Lehardy, Luczynski, Hood, & McKeown, 2021; Mitteer et al., 2018; Mitteer, Greer, et al., 2020). Video models have many of the same strengths and limitations as TAs. For example, video models enable individuals to learn at their own pace without the involvement of an expert trainer. An additional benefit is that, along with vocal instructions, video models provide a demonstration of how to complete the skill and the corresponding final product. As with TAs, video models allow users to "skip ahead" past mastered skills to the most relevant skills with which they might struggle and to review areas of confusion using the fast-forward and rewind features of video software. Like TAs, however, video models can become outdated and less applicable as new programs emerge.

Behavioral skills training

Behavioral skills training (BST) is an empirically supported training strategy that involves four main components: (a) instructions, (b) modeling, (c) rehearsal/role-play, and (d) feedback (Parsons, Rollyson, & Reid, 2012). In a BST model, rehearsal and feedback occur in a loop until mastery is achieved, after which participants are able to complete the targeted skill independently. BST is largely considered the most effective training strategy and has been used to teach individuals a wide variety of behavior-analytic skills (Andzik & Schaefer, 2020; see Brock et al., 2017, for a review). BST has been used to teach graphing skills such as general software skills, data entry, and graph creation (Kranak et al., 2019). A major advantage of BST is that it can be tailored to the needs of the individual receiving training. For example, an individual might have mastered SCED creation in Excel or Prism but prefer to use Sheets as a more cost-friendly option. Such a user might need only a small amount of help learning "the basics" in Sheets to create their usual SCED graphs. Novice users may also benefit. In either case, BST could be altered to meet the training demand. One drawback is that BST requires robust trainer involvement. Although it is possible to conduct group BST sessions (Courtemanche, Turner, Molteni, & Groskreutz, 2021), it may be challenging to routinely individualize BST across novice and experienced trainees as they are hired. Two additional considerations are the overall amount of time required to implement BST and how well BST-related materials can be disseminated broadly to other individuals. Along with delivering the training itself, trainers must prepare sample data, exemplar graphs, and feedback checklists ahead of time and implement training until mastery is reached. On the one hand, a rationale for putting in this time upfront is that it could save time later; notably, graphing skills acquired via BST may maintain over a long period without additional assistance from the trainer (Kranak et al., 2019). On the other hand, not every organization will have a dedicated trainer available to deliver BST, so self-paced training methods (e.g., video models) might be more desirable. Ultimately, selection of a teaching modality will depend on the preferences of the learner (Berkman et al., 2019), the importance of graphing mastery, and the resources and availability of the trainer. For a detailed discussion of the empirical support for these graphing-training modalities and directions for future research and practice, please see Kranak and Mitteer (2022).

Chapter summary

In this chapter, we described (a) the purpose of graphs, (b) common SCED graph types, (c) essential and quality graphical features, (d) software for graphing SCEDs, and (e) training resources for individuals learning to use that software to graph behavior-analytic data. To recap, graphs are arguably the most effective and efficient means of detecting relations between variables in behavior-analytic research and practice. They allow us to rapidly assess the effects of interventions on clinically significant behavior and to glean important findings from retrospective data in a manner that might improve our prediction and control of behavior. Commonly, behavior analysts use line graphs, bar graphs, and cumulative records to display SCED data. To best understand their data, behavior analysts must select the most relevant graph type and incorporate graphical features that allow them to detect relations and analyze data accurately. Once they have determined a graph type and the features they would like to incorporate, behavior analysts can choose from several software options that maximize practicality and cost-effectiveness (i.e., Excel, Sheets) or graphical quality and advanced program features (i.e., Prism, SigmaPlot). Each software option has at least one published resource to teach behavior analysts how to generate SCED graphs, though Excel and Prism have the greatest number of empirically supported trainings. We encourage readers to consult the graphing-checklist resource (Table 1) and example graphs (Figs. 1 and 2) as they begin to explore and design graphs of their own. Ultimately, improving graphing knowledge and skills will facilitate behavior analysts' ability to inspect their data, improve client outcomes, and disseminate important findings to the behavior-analytic community.

References

American Psychological Association. (2020). Publication manual of the American Psychological Association (7th ed.). https://doi.org/10.1037/0000165-000.
Andzik, N. R., & Schaefer, J. M. (2020). Pre-service teacher-delivered behavioral skills training: A pyramidal training approach. Behavioral Interventions, 35(1), 99–113. https://doi.org/10.1002/bin.1696.
Baer, D. M., Wolf, M. M., & Risley, T. R. (1968). Some current dimensions of applied behavior analysis. Journal of Applied Behavior Analysis, 1(1), 91–97. https://doi.org/10.1901/jaba.1968.1-91.
Behavior Analyst Certification Board. (2017). BCBA task list (5th ed.). Retrieved from https://www.bacb.com/wp-content/bcba-task-list-5th-ed.
Berkman, S. J., Roscoe, E. M., & Bourret, J. C. (2019). Comparing self-directed methods for training staff to create graphs using GraphPad Prism. Journal of Applied Behavior Analysis, 52(1), 188–204. https://doi.org/10.1002/jaba.522.
Blair, B. J., & Mahoney, P. J. (2022). Creating single-subject research design graphs with Google applications. Behavior Analysis in Practice, 15(1), 295–311. https://doi.org/10.1007/s40617-021-00604-5.
Brock, M. E., Cannella-Malone, H. I., Seaman, R. L., Andzik, N. R., Schaefer, J. M., Page, E. J., et al. (2017). Findings across practitioner training studies in special education: A comprehensive review and meta-analysis. Exceptional Children, 84(1), 7–26. https://doi.org/10.1177/0014402917698008.
Cannella-Malone, H. I., Fleming, C., Chung, Y. C., Wheeler, G. M., Basbagill, A. R., & Singh, A. H. (2011). Teaching daily living skills to seven individuals with severe intellectual disabilities: A comparison of video prompting to video modeling. Journal of Positive Behavior Interventions, 13(3), 144–153. https://doi.org/10.1177/1098300710366593.
Carr, J. E., & Burkholder, E. O. (1998). Creating single-subject design graphs with Microsoft Excel™. Journal of Applied Behavior Analysis, 31(2), 245–251. https://doi.org/10.1901/jaba.1998.31-245.
Cihon, J. H., Ferguson, J. L., Milne, C. M., & Leaf, J. B. (2021). Teaching behavior analysts to create multiple baseline graphs using SigmaPlot. Behavioral Interventions, 36(4), 910–926. https://doi.org/10.1002/bin.1833.
Cleveland, W. S. (1994). The elements of graphing data. Hobart Press.
Cole, D. M., & Witts, B. N. (2015). Formative graphing with a Microsoft Excel 2013 template. Behavior Analysis: Research and Practice, 15(3–4), 171–186. https://doi.org/10.1037/bar0000021.
Cooper, J. O., Heron, T. E., & Heward, W. L. (2020). Applied behavior analysis (3rd ed.). Pearson.
Courtemanche, A. B., Turner, L. B., Molteni, J. D., & Groskreutz, N. C. (2021). Scaling up behavioral skills training: Effectiveness of large-scale and multiskill trainings. Behavior Analysis in Practice, 14(1), 36–50. https://doi.org/10.1007/s40617-020-00480-5.
Dart, E. H., & Radley, K. C. (2017). The impact of ordinate scaling on the visual analysis of single-case data. Journal of School Psychology, 63, 105–118. https://doi.org/10.1016/j.jsp.2017.03.008.
Deochand, N. (2017). Automating phase change lines and their labels using Microsoft Excel. Behavior Analysis in Practice, 10(3), 279–284. https://doi.org/10.1007/s40617-016-0169-1.


Dixon, M. R., Jackson, J. W., Small, S. L., Horner-King, M. J., Lik, N. M. K., Garcia, Y., et al. (2009). Creating single-subject design graphs in Microsoft Excel™ 2007. Journal of Applied Behavior Analysis, 42(2), 277–293. https://doi.org/10.1901/jaba.2009.42-277.
Epstein, R., Mejia, J., & Robertson, R. E. (2017). The frequency profile: An informative method for graphing the behavior of individuals post hoc or in real time. Behavior Analysis: Research and Practice, 17(1), 55–73. https://doi.org/10.1037/bar0000052.
Erath, T. G., DiGennaro Reed, F. D., & Blackman, A. L. (2021). Training human service staff to implement behavioral skills training using a video-based intervention. Journal of Applied Behavior Analysis, 54(3), 1251–1264. https://doi.org/10.1002/jaba.827.
Falligant, J. M., Cero, I., Kranak, M. P., & Kurtz, P. F. (2020). Further application of the generalized matching law to multialternative sports contexts. Journal of Applied Behavior Analysis, 54(2), 389–402. https://doi.org/10.1002/jaba.757.
Fisher, W. W., Fuhrman, A. M., Greer, B. D., Mitteer, D. R., & Piazza, C. C. (2020). Mitigating resurgence of destructive behavior using the discriminative stimuli of a multiple schedule. Journal of the Experimental Analysis of Behavior, 113(1), 263–277. https://doi.org/10.1002/jeab.552.
Haddock, J. N., & Iwata, B. A. (2015). Software for graphing time series data. Journal of Applied Behavior Analysis.
Hagopian, L. P. (2020). The consecutive controlled case series: Design, data-analytics, and reporting methods supporting the study of generality. Journal of Applied Behavior Analysis, 53(2), 596–619. https://doi.org/10.1002/jaba.691.
Hagopian, L. P., Frank-Crawford, M. A., Javed, N., Fisher, A. B., Dillon, C. M., Zarcone, J. R., et al. (2020). Initial outcomes of an augmented competing stimulus assessment. Journal of Applied Behavior Analysis, 53(4), 2172–2185. https://doi.org/10.1002/jaba.725.
Iwata, B. A., Dorsey, M. F., Slifer, K. J., Bauman, K. E., & Richman, G. S. (1994). Toward a functional analysis of self-injury. Journal of Applied Behavior Analysis, 27(2), 197–209. https://doi.org/10.1901/jaba.1994.27-197 (Reprinted from "Toward a functional analysis of self-injury," 1982, Analysis and Intervention in Developmental Disabilities, 2(1), 3–20).
Journal of Applied Behavior Analysis. (2022). Author guidelines. Retrieved from https://onlinelibrary.wiley.com/page/journal/19383703/homepage/forauthors.html.
Kazdin, A. E. (2021). Single-case research designs: Methods for clinical and applied settings (3rd ed.). Oxford University Press.
Kranak, M. P., Falligant, J. M., Bradtke, P., Hausman, N. L., & Rooker, G. W. (2020). Authorship trends in the Journal of Applied Behavior Analysis: An update. Journal of Applied Behavior Analysis, 53(4), 2376–2384. https://doi.org/10.1002/jaba.726.
Kranak, M. P., & Mitteer, D. R. (2022). A concise review of recent advancements in the graphical training of behavior analysts. Journal of Applied Behavior Analysis, 1–6. https://doi.org/10.1002/jaba.943.
Kranak, M. P., Shapiro, M. R., Sawyer, M. R., Deochand, N., & Neef, N. A. (2019). Using behavioral skills training to improve graduate students' graphing skills. Behavior Analysis: Research and Practice, 19(3), 247–260. https://doi.org/10.1037/bar0000131.
Kubina, R. M., Kostewicz, D. E., King, S. A., Brennan, K. M., Wertalik, J., Rizzo, K., et al. (2021). Standards of graph construction in special education research: A review of their use and relevance. Education and Treatment of Children, 44(4), 275–290. https://doi.org/10.1007/s43494-021-00053-3.
Lacey, A., & Ashby, D. (2018). 10 Excel functions everyone should know. Harvard Business Review. https://hbr.org/2018/10/10-excel-functions-everyone-should-know.
Lane, J. D., & Gast, D. L. (2014). Visual analysis in single case experimental design studies: Brief review and guidelines. Neuropsychological Rehabilitation, 24(3–4), 445–463. https://doi.org/10.1080/09602011.2013.815636.




Lanovaz, M. J., & Hranchuk, K. (2021). Machine learning to analyze single-case graphs: A comparison to visual inspection. Journal of Applied Behavior Analysis, 54(4), 1541–1552. https://doi.org/10.1002/jaba.863.
Ledford, J. R., Barton, E. E., Severini, K. E., Zimmerman, K. N., & Pokorski, E. A. (2019). Visual display of graphic data in single case design studies. Education and Training in Autism and Developmental Disabilities, 54(4), 315–327. https://www.jstor.org/stable/26822511.
Lehardy, R. K., Luczynski, K. C., Hood, S. A., & McKeown, C. A. (2021). Remote teaching of publication-quality, single-case graphs in Microsoft Excel. Journal of Applied Behavior Analysis, 54(3), 1265–1280. https://doi.org/10.1002/jaba.805.
Microsoft. (2020). Microsoft FY20 third quarter earnings conference call. http://view.officeapps.live.com/op/view.aspx?src=https://c.s-microsoft.com/en-us/CMSFiles/TranscriptFY20Q3.docx?version=f0427a57-33bf-57a5-7029-6a8904c0c123.
Microsoft. (2022). Apps and services. https://www.microsoft.com/en-us/microsoft-365/products-apps-services.
Mitteer, D. R., & Greer, B. D. (2022). Using GraphPad Prism's heat maps for efficient, fine-grained analyses of single-case data. Behavior Analysis in Practice, 15, 505–514. https://doi.org/10.1007/s40617-021-00664-7.
Mitteer, D. R., Greer, B. D., Fisher, W. W., & Cohrs, V. L. (2018). Teaching behavior technicians to create publication-quality, single-case design graphs in GraphPad Prism 7. Journal of Applied Behavior Analysis, 51(4), 998–1010. https://doi.org/10.1002/jaba.483.
Mitteer, D. R., Greer, B. D., Randall, K. R., & Briggs, A. M. (2020). Further evaluation of teaching behavior technicians to input data and graphing using GraphPad Prism. Behavior Analysis: Research and Practice, 20(2), 81–93. https://doi.org/10.1037/bar0000172.
Mitteer, D. R., Luczynski, K. C., McKeown, C. A., & Cohrs, V. L. (2020). A comparison of teaching tacts with and without background stimuli on acquisition and generality. Behavioral Interventions, 35(1), 3–24. https://doi.org/10.1002/bin.1702.
Parsons, M. B., Rollyson, J. H., & Reid, D. H. (2012). Evidence-based staff training: A guide for practitioners. Behavior Analysis in Practice, 5(2), 2–11. https://doi.org/10.1007/BF03391819.
Peltier, C., Muharib, R., Haas, A., & Dowdy, A. (2022). A decade review of two potential analysis altering variables in graph construction. Journal of Autism and Developmental Disorders, 52(2), 714–724. https://doi.org/10.1007/s10803-021-04959-0.
Radley, K. C., Dart, E. H., & Wright, S. J. (2018). The effect of data points per x- to y-axis ratio on visual analysts' evaluation of single-case graphs. School Psychology Quarterly, 33(2), 314–322. https://doi.org/10.1037/spq0000243.
Retzlaff, B. J., Craig, A. R., Owen, T. M., Greer, B. D., O'Donnell, A. O., & Fisher, W. W. (2022). Identifying cyclical patterns of behavior using a moving-average, data-smoothing manipulation. In preparation.
Shahan, T. A., & Greer, B. D. (2021). Destructive behavior increases as a function of reductions in alternative reinforcement during schedule thinning: A retrospective quantitative analysis. Journal of the Experimental Analysis of Behavior, 116(2), 243–248. https://doi.org/10.1002/jeab.708.
Tufte, E. R. (1990). Envisioning information. Graphics Press.
Tufte, E. R. (2001). The visual display of quantitative information (2nd ed.). Graphics Press.
Tufte, E. R. (2006). Beautiful evidence. Graphics Press.
Watts, Z. B., & Stenhoff, D. M. (2021). Creating multiple-baseline graphs with phase change lines in Microsoft Excel for Windows and macOS. Behavior Analysis in Practice, 14(4), 996–1009. https://doi.org/10.1007/s40617-021-00552-0.


CHAPTER 6

Supervising ABA trainees and service providers

Amber L. Valentino and Mia N. Broker
Trumpet Behavioral Health, United States

The profession of applied behavior analysis (ABA) is experiencing exponential growth. New service providers at all levels of practice become certified and registered each day. As of April 1, 2022, there were 55,628 Board Certified Behavior Analysts (BCBAs), 5571 Board Certified Assistant Behavior Analysts (BCaBAs), and 115,238 Registered Behavior Technicians (RBTs) (Behavior Analyst Certification Board, 2020). Each of these credentials carries the requirement to either receive or deliver supervision at some specified point in the professional’s development—during training, after training, or both. Supervision requirements span all phases of an ABA professional’s experience, including time spent training for their professional role (e.g., BCBA) and, in some cases, throughout their practice (e.g., RBTs). Effective supervision is critical to the overall development of our profession. High-quality and effective supervision can facilitate delivery of excellent behavioral services and support the supervisor’s and supervisee’s professional development (LeBlanc & Luiselli, 2016). Given the influence supervision can have over service delivery and professional development, the decision to supervise is a significant one. There are several considerations for a potential supervisor to make based on the credential of the supervisee, the goals of supervision, and the supervisee’s and supervisor’s career paths. In this chapter, we help the reader make these important considerations and focus on summarizing and clarifying elements of delivering supervision. These elements include supervisor training requirements set by the Behavior Analyst Certification Board (BACB); establishing a supervisory relationship; dominant supervision practices; monitoring, measuring, and evaluating supervision effectiveness; and ethical problem solving that arises during supervision.
We also provide practice recommendations for service delivery and research inquiry. Before diving into these considerations and recommendations, we provide an overview and synthesis of the supervision literature to date.

Applied Behavior Analysis Advanced Guidebook
https://doi.org/10.1016/B978-0-323-99594-8.00006-4
Copyright © 2023 Elsevier Inc. All rights reserved.

Research basis and evidence support

Determining one’s ability, motivation, capacity, and commitment to supervising a professional in the field of applied behavior analysis is an important consideration that carries significant implications for the future of our profession. The behavior-analytic supervisor must carefully weigh the benefits of supervising (e.g., skill development, contribution to the field) against the gravity of the supervisory role (e.g., time commitment, planning, structure) to make this important decision for each person they might supervise (Britton, Crye, & Haymes, 2021). In turn, the supervisee must commit to learning and to successfully completing the credential in partnership with their new supervisor. While the decision to provide or receive supervision can seem daunting, the behavior-analytic supervision literature has blossomed over the past six years, offering practical guidance for both supervisors and supervisees.

The supervision literature blossomed due to several important changes in our profession’s history and due to hallmark articles that facilitated quick progression of supervision research. First, in 2012, the Behavior Analyst Certification Board (BACB) began requiring completion of eight hours of curriculum-based training focused on supervisory skills. This requirement made salient the need for our profession to provide high-quality training to supervisors, but at that time, research on the topic of supervision was mostly focused on effective training procedures to establish specific behavior-analytic skills (e.g., Parsons, Rollyson, Iverson, & Reid, 2012) rather than the conceptualization, process, and practice of supervision. A hallmark article was published in 2015 by DiGennaro-Reed and Henley—the first data-based study examining different types of staff and supervisory training procedures in applied settings.
Their survey of 382 behavior-analytic professionals revealed that even though most professionals were responsible for supervising others, only a small percentage of respondents reported receiving any preservice supervisory training before working independently. The DiGennaro-Reed and Henley study spawned several additional papers on the topic of supervision; shortly after its publication, the journal Behavior Analysis in Practice (BAP) published a special issue on supervision, greatly expanding the literature. That special issue of BAP was notable for several foundational articles (e.g., Sellers, LeBlanc, & Valentino, 2016) focused on defining the supervisor’s role and best practices, and for encouraging our profession to conceptualize supervision as more than a specific requirement and its documentation. Finally, the BACB now requires continuing education units focused on supervision, a requirement that has opened the opportunity for heightened discussion, empirical evaluation, and development of our supervisory practices.

Much of the early supervision literature focused on collaboration between the supervisor and supervisee to frequently evaluate the effectiveness of supervision (Turner, Fischer, & Luiselli, 2016) and on the role of the ABA supervisor. Authors defined the activities associated with high-quality supervision, often in the form of recommended practices. For example, Sellers et al. (2016) established five key recommended practices for supervision (e.g., establish an effective relationship, evaluate the effects of supervision) and provided detailed strategies and resources for each practice. Some authors suggested specific practices associated with unique experiences such as group supervision (e.g., Valentino, LeBlanc, & Sellers, 2016) and proposed that these experiences should be conceptualized and crafted differently than individual supervision to optimize the supervisee’s learning. The early supervision literature also addressed interpersonal dynamics, identified the influence relationships can have on the quality of supervision, and recommended ways to both recognize and address interpersonal barriers to successful supervision (Sellers et al., 2016). Some authors identified structural arrangements for mentoring, educating, and training future BCBAs (Hartley, Courtney, Rosswurm, & LaMarca, 2016).
A central theme in this early literature was encouragement for the field to continue creating supervision resources that facilitate high-quality supervision (Valentino, 2021). Recent research revealed that while supervisors noted several areas of strength in their practice, such as using a contract and evaluating supervisor capacity, they also noted several areas needing improvement, specifically setting clear expectations for receiving feedback, conducting ongoing evaluation of the supervisory relationship, using competency-based evaluations, and tracking outcomes (Sellers, Valentino, Landon, & Aielo, 2019). Research has also demonstrated that supervisors engage in recommended practices more frequently when there are concrete guidelines and supports in place helping them do so (Hajiaghamohsen, Drasgow, & Wolfe, 2021) and that supervisory skills such as delivering feedback can be directly taught using basic procedures such as task clarification and self-monitoring (Schulz & Wilder, 2022) and video modeling with voice-over instruction (Shuler & Carroll, 2018). Fortunately, professionals have responded to this encouragement and have shared additional guidance for supervisors over the last several years. These resources can be categorized into several key areas: ethics, virtual supervision, defining the role and expectations of the supervisee, and training/teaching supervisory skills.

Ethics. One of the early recommended practices was to incorporate ethics and professional development into supervision (Sellers et al., 2016). While many supervisors acknowledge incorporation of ethics as a strength of their supervision practice (Sellers et al., 2019), recent discussion papers have proposed more concrete ways to integrate ethics and have established useful tools for the task. These discussion papers focus on outlining the details of the BACB ethics code and providing tools to help navigate the code, particularly the section focused on supervision. For example, Britton et al. (2021) highlighted the common ethics code violation categories, including improper or inadequate supervision/delegation, failure to report/respond to the Behavior Analyst Certification Board (BACB) as required, and professionalism/integrity. After highlighting these areas, the authors provided guidance on the structure and organization of supervision through checklists, decision-making processes linked to the ethics code, and rubrics. This article is particularly useful for supervisors needing more resources to support the cultivation of ethical repertoires in their supervisees. In another example, Sellers et al. (2016) outlined each section of the BACB ethics code related to supervision and shared nuances to consider when evaluating compliance with each section. More refined recommendations have focused on very specific code adherence, such as clinical supervision across interdisciplinary teams (Lindbald, 2021).
Supervisors can now look to the research base for literature to share with their supervisees, supporting them in identifying ethical practices in an organization prior to employment (Brodhead, Quigley, & Cox, 2018) and in setting up an ethics network and culture in their own workplace (Cox, 2020; LeBlanc, Onofrio, Valentino, & Sleeper, 2020). These resources combined have helped facilitate both our knowledge of and adherence to ethics as a key component of effective supervision at all levels.

Virtual supervision. While virtual supervision occurred prior to 2020, the COVID-19 pandemic propelled it into common practice among behavior analysts. Though many of the previously recommended practices apply equally in a virtual scenario (e.g., establishing a committed and positive relationship; Sellers et al., 2016), additional tools and investigation are needed given the potential obstacles supervisors face in designing effective supervision remotely (Ninci et al., 2021). Some early research has demonstrated that virtual supervision can be effective in helping therapists master skills that in turn help children master skills (Barkaia, Stokes, & Mikiashvili, 2017) and is acceptable to trainees (Simmons, Ford, Salvatore, & Moretti, 2021). Simmons et al. examined the acceptability and feasibility of virtual supervision for BCBA/BCaBA trainees during COVID-19 via a survey and found that although client hours and overall supervision hours decreased, respondents were generally satisfied with virtual and group supervision. More research is needed on other variables associated with the design of virtual supervision and its effectiveness, as well as its application across different professionals (e.g., the RBT). These early studies show a positive outlook for the continued use of virtual supervision in our profession.

The role and expectations of the supervisee. Several recent articles have focused not on the supervisor but on the trainee, highlighting responsibilities and training models. There are broad responsibilities the supervisee must take on and very specific behaviors for supervisees to engage in. On the broader side, Helvey, Thuman, and Cariveau (2022) matched Sellers et al.’s (2016) recommended practices with behaviors that the behavior-analytic trainee should perform. This article helped provide perspective on the trainee’s role in high-quality supervision, including reviewing their own caseload, requesting specific resources from the supervisor, and preparing a weekly agenda. On a smaller scale, Parry-Cruwys, Atkinson, and MacDonald (2021) successfully taught graduate students preparing to become BCBAs how to track their fieldwork hours using Behavioral Skills Training (BST).
These types of articles focused on the supervisee are likely to invite many more of their kind, concentrating on professional development and supervisory commitment among individuals in training. Identifying and more clearly articulating the role of the supervisee is a positive shift in the literature, as having both parties committed to high-quality supervision is likely to facilitate its positive development over time.

Training/teaching supervisory skills. The variety of skills a supervisor must acquire to provide high-quality supervision can seem endless. Several recent authors have focused on how to teach and train supervisory skills, spanning the variety of behaviors that must be established in supervisors. For example, effectively collaborating with individuals of other disciplines supports one’s own career development and provides an example for the supervisee of how to collaborate with nonbehavior-analytic professionals. Boivin, Ruane, Quigley, Harper, and Weiss (2021) described a training model utilizing e-learning, rotations, and assignments to effectively establish this skill in individuals working to become BCBAs. Other authors have tackled cultural competence and diversity (Conners, Johnson, Duarte, Murriky, & Marks, 2019), creating environments (e.g., practicum placements with universities) where supervisees can be successful (Dubuque & Dubuque, 2018), and teaching soft skills such as active listening, compassion, and rapport building (Andzik & Kranak, 2021).

For supervisors looking for support in their practice, the behavior-analytic literature is now ripe with recommended practices, tools, effective teaching strategies, and guidance. The literature is also consistent in its message that care must be taken to provide supervision both positively and effectively. Taken together, the literature thus far suggests several considerations that must be made when providing supervision. We outline these critical practice considerations below.

Critical practice considerations

There are several logistical, practical, and ethical considerations to make before providing supervision. Making these considerations will enable the supervisor to provide the appropriate quantity and quality of supervision to their trainee or supervisee. We will focus on the primary credentials offered through the Behavior Analyst Certification Board (BACB), the dominant credentialing body within the profession of behavior analysis. While there are other credentialing agencies and credentials (e.g., the Qualified Behavior Analyst; QBA), they are less frequently sought.

The supervisor must consider

The credential the supervisee has or is seeking. In behavior analysis, there are four credential options: the Registered Behavior Technician (RBT), the Board Certified Assistant Behavior Analyst (BCaBA), the Board Certified Behavior Analyst (BCBA), and the Board Certified Behavior Analyst-Doctoral (BCBA-D). The RBT is an entry-level practitioner who delivers treatment as outlined by their supervisor. The BCaBA is a bachelor’s-level practitioner who offers clinical support under the guidance of a supervisor. The BCBA is a graduate-level independent practitioner who makes all clinical decisions, provides clinical direction, manages cases, and supervises all other levels of credentials. The BCBA-D is a doctoral designation for BCBAs with doctoral or postdoctoral training in behavior analysis; it is not a separate certification and carries the same privileges as the BCBA. The supervisor should determine which credential the supervisee has or is seeking. Each credential comes with its own requirements, and the type of credential will determine the length of time and type of supervision required.

Basic requirements for the supervisee. Each credential has basic requirements that must be met for the individual to pursue or maintain it. Potential supervisors should consider each of these requirements and know how to best support supervisees during training and after they have received the credential. Knowing the maintenance requirements will enable supervisors to best support the supervisee in keeping their credential and upholding their ethical obligations to the profession.
• An RBT must be 18 years old, have a high school diploma, pass a background check, complete a 40-hour training, and complete an initial competency assessment. After passing an exam, to maintain the RBT credential, one must receive ongoing supervision, adhere to ethics guidelines, and renew annually.
• A BCaBA must meet one of two degree pathways, course content, and fieldwork requirements. After taking and passing an exam, to maintain the BCaBA credential, one must adhere to the ethics requirements, meet continuing education requirements, receive ongoing supervision, and recertify every 2 years.
• The BCBA and BCBA-D must meet one of four degree pathways, experience, and fieldwork requirements. After taking and passing the BCBA exam, BCBAs must adhere to the ethics requirements, meet continuing education requirements, and recertify every 2 years.

The supervisee’s status (in training or already certified/registered). Knowing whether a supervisee is still in training or already certified or registered will enable the best support of them.
The BCaBA and BCBA require very specific supervision touch points throughout training. The RBT does not technically require supervision precredential but does require specific oversight immediately once the exam is passed and the RBT credential is obtained. The BCBA and BCBA-D do not require supervision postcertification unless the BCBA is providing supervision to others within their first year of being certified.

Amount of supervision required. Each credential carries requirements for both the amount and frequency of supervision. Certain requirements are temporary (e.g., BCBAs in training do not need supervision after they become certified), whereas others apply for as long as the individual holds the credential (e.g., there is no end to ongoing supervision for practicing RBTs). Here is a summary of the length and frequency of supervision requirements at each level:
• RBTs are required to receive ongoing supervision for a minimum of 5% of the hours spent providing behavior-analytic services each calendar month.
• BCaBAs in training require 5% (Supervised Fieldwork) or 10% (Concentrated Supervised Fieldwork) of hours supervised by a qualified supervisor each supervisory period. Once a BCaBA is certified, they must receive supervision for 5% of the total service hours provided during their first 1000 h of practice and 2% of the total service hours provided each month thereafter.
• BCBAs and BCBA-Ds in training need 5% (Supervised Fieldwork) or 10% (Concentrated Supervised Fieldwork) of hours supervised by a qualified supervisor each supervisory period. Once an individual becomes certified, supervision is no longer required unless the individual is providing supervision to another person in training during their first year of certification. In this case, the first-year certificant must receive one hour of supervision per month from a consulting BCBA supervisor.

Location of services. The supervisor should thoughtfully consider the location in which the supervisee will conduct their behavior-analytic activities. There are a variety of possibilities depending on the population and type of work conducted. For example, supervisees may work in schools, communities, centers, homes, or even businesses. Thus, the supervisor must carefully navigate how supervision will be effectively provided in these environments while meeting requirements.

The type of supervision required.
Group supervision has been shown to offer unique benefits to supervisees (Valentino et al., 2016), and all credentials allow at least some portion of supervision to be provided in a group setting, both in training and postcertification. In all situations, group supervision can only make up a certain portion of the overall supervisory experience within an allotted time. For example, BCBAs in training can receive up to 50% of their supervision in a small-group setting if there are 10 or fewer supervisees and the content meets overall requirements. The supervisor will need to carefully consider whether they can provide the type of supervision required. While group supervision is never required, the supervisee may wish to have some of these experiences. If the supervisor cannot arrange for specific experiences, such as group supervision, they should be honest about that limitation before beginning the supervisory relationship.

The modality of supervision. There are many options for the modality of supervision—in person, video meeting, or phone call. All credentials require meeting regularly and permit online interaction. A combination of synchronous and asynchronous interactions is allowable. One modality may not work for all supervision interactions. For example, a supervisor may need to observe a supervisee delivering services in the client’s home, which could occur live or through a virtual meeting. As another example, a supervisor may provide oversight to a supervisee in a school setting where video is not allowed; the supervisor would then need to be on-site and in person to provide direct supervision. Consider the environments in which supervision may need to take place and any restrictions that could affect the desired modality.

Training and other requirements for the supervisor. When considering one’s own qualifications, two areas become important for a potential supervisor: the requirements of performing the role and the qualifications to supervise a specific person. The BACB requires various training and ongoing education. For example, to supervise an aspiring BCBA in fieldwork, a supervisor must complete 8 h of supervision training that meets specific content criteria, must receive regular continuing education related to supervision, and, if supervising within the first year of one’s own certification, must receive support from a consulting supervisor. In addition to these basic requirements, the supervisor should be knowledgeable of supervision best practices and competent in practicing them.

Current supervisory volume and the supervisor’s capacity to support.
Potential supervisors should assess their current supervisory volume and confirm they can fully support new supervisees (Britton et al., 2021; Sellers et al., 2016). Assessing capacity is not only an ethical obligation but one that ensures supervision is thoughtful, organized, and best supports the individual. While there are no hard and fast rules on the number or percentage of supervisees one can have, the new supervisor should consider several factors such as their current number of supervisees; caseload or other nonsupervisory work responsibilities; actual time available for supervision; and the supervisee’s available time, current work schedule, and enthusiasm for supervision.

Clinical content. The supervisor should consider the type of work the supervisee will do, including the population and setting. In doing so, the supervisor should be confident they can teach the supervisee new skills in that area. For example, if a supervisee desires to work primarily with adults with dementia but the supervisor’s experience is primarily with children with autism, the supervisor may not be able to provide the clinical content necessary for supervision to be optimally effective. While there are basics to cover and non-population-specific concepts to understand, having clinical knowledge for the application of those basics and concepts will ensure an optimal supervision experience for both parties.

The overall match. In addition to ensuring a match in clinical content, the supervisor should assess interpersonal characteristics, interests, work style, learning needs, and learning style. This assessment should reveal that the supervisor and supervisee are compatible, can learn from one another, and will work well together over the course of the supervisory relationship. Sellers et al. (2016) offer a useful guide for considering the types of barriers a supervisor may encounter; it may be helpful when speaking with a potential supervisee before commencing supervision.

Structure of supervision. The supervisor and supervisee should structure supervision in a way that is conducive to the support needed. The structure may differ depending on the credential the supervisee has or is seeking. For example, RBT supervision will involve a high volume of direct observation and feedback, whereas supervision of a BCBA in training may include learning assignments such as reading an article and discussing it. Before supervising, determine how supervision will flow, including what both parties will bring, the frequency of observation and feedback, and the expectations of both parties. These details can and should be outlined in an agreement prior to the onset of the relationship.
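The supervision percentages listed under "Amount of supervision required" reduce to simple arithmetic, so a supervisor can sanity-check monthly hours programmatically. Below is a minimal sketch in Python; the function name and structure are our own illustration of the percentages quoted in this chapter, not an official BACB tool, and a real check should always defer to the current BACB handbooks.

```python
def required_supervision_hours(service_hours, credential,
                               concentrated=False, post_cert_hours=0):
    """Return the minimum supervision hours implied by the percentages
    summarized in this chapter (illustrative only).

    credential: 'RBT', 'trainee' (BCaBA/BCBA fieldwork), or 'BCaBA' (certified).
    concentrated: True for Concentrated Supervised Fieldwork (10% vs. 5%).
    post_cert_hours: total practice hours a certified BCaBA has accrued
        (5% applies for the first 1000 hours, 2% thereafter).
    """
    if credential == 'RBT':
        rate = 0.05                      # 5% of service hours per calendar month
    elif credential == 'trainee':
        rate = 0.10 if concentrated else 0.05
    elif credential == 'BCaBA':
        rate = 0.05 if post_cert_hours < 1000 else 0.02
    else:
        raise ValueError(f'unknown credential: {credential}')
    return service_hours * rate

# e.g., an RBT who delivered 120 service hours this month
# needs at least 120 * 0.05 = 6 supervised hours
print(required_supervision_hours(120, 'RBT'))  # 6.0
```

A calculator like this is most useful when embedded in the documentation systems discussed later in the chapter, so that the required minimum is computed rather than hand-tallied.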

Present recommendations for service delivery

Once these critical considerations have been made and supervision commences, several literature-informed recommendations for supervisory practice exist. These practice recommendations span a variety of topics, from relationship building to ethically terminating the supervisory relationship. We describe each of these practice recommendations below.

Practice 1: Cultivate a strong relationship. An underlying theme of much of the supervision literature is that to build competence, establish critical repertoires in supervisees, and provide excellent supervisory oversight, the supervisee and supervisor must have a positive and committed working relationship. This relationship must be cultivated by both parties, and each must do their part to communicate and learn from one another. Several authors have detailed behaviors of both parties that establish strong supervisory relationships; these include logistical details such as using well-written, precise contracts (Sellers, Valentino, & LeBlanc, 2016) and “soft skills” like maintaining positive and respectful interactions (Helvey et al., 2022). Focus on the relationship from the beginning and regularly check in with yourself and your supervisees to ensure the relationship remains strong throughout the supervisory period. We refer the reader to the many published checklists that support facilitation of a positive relationship, including those published by Sellers et al. (2016) and Helvey et al. (2022). A deeper dive into assessment of barriers, particularly related to work style and interpersonal characteristics, can be found in Sellers et al. (2016). In addition to these excellent resources, we have provided a quick and easy-to-use checklist focused on relationship building and maintenance in Appendix A.

Practice 2: Thoughtfully structure supervision time. Regardless of the role a supervisor plays, structuring the time spent together is important both from a compliance perspective and for ensuring the supervisee receives the training, oversight, and professional development they need to be successful. This structure is often referred to as a “roadmap” and guides both the supervisor and supervisee through various activities over the course of their time together. For individuals receiving ongoing supervision as part of their professional role (such as the RBT), the roadmap might include activities to help them learn new skills and be successful in their current role. For individuals receiving supervision for a set period to obtain a specific credential (e.g., a BCBA in training), activities may be linked to the specific job the person will do postcertification.
Appendix B provides a sample roadmap utilized by Trumpet Behavioral Health. This timeline links back to a larger competency system based on an earlier version of the BACB’s task list. It provides the reader with an example of how to structure time and activities when supervising an individual in fieldwork as they work toward becoming a BCBA.

Practice 3: Build competence. It is critical that supervisors use competency-based training to support their supervisees’ development. Training is often completed using behavioral skills training (BST). BST is an active-response training procedure that has proven effective for teaching individuals a variety of new skills (Lerman, LeBlanc, & Valentino, 2015), such as teaching staff how to conduct specific behavioral assessments (Barnes, Mellor, & Rehfeldt, 2014). It typically involves instructions, modeling, rehearsal, and feedback. Competencies are embedded in the rehearsal phase to ensure that the individual not only completes the new skill but completes it with a high level of integrity. Typically, feedback is provided until the skill is performed independently and correctly in a variety of scenarios. As an example, a supervisor might teach a supervisee how to have a difficult conversation with a caregiver. The supervisor might create an integrity checklist to teach the supervisee the new skill and to assess performance. After describing and modeling, the supervisor might have the supervisee practice having a difficult conversation until the supervisee can do so independently and correctly, at which time they may be observed doing so with actual caregivers across a variety of situations and conversations. Building and assessing competence may vary depending on the type of work the supervisee does and the level of credential they hold or are seeking, but having competency-based training in place is critical for effective supervision. Appendix C provides a sample of Trumpet Behavioral Health’s Clinical Competency Assessment and Support System (the TBH C-CASS), a curriculum designed for supervisors of individuals who are working to become BCBAs.

Practice 4: Maintain excellent documentation. Supervisors and supervisees should take equal responsibility for maintaining documentation that tracks the quantity and content of supervision. This documentation can serve as a roadmap to ensure requirements will regularly be met and to determine the end of supervision (if an end exists, as is the case when supervising a BCBA in training). Documentation systems should be as automatic as possible, require minimal hand calculation, and should notify users along the way if requirements are not being met so that they may correct the situation.
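For example, an automated audit over a monthly supervision log can surface shortfalls before a supervisory period closes. The sketch below assumes a hypothetical data shape of our own choosing; a production system would pull these figures from scheduling or practice-management software rather than a hand-built dictionary.

```python
def audit_supervision(monthly_log, rate=0.05):
    """Flag months where supervised hours fall below the required
    percentage of service hours (default 5%, as for RBTs).

    monthly_log: dict mapping 'YYYY-MM' -> (service_hours, supervised_hours).
    Returns a list of (month, shortfall_in_hours) alerts.
    """
    alerts = []
    for month, (service, supervised) in sorted(monthly_log.items()):
        required = service * rate          # minimum supervised hours this month
        if supervised < required:
            alerts.append((month, round(required - supervised, 2)))
    return alerts

# January meets the 5% minimum (6.0 of 120); February falls 2 hours short
log = {'2023-01': (120, 6.5), '2023-02': (140, 5.0)}
print(audit_supervision(log))  # [('2023-02', 2.0)]
```

Running a check like this on a schedule, and emailing the resulting alerts to supervisor and supervisee, implements the "notify users along the way" property described above.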
If operating at a large scale, such as providing supervision to multiple RBTs across regions or time zones, processes can be built into software to allow for easy tracking. Appendix D describes the critical elements that should be in place for tracking on a large scale and shows a system utilized by Trumpet Behavior Health’s RBT Requirements Coordinator (RRC) to track RBT supervision requirements across multiple locations, supervisors, and schedules. Practice 5: Commit to open and honest communication. An extension of the recommended practice to cultivate a strong relationship is to focus on effective communication. Many issues and problems that prevent successful supervision revolve around lack of effective



Supervising ABA trainees and service providers

145

communication, which can pose supervision barriers and, by extension, result in poor service delivery. Solving these communication problems can involve clearly outlining expectations for common supervisory experiences, such as receiving feedback, and reviewing the use of organizational tools to address time management issues (Sellers et al., 2016). In addition to addressing common barriers, by engaging in open and honest communication supervisors model for their supervisees how to navigate difficult conversations and positively solve problems through direct dialogue. Texts such as Crucial Conversations: Tools for Talking When Stakes Are High (Grenny & Patterson, 2021) may prove useful in supporting supervisors in this recommended practice.

Practice 6: Evaluate the effects of supervision. Evaluating the effects of supervision has been recommended as an important supervisory practice by several authors (e.g., Sellers et al., 2016) and is considered standard. There are a variety of ways to assess whether one's supervision is effective, including tracking performance and competencies, scoring correct responding on permanent products such as a treatment plan, asking the supervisee about their performance, and tracking client outcomes.

Practice 7: Plan for a variety of experiences. The repertoires that need to be built across all credentials are complex and thus require intentional focus on a variety of experiences to establish them. Focus on the skills your supervisee needs to do their work and continue growing as a professional. Incorporate activities that foster this growth.
Observations of their work, modeling of critical skills, practicing responding during hypothetical scenarios, learning from other professionals, reading, and participating in professional groups will assist with the establishment of new skills and maintenance of old ones. The supervisor should take responsibility for arranging these types of experiences throughout the supervision period, focusing on building critical repertoires.

Practice 8: Commit to quality. Given the surge of interest in behavior analytic credentials at all levels, the pressure to provide supervision in large numbers is likely great. Every supervisor will have varying levels of capacity depending on their role, other responsibilities, and supervision experience. Focusing on the quality of supervision will ensure adherence to supervisory best practice.

Practice 9: Intentionally build critical skills. Supervisors should think critically about the unique skills required for success in various roles.

146

Applied behavior analysis advanced guidebook

Examples of skills required for success include diversity, equity, and inclusion (Conners et al., 2019), compassionate care and therapeutic relationships (LeBlanc, Taylor, & Marchese, 2020), ethics (Britton et al., 2021), and interdisciplinary collaboration (Lindbald, 2021). Be intentional with efforts to foster these skills and embed opportunities for learning them. If your knowledge of a topic is limited, arrange for outside consultation, experiences, and supervision, and learn along with your supervisee.

Practice 10: Transition thoughtfully. All supervisory relationships will end, either because an individual becomes certified or due to other transition circumstances (e.g., an employment change or relocation). Section 4 of the BACB's Ethics Code for Behavior Analysts focuses on the supervisor's responsibility to supervisees and trainees. A specific subsection, 4.12, requires supervisors to appropriately terminate supervision (BACB, 2020). Thoughtful transitions involve collaborating with the supervisee, the next supervisor, and other relevant parties to ensure the experience is positive for all. Appendix E provides a sample transition checklist for terminating supervision at all levels.

Recommendations for research inquiry

The behavior analytic supervision literature has produced several recommended practices. The past 10 years of literature on this topic have also produced meaningful resources for supervisors. However, most of these practices and resources have yet to be empirically validated. Though some authors have assessed the state of current supervisory practice via surveys (e.g., Hajiaghamohsen et al., 2021), there is a paucity of empirical support for the use of recommended practices and how they influence important outcomes. Valentino (2021) recommends four broad categories for supervision exploration and a possible progression of study for future researchers to follow. These categories are to (1) define high-quality supervision, (2) confirm that the definition is accurate, (3) determine important variables, and (4) evaluate the effects of supervision on those variables. Within these broad areas, Valentino suggests researchers (a) experimentally evaluate existing recommendations, (b) define supervisory behavior, (c) identify effective training methods, (d) develop assessment tools, (e) establish supervision outcomes, (f) develop new models and frameworks, and (g) establish maintenance and generalization teaching procedures. These recent suggestions are still valid and could keep the interested researcher busy for a long time.



Valentino also provides specific research questions linked to each of these suggestions for future researchers to ask and answer. We encourage the interested reader to reference that list of questions, and we have expanded upon that list with new and updated research inquiries below:
(1) Examine ways to improve ethical training during supervision.
(2) Determine an approach to assess ethical competence.
(3) Establish supervisory strategies to teach cultural competency.
(4) Evaluate supervisory strategies to increase competence in working with diverse populations, and how this knowledge translates into clinical practice.
(5) Extend research on the role of the supervisee in successful supervision.
(6) Empirically validate large-scale organizational supervision systems and their effects on important outcomes (e.g., client progress, supervisee competence).
(7) Establish and evaluate training models to create competency in collaborative skills.
(8) Investigate the effectiveness of interventions to target specific supervisor repertoires (e.g., providing feedback, addressing performance issues).

Summary

The profession of behavior analysis is expanding at a rapid pace, necessitating an increase in supervision at all levels of practice. The quality of supervision can influence all facets of our work, making the decision to supervise a critical one for potential supervisors. Over the past ten years, the supervision literature has developed, with early literature focusing on the practices associated with high-quality supervision and more recent literature offering precise guidance in areas such as ethics and virtual support. In this chapter, we summarized logistical, practical, and ethical considerations that must be made prior to providing supervision. We provided the reader with ten practice recommendations and ideas for future research inquiry within the topic of behavior analytic supervision. As the need for behavior analytic services continues to grow, so will the need for behavior analytic supervision. Supervision is a critical tool for demonstrating positive clinical outcomes in the clients we serve, creating professional career paths for our community of workers, and demonstrating the effectiveness of our science to society.


Appendix A Supervisor checklist for relationship building and maintenance

- Get to know your supervisee by learning some information about their life (e.g., where they grew up, how they discovered behavior analysis, their hobbies).
- Establish clear expectations at the beginning of supervision (e.g., on-time arrival to meetings, communication timeliness, preparedness).
- Engage in active listening by smiling, making eye contact, repeating phrases they say, and asking clarifying questions.
- Provide professional opportunities (e.g., presenting at a conference, completing an article review) as they arise.
- Address performance issues kindly, quickly, and directly.
- Use effective teaching strategies that are competency based and focused on critical skill building.
- Focus your attention on your supervisee during supervision interactions by removing all distractions (e.g., cell phone, email).
- Clearly communicate any changes in supervision structure, timing, or transitions.
- Use a contract that documents all commitments by both parties and clearly outlines contingencies for different behaviors (e.g., how and when supervision will end).
- Follow through with project, task, and communication commitments.

Appendix B Sample supervision roadmap

[The original appendix is a graphic: a supervision timeline spanning the accrual of supervised fieldwork hours (100 through 1500, in blocks of approximately 100 hours), with alphanumeric competency codes (A1–A32, C1–C34) from the Supervision Competencies Checklists distributed across the blocks, flanked by pre-accrual items on the left and post-accrual items on the right, both labeled "critical items to facilitate a strong and organized supervision experience." The recoverable text follows.]

This graphic is intended to be used by BCBAs supervising those accruing fieldwork hours. Use the graphic in conjunction with the Supervision Competencies Checklists (Associate Clinician and Clinician), the Supervisor Manual, and the supervision hours tracking spreadsheet throughout the entire supervisory relationship.

Prior to accrual of hours. Before any hours are accrued, you must start the professional relationship appropriately by completing the items in the pink area on the left side of the graphic:
1) Establish the relationship IF you are an eligible supervisor (8-hour training; CEUs; online module) AND the supervisee is enrolled in a course that counts toward the coursework, or has already completed one, AND has completed their online module.
2) Sign a TBH contract with the supervisee and send a copy to Aimee Holley at [email protected]; keep one for yourself and give one to the supervisee.
3) Deliver the competency checklists to the supervisee.
4) Establish a regular meeting schedule (at least every 2 weeks).
5) Train the supervisee to use the spreadsheet and tracking forms to track the accrued and remaining hours. Make sure supervisor and supervisee both keep a copy of each signed form.

During accrual. Throughout each block of approximately 100 hours of supervised experience, ensure that there is a balance and appropriate tracking of direct implementation time and indirect time (defined below) and that the appropriate amount of supervision is provided for the number of accrued experience hours. You should CONTINUE TO meet and track hours beyond the required total, because if there was some error in your calculations, additional hours will have to be accrued and this will give you a head start. If the supervisee is accruing hours toward the BCaBA, your supervision will be completed at 1000 hours.

After the accrual of hours. At the end of accrual of hours, complete the items in the pink area on the right to complete documentation and facilitate success on the upcoming exam:
1) Sign the EVF (experience verification form).
2) Help the supervisee with the application.
3) Get the supervisee into the study group.
4) Organize signature forms and ensure the spreadsheet is accurate in case of audit.

Direct implementation time versus indirect time. [The graphic quotes a definition of direct implementation time, truncated in the source, ending] "… clients." Thus, activities such as instruction or behavior management with a consumer (e.g., discrete trial teaching, naturalistic teaching, shaping, chaining, implementation of a behavior intervention plan, token economy implementation, data collection during teaching) would count in the light purple (direct) area. Assessment activities (e.g., preference assessment, functional assessment, VB-MAPP or other curricular assessment), data analysis and graphing, program development and writing (e.g., creating or revising the token economy, creating or revising a data sheet or program), and other nondirect treatment implementation activities would count in the blue (indirect) area.



This timeline links back to a greater competency system based on an early version of the BACB’s task list. Each alphanumeric code links to a specific competency. The timeline provides the reader with an example of how to structure supervision time and activities when supervising an individual in fieldwork as they work toward becoming a BCBA or BCaBA.

Appendix C Sample question and teaching activity from Trumpet Behavioral Health’s Clinical Competency Assessment and Support System (the TBH C-CASS).


Appendix D Critical tracking elements and visual of system to track RBT supervision requirements across multiple locations, supervisors, and schedules

Critical elements that should be in place for tracking supervision:
1. Automation
2. Creation of rules for supervision scheduling that are aligned with regulatory requirements
3. Frequent (daily or weekly) report generation that aggregates data across all locations
4. Regular data review
5. Regular feedback and planning to ensure requirements are met
6. Frequent review of the data by the responsible coordinator and supervisors
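Elements 1 through 3 (automation, rule creation, and aggregated reporting) can be sketched in a few lines of code. The following is a minimal, hypothetical illustration only: the data layout, names, and the 5% supervision threshold are assumptions for the example, not Trumpet Behavioral Health's actual system or necessarily the current BACB requirement. It aggregates monthly records across locations and flags any RBT whose supervised hours fall below the required fraction of hours worked.

```python
from dataclasses import dataclass

# Hypothetical monthly record for one RBT at one location.
@dataclass
class MonthlyRecord:
    rbt: str
    location: str
    hours_worked: float
    hours_supervised: float

# Illustrative rule: supervision must cover at least 5% of hours
# worked each month (verify against current BACB requirements).
REQUIRED_FRACTION = 0.05

def flag_shortfalls(records):
    """Return {rbt: supervised fraction} for RBTs who fail the rule,
    aggregating hours across all of their locations."""
    totals = {}
    for r in records:
        worked, supervised = totals.get(r.rbt, (0.0, 0.0))
        totals[r.rbt] = (worked + r.hours_worked,
                         supervised + r.hours_supervised)
    return {rbt: supervised / worked
            for rbt, (worked, supervised) in totals.items()
            if worked > 0 and supervised / worked < REQUIRED_FRACTION}

records = [
    MonthlyRecord("R. Lee", "Denver", 120, 7.0),  # 5.8%: meets rule
    MonthlyRecord("J. Kim", "Austin", 80, 3.0),   # aggregated below:
    MonthlyRecord("J. Kim", "Remote", 20, 0.5),   # 3.5 / 100 = 3.5%
]
print(flag_shortfalls(records))  # {'J. Kim': 0.035}
```

A report like this, generated daily or weekly, gives the responsible coordinator the aggregated, cross-location view that the critical elements above call for.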

Sample visual:



Appendix E Sample transition checklist for terminating supervision at all levels

- Give advance notice. Give your supervisee as much advance notice as possible that supervision is ending so they can arrange for a new supervisor, organize paperwork, and ask questions.
- Complete all relevant paperwork. If your supervisee is an aspiring BCBA, complete your portion of the experience verification form during the last supervision session. For all credentials, be sure both you and your supervisee maintain documentation of all supervision completed to date.
- Review the reason for the transition. Review the reason for the transition, as outlined in your contract. If supervision is being terminated due to performance issues, clearly and honestly articulate this information to your supervisee.
- Establish future expectations. Let your supervisee know what they can and cannot rely on you for in the future. For example, they may be welcome to contact you for a letter of reference but may not contact you to discuss a specific case or receive direct clinical supervision.
- Speak with the next supervisor. If you know who the next supervisor will be, arrange a time to communicate with them about your supervision experience and productivity. Share where your supervisee left off, where their strengths and weaknesses lie, and how you were supporting them when supervision ended. Share all documentation and progress with the new supervisor.
- Conduct a "crossover" supervision session. If possible, arrange for a session where you and the new supervisor meet with the supervisee together. Facilitate a conversation about the transition, the supervisee's work style, expectations, and history in supervision.
- Exchange contact information. If you wish to stay in contact, exchange contact information with your supervisee so they may contact you in the future with requests for references or professional development support. You may wish to connect with them on a professional social media platform, such as LinkedIn™.
- Wish them well. Sincerely and honestly wish your supervisee well on their professional journey.
- Maintain all documentation. It is your responsibility to maintain accurate and complete supervision documentation for at least 7 years and as otherwise required by law.

References

Andzik, N. R., & Kranak, M. P. (2021). The softer side of supervision: Recommendations when teaching and evaluating behavior-analytic professionalism. Behavior Analysis: Research and Practice, 21(1), 65–74. https://doi.org/10.1037/bar0000194.
Barkaia, A., Stokes, T. F., & Mikiashvili, T. (2017). Intercontinental telehealth coaching of therapists to improve verbalizations by children with autism. Journal of Applied Behavior Analysis, 50, 582–589.
Barnes, C. S., Mellor, J. R., & Rehfeldt, R. A. (2014). Implementing the verbal behavior milestones assessment and placement program (VB-MAPP): Teaching assessment techniques. The Analysis of Verbal Behavior, 30, 1–12.
Behavior Analyst Certification Board. (2020). Ethics code for behavior analysts. https://bacb.com/wp-content/ethics-code-for-behavior-analysts/.


Boivin, N., Ruane, J., Quigley, S. P., Harper, J., & Weiss, M. J. (2021). Interdisciplinary collaboration training: An example of a preservice training series. Behavior Analysis in Practice, 14, 1223–1236. https://doi.org/10.1007/s40617-021-00561-z.
Britton, L. N., Crye, A. A., & Haymes, L. K. (2021). Cultivating the ethical repertoires of behavior analysts: Prevention of common violations. Behavior Analysis in Practice, 14, 534–548. https://doi.org/10.1007/s40617-020-00540-w.
Brodhead, M. T., Quigley, S. P., & Cox, D. J. (2018). How to identify ethical practices in organizations prior to employment. Behavior Analysis in Practice, 11, 165–173. https://doi.org/10.1007/s40617-018-0235-y.
Conners, B., Johnson, A., Duarte, J., Murriky, R., & Marks, K. (2019). Future directions of training and fieldwork in diversity issues in applied behavior analysis. Behavior Analysis in Practice, 12, 767–776. https://doi.org/10.1007/s40617-019-00349-2.
Cox, D. J. (2020). A guide to establishing ethics committees in behavioral health settings. Behavior Analysis in Practice, 13, 939–949. https://doi.org/10.1007/s40617-020-00455-6.
Dubuque, E. M., & Dubuque, M. L. (2018). Guidelines for the establishment of a university-based practical training system. Behavior Analysis in Practice, 11(1), 51–61. https://doi.org/10.1007/s40617-016-0154-8.
Grenny, J., & Patterson, K. (2021). Crucial conversations: Tools for talking when stakes are high. New York: McGraw Hill.
Hajiaghamohsen, Z., Drasgow, E., & Wolfe, K. (2021). Supervision behaviors of board certified behavior analysts with trainees. Behavior Analysis in Practice, 14, 97–109. https://doi.org/10.1007/s40617-020-00492-1.
Hartley, B. K., Courtney, W. T., Rosswurm, M., & LaMarca, V. L. (2016). The apprentice: An innovative approach to meet the behavior analyst certification board's supervision standards. Behavior Analysis in Practice, 9(4), 329–338. https://doi.org/10.1007/s40617-016-0136-x.
Helvey, C. I., Thuman, E., & Cariveau, T. (2022). Recommended practices for individual supervision: Considerations for the behavior-analytic trainee. Behavior Analysis in Practice, 15, 370–381. https://doi.org/10.1007/s40617-021-00557-9.
LeBlanc, L. A., & Luiselli, J. K. (2016). Refining supervisory practices in the field of behavior analysis: Introduction to the special section on supervision. Behavior Analysis in Practice, 9(4), 271–273. https://doi.org/10.1007/s40617-016-0156-6.
LeBlanc, L. A., Onofrio, O. M., Valentino, A. L., & Sleeper, J. D. (2020). Promoting ethical discussions and decision-making in a human service agency. Behavior Analysis in Practice, 13, 905–913. https://doi.org/10.1007/s40617-020-00454-7.
LeBlanc, L. A., Taylor, B. A., & Marchese, N. V. (2020). The training experiences of behavior analysts: Compassionate care and therapeutic relationships with caregivers. Behavior Analysis in Practice, 13, 387–393. https://doi.org/10.1007/s40617-019-00368-z.
Lerman, D. L., LeBlanc, L. A., & Valentino, A. L. (2015). Evidence-based application of staff and caregiver training procedures. In H. Roane, J. Ringdahl, & T. Falcomata (Eds.), Clinical and organizational applications of applied behavior analysis (pp. 321–351). San Diego, CA: Elsevier.
Lindbald, T. L. (2021). Ethical considerations in clinical supervision: Components of effective clinical supervision across an interprofessional team. Behavior Analysis in Practice, 14, 478–490. https://doi.org/10.1007/s40617-020-00514-y.
Ninci, J., Colic, M., Hogan, A., Taylor, G., Bristol, R., & Burris, J. (2021). Maintaining effective supervision systems for trainees pursuing a behavior analyst board certification during the COVID-19 pandemic. Behavior Analysis in Practice, 14, 1047–1057. https://doi.org/10.1007/s40617-021-00565-9.
Parry-Cruwys, D., Atkinson, R., & MacDonald, J. (2021). Teaching graduate students to identify and adhere to practicum requirements. Behavior Analysis in Practice. https://doi.org/10.1007/s40617-021-00571-x.




Parsons, M. B., Rollyson, J. H., Iverson, J., & Reid, D. H. (2012). Evidence-based staff training: A guide for practitioners. Behavior Analysis in Practice, 5(2), 2–11.
Schulz, A., & Wilder, D. A. (2022). The use of task clarification and self-monitoring to increase affirmative to constructive feedback ratios in supervisory relationships. Journal of Organizational Behavior Management. https://doi.org/10.1080/01608061.2021.2019168.
Sellers, T. P., LeBlanc, L. A., & Valentino, A. L. (2016). Recommendations for detecting and addressing barriers to successful supervision. Behavior Analysis in Practice, 9(4), 309–319. https://doi.org/10.1007/s40617-016-0142-z.
Sellers, T. P., Valentino, A. L., Landon, T., & Aielo, S. (2019). Board certified behavior analysts' supervisory practices: Survey results and recommendations. Behavior Analysis in Practice, 12, 536–546. https://doi.org/10.1007/s40617-019-00367-0.
Sellers, T. P., Valentino, A. L., & LeBlanc, L. A. (2016). Recommended practices for individual supervision of aspiring behavior analysts. Behavior Analysis in Practice, 9(4), 274–286. https://doi.org/10.1007/s40617-016-0110-7.
Shuler, N., & Carroll, R. A. (2018). Training supervisors to provide performance feedback using video modeling with voiceover instructions. Behavior Analysis in Practice. https://doi.org/10.1007/s40617-018-00314-5.
Simmons, C. A., Ford, K. R., Salvatore, G. L., & Moretti, A. E. (2021). Acceptability and feasibility of virtual behavior analysis supervision. Behavior Analysis in Practice, 14, 927–943. https://doi.org/10.1007/s40617-021-00622-3.
Turner, L. B., Fischer, A. J., & Luiselli, J. K. (2016). Towards a competency-based, ethical and socially valid approach to the supervision of applied behavior analytic trainees. Behavior Analysis in Practice, 9(4), 287–298. https://doi.org/10.1007/s40617-016-0121-4.
Valentino, A. L. (2021). Supervision and mentoring. In J. Luiselli, R. M. Gardner, F. L. Bird, & H. Maguire (Eds.), Organizational behavior management (OBM) approaches for intellectual and developmental disabilities.
Valentino, A. L., LeBlanc, L. A., & Sellers, T. P. (2016). The benefits of group supervision and a recommended structure for implementation. Behavior Analysis in Practice, 9(4), 320–328. https://doi.org/10.1007/s40617-016-0138-8.

Further reading

Behavior Analyst Certification Board. (2018). Supervision training curriculum outline (2.0). Littleton, CO: Author.
Carnegie, D. (1964). How to win friends and influence people. New York: Simon and Schuster.
DiGennero-Reed, F. D., & Henley, A. J. (2015). A survey of staff training and performance management practices: The good, the bad, and the ugly. Behavior Analysis in Practice, 8, 16–26. https://doi.org/10.1007/s40617-015-0044-5.
Sellers, T. P., Alai-Rosales, S., & MacDonald, R. P. (2016). Taking full responsibility: The ethics of supervision in behavior analytic practice. Behavior Analysis in Practice, 9, 299–308. https://doi.org/10.1007/s40617-016-0144-x.


CHAPTER 7

Applied behavior analysis and college teaching

Traci M. Cihon(a), Bokyeong Amy Kim(a), John Eshleman(b), and Brennan Armshaw(c)

(a) University of North Texas, Denton, TX, United States
(b) Retired Professor, Galesburg, IL, United States
(c) West Virginia University, Morgantown, WV, United States

Teaching is the expediting of learning; a person who is taught learns more quickly than one who is not.

B. F. Skinner, The Technology of Teaching

Behavior analytic efforts in education for the explicit purpose of teaching extend back to the beginnings of Applied Behavior Analysis (ABA), which could be marked as having an "official" starting date in 1968 with the inception of the Journal of Applied Behavior Analysis (JABA; see also Baer, Wolf, & Risley, 1968). B. F. Skinner also published The Technology of Teaching in 1968, in which he described two principal technologies and systems: teaching machines (Skinner, 1954) and programmed instruction (PI; cf. Holland & Skinner, 1961; Sidman & Sidman, 1965). However, the history of ABA predates the formation of JABA (e.g., Ayllon & Michael, 1959), with roots going as far back as 1948, when Dr. Sidney Bijou began working at the Child Development Institute at the University of Washington. Behavior analytic efforts in education extend back that far as well: the idea of building a teaching machine came to Skinner, for example, after visiting his daughter's fourth-grade classroom in 1954 (Molenda, 2008; Vargas, 2013). The point here is that (1) the history of behavior analytic efforts in education extends back as far as ABA itself, and (2) education (and learning) has been a primary area of application of behavioral principles (or at least started out that way). It could be said that at its most fundamental level, behavior analysis is perhaps the most comprehensive and empirically grounded account of how organisms learn. The purpose of the present chapter, however, is not to recap the history of behavioral efforts in education. Treatments of the histories of these

Applied Behavior Analysis Advanced Guidebook https://doi.org/10.1016/B978-0-323-99594-8.00007-6

Copyright © 2023 Elsevier Inc. All rights reserved.


accomplishments are recounted elsewhere. For example, Vargas and Vargas (1991) cover the history of programmed instruction and teaching machines, Austin (2000) reviewed the research related to applications of behavior analysis to college teaching, and Bernstein and Chase (2013) provide a more general overview of behavior analytic contributions to higher education. The histories also include more recent treatments, such as that provided by Johnson, Street, Kieta, and Robbins (2021; see also Johnson, 2016), which covers generative instruction. Instead, the purpose of this chapter is to provide a brief review and discussion of applications of behavior analysis in postsecondary settings, and particularly of teaching behavior analysis in these settings. We begin by defining teaching and describing some considerations regarding the measurement of its effects (i.e., learning) before introducing some of the practices common to higher education settings. Next, we introduce the critical features of instructional systems and innovations in behavioral education and a survey of behavior analytic strategies and tactics commonly employed and researched by behavior analysts in university settings. Finally, we provide some practical considerations for those teaching ABA to postsecondary students, along with some suggestions for integrating research and practice in behavioral approaches to college teaching and future directions.

What is teaching?

To begin, we must define teaching. From a behavioral perspective, we can define teaching as "changing behavior" (Vargas, 1977b). More specifically, teaching is a process that entails the arrangement of conditions under which behavior changes over time (Vargas, 1977a). Learning, of course, has long been deemed something of the flip side of teaching, and has been defined by Kimble (1961) as a "relatively permanent change to behavior over time brought about by reinforced practice" (p. 6). Variations of that definition refer to changes to behavior potentials; some definitions leave out over time, though the effect remains the same. Yet the phrase changing behavior may seem elusive. Behaviorally, it means several things in relation to teaching. First, it can mean shaping a new response or response class where none previously existed in a learner's repertoire (cf. Peterson, 2004; Skinner, 1938). Second, it can entail bringing a response under effective stimulus control (cf. Sidman, 2008; Skinner, 1933), including contextual control. Third, it can refer to increasing the fluency of existing but weak or fragile repertoires (cf. Binder, 1996; Bulla, Calkin, & Sawyer, 2021; Evans, Bulla, & Kieta, 2021), with the expressed purpose of increasing retention, endurance, application, and stability (Fabrizio & Moors, 2003). Fourth, it can mean expanding repertoires of learned behavior in coordinated or recombined ways (cf. Alessi, 1987), such as starting with chains and sequences of responses which, in combination with other repertoires, result in more complex learned skills like extended verbal repertoires, problem solving, or creativity (cf. Tiemann & Markle, 1990).

In all behavior analytic systems of instruction, the above implications result in various teaching strategies and tactics. As previously mentioned, Skinner developed both programmed instruction (PI; Holland & Skinner, 1961) and a method of delivering PI known as teaching machines (Skinner, 1954), with the expressed purpose of shaping responding. Another example is Direct Instruction (DI; Engelmann & Carnine, 1982), which concerns itself with effective stimulus control based on what it refers to as faultless communication. Both DI and derivations from PI carry this further to teaching concepts. Concept learning requires multiple exemplar instruction (cf. Stokes & Baer, 1977), but to be effective, it requires teaching nonexamples based on a conceptual analysis of what constitutes the variable attributes and critical features of the concept at hand (Layng, 2018). Precision Teaching (PT) added a missing component of fluency-building to the preceding methods, focusing on improving the retention of learned skills over time, which enables and ensures that the behavior change persists (Binder, 1996; Evans et al., 2021).
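Fluency in PT is typically quantified as a rate (counts per minute) rather than as percent correct. The following minimal sketch, with purely illustrative numbers, shows why rate preserves information that percent correct discards: two performances with identical accuracy can differ greatly in frequency.

```python
def percent_correct(correct, incorrect):
    """Accuracy as a percentage; note it has no unit of time."""
    return 100.0 * correct / (correct + incorrect)

def rate_per_minute(count, minutes):
    """Frequency: a count per unit of time."""
    return count / minutes

# Two hypothetical one-minute timings on the same material:
slow = {"correct": 9, "incorrect": 1, "minutes": 1.0}
fluent = {"correct": 45, "incorrect": 5, "minutes": 1.0}

for label, t in (("slow", slow), ("fluent", fluent)):
    pc = percent_correct(t["correct"], t["incorrect"])
    rc = rate_per_minute(t["correct"], t["minutes"])
    ri = rate_per_minute(t["incorrect"], t["minutes"])
    print(f"{label}: {pc:.0f}% correct, "
          f"{rc:.0f}/min correct, {ri:.0f}/min incorrect")
# Both performances are 90% correct, yet their correct rates
# (9/min vs 45/min) differ fivefold.
```

Recording correct and incorrect rates separately, as PT does, keeps both accuracy and speed visible; percent correct alone collapses them into a single dimensionless number.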
More complete systems of instruction, such as the Keller Plan or Personalized System of Instruction (PSI; Keller, 1968; discussed later in the chapter), the Morningside Model of Generative Instruction (MMGI; Johnson et al., 2021), and the Comprehensive Application of Behavior Analysis to Schooling (CABAS; Greer, 1997; Singer-Dudek, Keohane, & Matthews, 2021) help address two variables: (1) how more extensive repertoires can be generated over time, and (2) how instructional efficiency can be arranged in an actual classroom environment to accomplish behavior change on a larger scale. The effect of successful teaching is that the student, also known as a learner, develops the necessary skills or repertoire to accomplish something as a result of the designed instructional sequence. In the introductory chapter of The Technology of Teaching, Skinner (1968) noted that many metaphors used regarding teaching (e.g., the teacher imparts knowledge that the learner absorbs) do not refer to the effect of instruction: the change in the student's behavior. Rather, the metaphors lead to a popular understanding of teaching that refers to a teacher's actions without considering the effects on the learner. They support traditional teaching practices such as lectures or presentations to accomplish teaching; whether the student learns anything is of no apparent concern. Either no one checks on the effects of teaching, or the feedback obtained precludes examination of effect. At best, such views of teaching invoke a system whereby learners have to teach themselves (e.g., by studying). That in turn makes for a selective system in which only those individuals with good self-management and self-teaching skills, or who come from more privileged backgrounds, or both, are likely to succeed. Many other students will not succeed and seem to end up discarded by the system, perhaps accounting in part for rates of dropping out of school or of counter-control against it. This understanding of teaching, reflected in Skinner's (1968) metaphors and in common views of teaching that occlude behavior change, may support the preservation of a class-based societal system of education and life demarcated by inequity. A full examination of these cultural contingencies, however, has been presented elsewhere (e.g., Ellis & Magee, 2007) and goes beyond the scope of the present chapter. From a behavioral perspective, teaching involves "the arrangement of contingencies of reinforcement" (Skinner, 1968, p. 5) which expedite learning. However, defining teaching alone does not provide us with a measure of its effects (i.e., learning). College instructors need to be aware of the variations in measurement in behavior analytic systems as well as those most often employed in traditional educational settings, namely accuracy and percent correct. Some behavior analytic systems that derive from Skinner's early efforts have kept Skinner's (1959) rate-based measurement system. For example, around 1965, Ogden Lindsley was exploring how the basic datum of behavior analysis, rate of response, could be applied in educational settings.
Skinner (1959) observed early on that when organisms learn, the behavior change over time of a response or response class always entails a change in the rate or frequency of a response. Frequencies can increase, decrease, or remain the same over time. The last of these implies no change, hence no learning, and thus no teaching. When behavior frequencies increase or decrease, the change can be abrupt, immediate, or gradual over time, or possibly a combination of abruptness and gradualness, such as a quick initial jump to a behavior frequency followed by a gradual acceleration of its rate over time (see Lindsley, 1995). Historically, educators and psychologists have focused less on frequency and more on the accuracy of responding (Goodenough, 1934). This focus has carried over into ABA as well, with percent correct serving as a primary datum. Skinner also reverted to using percent correct in PI, there seemingly



Applications of behavior analysis to college teaching


being no practical way to bring the rate of response into those early systems. One of the contributions of PT, discussed later in this chapter, has been to monitor and record the rates of correct and incorrect responding simultaneously. This contribution preserves what Johnston and Pennypacker (1980) referred to as standard, absolute, and universal measures of behavior; percent is not technically a measure, as it lacks a unit (see Cooper, Heron, & Heward, 2020, definition of measurement). Of course, many behavior analytic methods and applications to teaching rely primarily on percent correct, with only occasional uses of rate of response. Why that is so extends beyond the scope of the present chapter. Suffice it to say that these methods or systems nevertheless incorporate principles of behavior and may have thus achieved levels of success beyond typical educational methods and systems (e.g., DI in Project Follow Through; Watkins, 1997; Headsprout; Layng, Twyman, & Stikeleather, 2003). In short, as college instructors determine what measurement system to employ to assess teaching effectiveness or efficiency, they must constantly ask themselves, "How do I know if the students have learned?"

It has long been said that college teaching is the only profession for which there is no professional training…would-be college teachers just start teaching.
B. F. Skinner, The Shame of the American Education System

Higher education

Admittedly, this chapter is restricted to a predominately Western, Educated, Industrialized, Rich, and Democratic (WEIRD; Henrich, Heine, & Norenzayan, 2010) orientation. Thus, readers should take care to consider the macrosystems (cf. Malott & Martinez, 2006) to which other higher education institutions belong before making broad generalizations to their setting(s). Although many of the same contingencies and practices may be in effect in postsecondary institutions that are part of macrosystems outside of the US education system, much of the work in behavior analysis and education has been in response to the metacontingencies, contingencies, and cultural practices common to postsecondary institutions in the US (see, for example, Boyce & Hineline, 2002; Malott, 2005; Michael, 1991). One early example was Skinner's (1984) paper, The Shame of the American Education System, in which he lamented the problems that prevent successful educational reform in the US. He concluded that the solutions exist in the behavioral technologies (e.g., PI, teaching machines) that have demonstrated that students can indeed learn more in less time, particularly when what will


be taught is defined, and students move through curricula that teach prerequisite and component skills and allow students to advance at their own pace (also see Geller, 1992). Later, Michael (1991) reported on the contingencies and practices common to college teaching, especially those employed in content-focused, general education, high-enrollment classes. He noted that the common practices—relying on text and/or lecture as the primary modes of transmitting course content and basing student grades on only one or just a few high-stakes examinations—often leave students to teach themselves on their own time. When considered in the context of the competing and oftentimes more preferred contingencies students face outside of class time and the fixed-interval scallop induced by infrequent exams (i.e., the procrastination scallop), many students learn very little in these courses and/or are often unsuccessful by the common metrics.

Instructional systems and innovations

In response to the metacontingencies, contingencies, and cultural practices common to the US educational system and to institutions of higher education that create barriers to student learning, several behavior analytic instructional systems and innovations in teaching and learning have been developed to help students learn. These efforts extend back to the early days of the basic science itself (i.e., the 1950s). Still other methods, some based on Skinner's measurement system and others based on principles derived from that science, have come about in more recent years. Table 1 provides a brief overview of some of the early behaviorally-driven teaching methods or systems, their approximate year of origin, who is generally credited with their development, and the current status thereof. We will, however, limit our discussion here to those that have been applied in postsecondary institutions.

Programmed Instruction

Programmed Instruction (PI), inspired by Skinner's (1954) teaching machines as a technology for its delivery, attempts to move a curriculum into a program of instruction (cf. Markle, 1964) that allows learners to master the material at their own pace (see Austin, 2000; Bernstein & Chase, 2013; Molenda, 2008 for reviews of PI, and Vargas & Vargas, 1991 for explicit instructions for the development and implementation of PI). Bernstein and Chase (2013) describe seven characteristics of PI: (1) defining what will be




Table 1  An overview of some of the early behaviorally-driven teaching methods or systems.

Method or system (acronym)                  Developer          Origin  Status
Programmed Instruction (PI)                 Skinner            1950s   Peaked 1969
Personalized System of Instruction (PSI)    Keller             1968    Faded out after 1980
Precision Teaching (PT)                     Lindsley           1965    Thriving, growing
Direct Instruction (DI)a                    Engelmann          1968    Ongoing, not growing
Interteaching                               Boyce & Hineline   1990    Ongoing, thriving
Generative Instruction (GI)a                Johnson            1980    Ongoing, thriving
CABASa                                      Greer              1980s   Ongoing, growing

a Note: Method or system is included in an effort to be comprehensive but will not be expanded upon in this chapter, as the method or system is not frequently employed in postsecondary educational settings and instruction (though readers are encouraged to consider extensions of these methods and systems to these settings and learners). Terms included to differentiate the current status of each method or system are based on the authors' histories in behavior analysis and its applications to education as well as the continued (or not) presence of the method or system in the extant literature.

taught (i.e., what are the specific learning outcomes), (2) collecting and organizing the content, specifically “terms, laws, principles, and cases” (p. 524; e.g., what are the concepts, the critical features thereof, examples, and nonexamples necessary), (3) creating clear and interesting presentations of the materials, (4) organizing the content such that students progress through the material in small steps (i.e., shaping the new repertoire), (5) building in objective measures of student performance (i.e., assessing student learning at each step), (6) designing the sequence of instruction such that learners move forward only after performance measures indicate mastery of the previous step, and (7) supporting students’ learning at their own pace (i.e., self-paced instruction). Although PI is rarely employed in institutions of higher education presently (see Fienup, Hamelin, Reyes-Giordano, & Falcomata, 2011; Root & Rehfeldt, 2021 as exceptions), there are numerous examples of PI texts related to content in behavior analysis (e.g., Miller, 2005). For instance, two of the most widely known PI texts covering behavioral content are Holland and Skinner’s (1961) Analysis of Behavior and Peterson’s (1978) Introduction to Verbal Behavior (republished as Peterson & Ledoux, 2014). Historically, many other disciplines, including the health care professions, also worked with behavior analysts or independently developed PI texts such as Sidman and Sidman’s (1965) Neuroanatomy: A Programmed Text, Vol. 1 and Wilcox’s (1964) Blood Pressure Measurement: A Programmed Notebook for Nurses.
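Characteristics (4) through (6) above (small steps, objective measures at each step, and advancement contingent on mastery) can be sketched as a simple gating rule. This is a hypothetical illustration; the criterion value and function name are invented, not taken from any published PI program:

```python
# Hypothetical sketch of PI characteristics (4)-(6): content is broken
# into small frames, each frame has an objective performance measure,
# and the learner advances only after meeting the mastery criterion on
# the current frame; otherwise the frame is repeated (self-paced).

MASTERY_CRITERION = 1.0  # proportion correct required to advance (invented)

def advance(frame_index: int, proportion_correct: float) -> int:
    """Return the next frame index: forward on mastery, repeat otherwise."""
    if proportion_correct >= MASTERY_CRITERION:
        return frame_index + 1
    return frame_index

print(advance(0, 0.5))  # 0 -> the learner repeats frame 0
print(advance(0, 1.0))  # 1 -> mastery demonstrated, advance to frame 1
```

Self-pacing falls out of the same rule: because the gate depends only on the learner's own performance, each learner reaches a given frame at a different time.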


Personalized System of Instruction

An extension of both Skinner's (1958) teaching machines and PI is Keller's (1968) Personalized System of Instruction (PSI). PSI includes five defining characteristics: (1) self-paced learning, (2) a mastery requirement (criterion-referenced grading), (3) an emphasis on written communication, (4) immediate consequences provided by peer proctors, and (5) lectures and demonstrations for motivational purposes only (Austin, 2000; Bernstein & Chase, 2013). In an effort to address the shortcomings of PI, PSI incorporates more human interaction, builds in various forms of feedback, including that provided by peer proctors, adds more flexibility to instruction, and considers learners' interests and motivation (Molenda, 2008). One early example is Johnston and Pennypacker's (1971) work in the design of advanced undergraduate psychology classes at the University of Florida. Following many of the tenets of PSI, they sought to differentiate their courses from the traditional lecture- and exam-based models. They designed their classes such that students frequently met with student managers, that is, advanced students who had previously taken the course and earned an A. During these meetings, enrolled students practiced engaging in verbal responses, answering fill-in-the-blank questions on flip cards, discussing errors, and engaging in conversations related to the unit-based content encountered in the texts and lectures. Students recorded their own rates of correct and incorrect responding on cumulative graphs, which were used to develop individual learning strategies. Grading was based on a mastery criterion that included both correct and incorrect responses. Students could continue to work until they met this criterion, which was also tied to earning an A in the course. Johnston and Pennypacker (1971) reported that these courses were met with "overwhelming popularity among students" (p. 244) and that the instructional arrangement was more preferred and effective than those traditionally employed in college courses. Unfortunately, the prevalence of PSI in university settings appears to have declined beginning in the 1980s. Instructors motivated to apply a PSI model in their courses face several barriers grounded in the contingencies and metacontingencies inherent in mainstream higher education systems and structures. These barriers include, but are not limited to, the labor-intensive nature of PSI-based courses and their expenses (e.g., for proctors; Austin, 2000; Bernstein & Chase, 2013). Additionally, practices specific to PSI, such as lectures for motivational purposes and an emphasis on student mastery, do not often fit with conventional university culture focused on




lectures to transmit information, or readily align with courses offered over fixed 8-, 10-, or 15-week terms, which are largely incompatible with the self-pacing and mastery-based learning inherent to PSI.

Total Performance System

The pioneering applications in PI and PSI, however, inspired further behavioral work in education. One example was the development of additional full-blown instructional systems and applications to advance human performance in organizations. Brethower (1970), in his dissertation, The Classroom as a Self-Modifying System, combined behavior analysis with a systems approach. He conceptualized the classroom and the interactional nature of teachers and students by way of feedback loops and reinforcement. This research laid the groundwork for the Total Performance System (Brethower, 1972) and for applications of behavior analysis to the development of human performance technologies in business and industry (i.e., behavior systems analysis; see Brethower, 2008; also see Houmanfar, Fryling, & Alavosius, 2021).

Interteaching

Another example is Boyce and Hineline's (1990) interteaching. Drawing from Keller's (1968) PSI, PT, peer-to-peer instructional strategies (e.g., Greenwood, 1997), and behavior analytic concepts and principles more generally, interteaching typically includes seven components: "preparation guides…, in-class discussions between two or more students, record sheets…, brief clarifying lectures…, reinforcement contingencies for discussion/prep guide completion, frequent assessments, and quality points" (Hurtado-Parrado et al., 2022, p. 158). Several studies have demonstrated the superior effects of interteaching relative to traditional lecture-based modes of instruction (e.g., Arntzen & Hoium, 2010; Saville, Zinn, & Elliott, 2005; Saville, Zinn, Neef, Van Norman, & Ferreri, 2006) and/or have examined various contingencies or components of interteaching and the corresponding effects on student learning (e.g., Filipiak, Rehfeldt, Heal, & Baker, 2010; Rieken, Dotson, Carter, & Griffith, 2018; Rosales, Soldner, & Crimando, 2014; Saville, Cox, O'Brien, & Vanderveldt, 2011; Saville & Zinn, 2009; Truelove, Saville, & Van Patten, 2013). A recent systematic review of 38 records of applications of interteaching in college courses suggests that interteaching leads to greater quiz and exam performance relative to traditional lectures (Hurtado-Parrado et al., 2022). Furthermore, the use of interteaching correlates with improved


social validity scores. These findings suggest that interteaching is more effective than, and preferred to, traditional lectures (Hurtado-Parrado et al., 2022). Interestingly, many of the studies included fewer than the full seven interteaching components, most commonly "prep guides, discussion, clarifying lectures, record sheets, and frequent probes" (Hurtado-Parrado et al., 2022, p. 176). Overall, the various combinations of components made minimal difference in the general outcomes of interteaching. Instead, discussions were the most critical component, especially when students were organized in pairs rather than in large groups. Working in pairs allowed students to compare their knowledge, revise previous content, and engage in critical thinking.

Behavioral analytic training systems and instructional technology labs

Other applications and innovations derived from PI and PSI manifested in the design, development, implementation, and evaluation of behavioral analytic training systems. This work included instructional technology labs in behavior analytic degree programs in institutions of higher education, many of which have strived to address barriers to student learning spurred by systemic and cultural contingencies common to institutions of higher education in the US.

Behavior Analytic Training System. Perhaps the longest standing and most notable example of behavioral analytic training systems and instructional technology labs in behavior analytic degree programs is Dick Malott's Behavior Analytic Training System (BATS; Malott, 2018). With its home at Western Michigan University, BATS is a university-based program designed to train science-based practitioners. Unlike scientist-practitioner focused programs that develop practitioners who conduct research, the science-based practitioner program trains practitioners who base their practice on research. The BATS program spans bachelor's, master's, and doctoral level training; however, the heart of the program is its master's degree. Students who enter the BATS master's program begin with a 10-week boot camp in the summer preceding their first semester in the program. Subsequently, BATS master's students lead discussions in the undergraduate Introduction to Behavior Analysis courses for two semesters, complete a five-semester practicum and autism-focused project, and undertake Organizational Behavior Management (OBM) tasks within the various BATS systems. Doctoral students assist the BATS program faculty with supervising master's students, managing the practicum experiences, and supporting the BATS system more generally. Many of the BATS master's program alumni become




Board Certified Behavior Analysts (BCBAs). Though few pursue doctoral programs, BATS master's-level graduates are renowned in the discipline as practitioners who excel in behavior training, staff training, and OBM skills. The undergraduate BATS program also serves as an effective recruitment system for the master's program. BATS' emphasis on experiential learning, which is grounded in opportunities to apply basic behavioral concepts from courses to laboratory-based experiences and in applied settings through the autism-focused practicum, promotes a diverse behavioral perspective and a competent skill set. Students who excel in the undergraduate BATS program can pursue an honors thesis, and many of these students also pursue the master's degree.

Self-Paced, Personalized, Interactive and Networked System. Another example is Ramona Houmanfar's Self-Paced, Personalized, Interactive and Networked (SPIN) system (Self-Paced Psychology 101, n.d.). Established in 1994 for the Psychology 101 students at the University of Nevada, Reno (UNR), the SPIN system is coordinated by graduate and undergraduate behavior analysis students who are also members of the Learning Lab. The SPIN system is perhaps one of the few remaining systems of instruction that maintains many of the original features of PSI. In addition to the system being self-paced and mastery-based, student proctors provide opportunities for frequent feedback and additional discussion, as well as motivational lectures to introduce undergraduate students to the course material.

Teaching Assistant & Teaching Fellow System/Teaching Science Lab. The last example, also established in 1994, is Sigrid Glenn's Teaching Assistant/Teaching Fellow (TA/TF) system, developed to support the undergraduate introduction to behavior analysis courses at the University of North Texas (UNT; cf. Cihon, Kieta, & Glenn, 2018).
The initial course design was inspired by Keller's PSI; courses were taught and supported by graduate students in the behavior analysis master's program who served as teaching assistants (TAs) and teaching fellows (TFs) under the supervision of a faculty member at the then Center for Behavioral Studies. TFs were selected from graduate students who had served as a TA for at least two semesters. TFs served as lecturers and as supports for TAs, who assisted in developing in-class exercises, tutoring, testing, and retesting. Classes met twice a week. The first class session consisted of a lecture supported with guided notes and small group discussions to encourage active student responding (ASR); the second class session focused on testing and tutoring. Student performance was assessed with weekly in-class quizzes, a midterm, and a final


exam, all of which were computerized and programmed to provide immediate feedback on correct and incorrect responses. Initially, the faculty supervisor for the TA/TF system held weekly lab meetings; however, as the system grew and the course design became more stable, the TA/TF system evolved to require minimal faculty involvement. In 2010, Traci Cihon took over the TA/TF system. In 2014, she committed to a significant course redesign and a slight reorganization of the TA/TF system to support an increasing number of students enrolling in the courses and to accommodate the growing interest from students joining the TA/TF system. Maintaining graduate students' responsibilities in course design, delivery, and evaluation of instruction, the lab was renamed the Teaching Science Lab (TSL; cf. Cihon et al., 2018) and was re-conceptualized. In addition, the introduction to behavior analysis course sequence was redesigned to support undergraduate students in achieving the following learning outcomes: (1) to shift in and out of viewing the world through the lens of the three-term contingency, (2) to become fluent in basic terms and definitions related to principles and procedures in behavior analysis, (3) to apply basic behavior analytic principles and procedures to produce behavior change in their own repertoires if desired, and (4) to discuss and dispel common misconceptions about behavior analysis. Moreover, both undergraduate and graduate students were provided with opportunities to teach behavior analysis with behavior analytic methods, to evaluate their teaching practices and outcomes, and to share what they were learning about educational practices grounded in the principles of behavior with the larger community through publications and presentations.
These objectives were met through five primary instructional strategies and tactics: lectures with ASR, Generalized Problem Solving (GPS; Kieta, Cihon, & Abdel-Jalil, 2018), Say All Fast Minute Every Day Shuffled (SAFMEDS; Adams, Cihon, Urbina, & Goodhue, 2018; Adams, Eshleman, & Cihon, 2017; Lovitz, Cihon, & Eshleman, 2020; Urbina, Cihon, & Baltazar, 2021), the Individual Descriptive and Exploratory Analysis (IDEA) project (Armshaw, Cihon, & Lopez, 2021), and the Portable Operant Research and Training Lab (PORTL; Goodhue, Liu, & Cihon, 2019; each of the course components is described in full in Cihon et al., 2018, and all but GPS are briefly summarized in this chapter).

Themes & tactics in contemporary behavioral approaches to college teaching

The historical focus on the design of instructional systems, instructional design, or even course design in applied behavior analytic approaches to




college teaching has become less common over time (though see Heward & Twyman, 2021a, 2021b). More recently, behavioral approaches to college teaching have emphasized explorations of specific tactics that address common challenges in student behavior (e.g., procrastination and motivation), increase student responding and engagement, and expedite the acquisition of specific responses, classes of responses, and more. Other tactics have also been developed to facilitate students' applications of behavioral concepts and principles.

Procrastination & motivation

Since Michael's (1991) conceptualization of student procrastination, several researchers have explored different contingency arrangements to combat this behavior. Perrin et al. (2011), for instance, evaluated the comparative effects of contingent and noncontingent access to practice quizzes on the distribution of studying behavior. Their results indicated that studying was more evenly distributed when access to additional practice quizzes was contingent upon completion of previous study materials. Bird and Chase (2021) replicated the results of Perrin et al., finding that studying behavior was better distributed with contingent access to practice materials but that students preferred noncontingent access. Other efforts to address student procrastination have included adaptations to PSI (e.g., Wesp & Ford, 1982) and the use of specific rules in assignment descriptions (Johnson, Perrin, Salo, Deschaine, & Johnson, 2016). Other researchers have focused on the effects of game-based instruction or assessments on students' motivation, attendance, preference, and/or quiz scores. Neef et al. (2007), for example, evaluated the differential effectiveness of traditional, student-directed question-and-answer-based study sessions and game-based study sessions on attendance and weekly quiz scores. Their results suggested that attending a study session, regardless of the study session format, leads to improved student performance as compared to peers who did not attend a study session. Extending Neef et al. (2007), Neef, Perrin, Haberlin, and Rodrigues (2011) compared a student-directed game-based study session to an attention control condition on students' weekly quiz scores. The results suggested that following study sessions, weekly quiz scores were both educationally and statistically significantly better than weekly quiz scores following the control condition.
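The contingent-access arrangement evaluated by Perrin et al. (2011) can be caricatured as a simple gate. This is a hypothetical sketch; the function and unit structure are invented for illustration, not taken from their actual course materials:

```python
# Hypothetical sketch of contingent access: the practice quiz for a unit
# unlocks only when all earlier units' study materials are complete; in
# the noncontingent condition, every quiz is open from the start.

def available_quizzes(completed_units: set, n_units: int,
                      contingent: bool = True) -> list:
    """Return the unit numbers whose practice quizzes are accessible."""
    if not contingent:              # noncontingent: everything is open
        return list(range(1, n_units + 1))
    open_quizzes = []
    for unit in range(1, n_units + 1):
        prerequisites = set(range(1, unit))      # all earlier units
        if prerequisites <= completed_units:     # contingency is met
            open_quizzes.append(unit)
        else:
            break                                # later units stay locked
    return open_quizzes

print(available_quizzes({1, 2}, 5))                    # [1, 2, 3]
print(available_quizzes({1, 2}, 5, contingent=False))  # [1, 2, 3, 4, 5]
```

The gate makes access to a preferred consequence contingent on completing the current step, which is the arrangement associated with more evenly distributed studying in the studies above.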
Gist, Andzik, Smith, Xu, and Neef (2019) further extended this line of research, evaluating students’ quiz scores when quizzes were administered privately and were not game-based; that is, students took quizzes on a computer or personal device


and were monitored by a graduate teaching assistant, or were game-based and administered in the classroom in a group (i.e., Kahoot!). The results showed increases in quiz scores over the course of the semester; however, there were no differences in quiz scores based on quiz format. Moreover, students reported a preference for the private quiz format.

Participation, engagement, & responding

Attempts to increase student participation, engagement, and active student responding (ASR) have been a strong focus for many behavior analytic educators. This focus is likely fueled by the positive correlation between student engagement and academic performance (cf. Greenwood, Delquadri, & Hall, 1984; Heward, 2022), the frequent need to provide high-quality instruction in large-group settings (Heward, 1994; Twyman & Heward, 2016), and the expansive literature supporting the use of these tactics across instructional settings, levels, students, and content (Heward, 2022). Tactics to promote ASR include (a) choral responding, defined by Twyman and Heward (2016) as "students responding orally in unison to a series of questions presented by the teacher" (p. 800), (b) guided notes, or "teacher-prepared handouts that 'guide' a student through a lecture with standard cues and specific spaces in which to write key facts, concepts, and/or relationships" (p. 82), and (c) response cards, or "cards, signs, or items that students hold up to display their answers to teacher-posed questions or problems" (p. 80). These strategies have their roots in DI (see Engelmann & Carnine, 1982; Heward & Twyman, 2021a, 2021b). Twyman and Heward suggest that ASR is most effective when it incorporates a common framework which includes "sound instructional design…high rates of relevant learner responses with contingent feedback…and ongoing instructional decision-making based on direct and frequent measures of student performance" (p. 79). Several scoping reviews and papers that detail how to create and/or use ASR in the classroom have been published (e.g., Heward, 2022; Marsh, Cumming, Randolph, & Michaels, 2021; Twyman & Heward, 2016); therefore, this section will provide, when available, examples of their applications in the college classroom.
Choral responding has decades of support for its use, especially in K-12 settings (see Heward, 2022; Twyman & Heward, 2016). In the college classroom, choral responding has largely taken the form of electronic student response systems (e.g., clickers, Kahoot!) rather than oral responding. Bicard et al. (2008), for example, explored the differential effectiveness of an electronic student response system, questioning each student individually, and a no




questions condition in a graduate course for special educators on teaching methods. Their dependent measures included "accuracy of delayed recall of lecture material, accuracy of application of lecture material, and frequency and accuracy of student responding" (Bicard et al., 2008, p. 24). The results showed that students performed well regardless of the study condition; however, students performed better in both ASR conditions relative to the control condition. Like choral responding, much of the support for response cards has also been established in K-12 settings (see Marsh et al., 2021 for a meta-analysis), although several studies have demonstrated the effectiveness of response cards in university classrooms (e.g., Bulla, Wertalik, & Crafton, 2021; Clayton & Woodard, 2007; Kellum, Carr, & Dozier, 2001; Malanga & Sweeney, 2008; Marmolejo, Wilder, & Bradley, 2004; Zayack, Ratkos, Frieder, & Paulk, 2015). Bulla, Wertalik, and Crafton (2021), for instance, extended the research on response cards in the college classroom by evaluating the comparative effectiveness of two types of practice questions on students' daily quiz scores: asking students to recall definitions or asking students to differentiate between "examples and non-examples of concepts" (p. 133). Both forms of response cards improved students' quiz scores, with questions that asked students to differentiate between examples and nonexamples resulting in higher quiz scores and higher rates of responding than questions that asked students to provide definitions. Guided notes have often been incorporated in college teaching practices, not only as a tactic to increase ASR but also to support the development of college students' note-taking skills (e.g., Austin, Lee, Thibeault, Carr, & Bailey, 2002; Glodowski & Thompson, 2018; Neef, McCord, & Ferreri, 2006; Williams, Weil, & Porter, 2012).
Recently, Biggers and Luo (2020) conducted a review of 22 articles spanning 10 years of research on the use of guided notes with adult learners. Consistent with previous reviews summarizing the effectiveness of guided notes (see Konrad, Joseph, & Eveleigh, 2009; Larwin & Larwin, 2013), Biggers and Luo found that guided notes "can improve students [sic] learning gains, promote attendance, lead to better note-taking and attendance, and increase participation" (p. 19), and that students like them.

Teaching more in less time

The systems-level contingencies in university settings, usually 15-week semesters and limited time in synchronous learning environments, often present instructors with the challenge of how to maximize their instructional


efficiency. Given the university-imposed restrictions on instructional time and course duration, several tactics have emerged in the behavioral education research as ways in which course instructors can teach more in less time.

Equivalence-Based Instruction. One example of a tactic developed to combat these challenges is an emerging line of research based on stimulus equivalence (Sidman, 1994; Sidman & Tailby, 1982). Equivalence-based instruction (EBI), "Typically using match-to-sample (MTS) procedures…teaches learners to treat physically disparate stimuli as functionally interchangeable by training overlapping conditional discriminations. Instructors arrange contingencies to teach the respective conditional discriminations in order and to mastery" (Brodsky & Fienup, 2018, p. 96). Brodsky and Fienup (2018) conducted a meta-analysis of research on EBI in higher education settings. They were interested in evaluating its general effectiveness and its effectiveness relative to other instructional strategies and tactics, and in identifying specific parameters of EBI that lead to more effective student learning. The results of their analysis suggest that while EBI has demonstrated effectiveness in teaching a "wide range of academically relevant concepts to college students" (p. 110), much of the research has been conducted in highly controlled settings rather than in the college classroom, and the results of some studies did not indicate that EBI was more effective than traditional lectures in supporting students' academic gains. Several studies on EBI in higher education have been conducted since Brodsky and Fienup's (2018) review, providing further evidence on the effectiveness of EBI to teach a variety of skills (e.g., Blair et al., 2019; Blair, Shawler, Albright, & Ferman, 2021; Gallant, Reeve, Reeve, Vladescu, & Kisamore, 2021; Longo et al., 2022; Ostrosky et al., 2022).
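The efficiency of EBI comes from derived relations: after directly training overlapping conditional discriminations such as A→B and B→C, symmetry and transitivity predict untrained relations (B→A, C→B, A→C, C→A) without further instruction. The logic can be sketched abstractly (a hypothetical illustration; the stimulus labels and the closure routine are invented, and actual EBI uses MTS trials rather than symbol lists):

```python
# Hypothetical sketch of the derived-relations logic behind EBI: close a
# set of directly trained sample->comparison relations under symmetry and
# transitivity to list the relations a learner is predicted to derive.
# Identity (reflexive) relations are omitted for brevity.

def equivalence_closure(trained: set) -> set:
    """Return the trained relations plus all symmetric/transitive ones."""
    relations = set(trained)
    changed = True
    while changed:
        changed = False
        for a, b in list(relations):
            if (b, a) not in relations:          # symmetry
                relations.add((b, a))
                changed = True
            for c, d in list(relations):         # transitivity
                if b == c and a != d and (a, d) not in relations:
                    relations.add((a, d))
                    changed = True
    return relations

# Train only A->B and B->C; four more relations are derived, not taught.
trained = {("A", "B"), ("B", "C")}
derived = equivalence_closure(trained) - trained
print(sorted(derived))  # [('A', 'C'), ('B', 'A'), ('C', 'A'), ('C', 'B')]
```

Two trained discriminations thus yield a three-member equivalence class with six non-identity relations, which is the "more in less time" economy the studies above evaluate.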
These studies provide evidence supporting the effectiveness of EBI in both controlled settings (Blair et al., 2019, 2021; Longo et al., 2022) and in the college classroom (Gallant et al., 2021; Ostrosky et al., 2022). However, the results of studies that compared EBI to self-study conditions (Gallant et al.; Longo et al.) suggest that EBI is only moderately more effective. In addition, neither of the procedural variations explored, group contingencies or individualized instruction (Ostrosky et al., 2022), was more effective than the other. The same was true of the match-to-sample and stimulus-pairing yes/no EBI protocols (Gallant et al.); neither was more effective than the other. Additionally, neither Ostrosky et al. (2022) nor Gallant et al. (2021) found EBI to be more effective than traditional lectures or self-study, respectively. Nevertheless, Blair et al. (2021) and Longo et al. (2022) offered support for unique applications of EBI, notably its effectiveness in generating target repertoires with video-based stimuli and American Sign Language (ASL), respectively.

Applications of behavior analysis to college teaching

Precision Teaching. Derived from Skinner’s rate-based measurement system, Precision Teaching (PT) “is a behavior monitoring and measurement system” (Lovitz et al., 2020, para. 1). Unlike Skinner’s (1968) PI and Keller’s (1968) PSI, which are complete systems of instruction, “PT is a behavior monitoring and change system [that can] be applied to any existing instructional system” (Lovitz et al., para. 1). For example, PT can be applied to both PI and PSI (also see Lindsley, 1992). Its core features include (1) daily, direct, and continuous measures of a specific curricular skill, and (2) a focus on building fluency (Kubina, Morrison, & Lee, 2002). Evans et al. (2021) recently synthesized the “must have” features of PT, specifically (1) accelerating behavioral repertoires, (2) precise behavior definitions, (3) continuous observation, (4) dimensional measurement, (5) the Standard Celeration Chart (SCC), and (6) timely and effective data-based decisions. Although most behavior analytic applications to college teaching do not fully employ PT and all its critical features, one particular tactic grounded in PT, and especially in its emphasis on rate-building (cf. Doughty, Chase, & O’Shields, 2004) or fluency-based instruction (cf. Binder, 1996), has been of particular interest: Say All Fast Minute Every Day Shuffled (SAFMEDS). Developed as an extension of traditional flashcards, SAFMEDS are commonly set up in a “see-say” learning channel (cf. Haughton, 1972) in which the student sees the text (or image) on one side of the card (i.e., the input channel) and says a response that, if correct, corresponds to the text on the other side (i.e., the output channel). The most common example of SAFMEDS in the college classroom is to support students in learning the definitions of technical terms.
The terms are printed/written on the “see” side of the card and the definitions are printed/written on the “say” side of the card. The rules for practicing SAFMEDS align with its unique name: the learner must say the answer out loud (say), with the goal of going through the entire deck (all) as quickly as they can (fast), during a short timing, typically 1 min (minute). Students practice each day (every day) and shuffle the deck each time (shuffled). When the learner sees the front of the card and responds, the answer on the back of the card is checked immediately by the learner if practicing alone or by a partner if practicing in pairs. The card is then placed in either a “correct” or “incorrect” pile. This sequence continues until the timer sounds, at which point the numbers of correct and incorrect cards are counted and recorded so progress can be tracked
(cf. Lovitz et al., 2020). SAFMEDS are often used to promote fluency of intraverbals (see Cihon, 2007; Skinner, 1957) related to the development of a technical vocabulary in behavior analysts, mostly terms and definitions. Graf and Auman (2005) and Lovitz et al. (2020) provide detailed descriptions of how to design and use SAFMEDS.

Quigley, Peterson, Frieder, and Peck (2017) conducted a review of the literature pertaining to SAFMEDS with a specific focus on procedures and outcomes. Notably, not all studies reviewed were conducted in a university setting. Citing a total of 53 papers, 27 of which were data-based and peer-reviewed, Quigley et al. (2017) concluded that while SAFMEDS produce high levels of retention, endurance, and stability in the repertoires generated, the findings could be further supported by additional replications. Moreover, there were few comparison studies (but see Johnston, 1988) to lend support for the differential effectiveness of SAFMEDS over other instructional tactics employed to generate similar repertoires. They also suggested that further refinement or development of a standard set of procedures for SAFMEDS as an independent variable is needed (but see Lovitz et al., 2020, as tactics do not require a standard procedure per se).

Since the Quigley et al. (2017) review, three studies have been conducted on the use of SAFMEDS in college classrooms (Adams et al., 2018; Quigley et al., 2021; Urbina et al., 2021). In general, these studies focused on assessing the differential effectiveness of procedural variations on the use of SAFMEDS. Adams et al. (2018), for example, evaluated the comparative effectiveness of either cumulative practice (i.e., students practiced all the terms at once) or unitary practice (i.e., students practiced only those terms for a specific unit). The results suggested that students who engaged in unitary practice showed faster changes in the rate of correct responses and reached target frequency aims more quickly than students who engaged in cumulative practice, though both methods of practice resulted in increased frequencies of responding. Urbina et al. (2021) presented data from the use of SAFMEDS in four semesters of undergraduate introduction to behavior analysis courses as a descriptive analysis of the effects of various procedural manipulations made in response to student outcome measures collected each semester. In addition to detailing their decision-making processes, Urbina et al. provide suggestions for how course instructors might arrange the use of SAFMEDS to produce optimal student outcomes given their specific contexts. Quigley et al. (2021) compared a basic SAFMEDS procedure alone to a basic SAFMEDS procedure supplemented with procedural variations: (a) a cumulative or whole-deck presentation (similar to
that employed by Adams et al., 2018), (b) a unitary or incremental presentation of cards (also like Adams and colleagues), and (c) incorporating three 1-min warm-up sprints with either all or only some of the cards prior to the SAFMEDS timing. The results suggested that participants increased the number of correct responses and decreased incorrect responses regardless of the procedural variation; however, participants also showed greater increases in correct responses when at least one of the supplementary procedures was employed.

In a variation of SAFMEDS conducted in a “see-type” learning channel, Lovitz et al. (2020) evaluated the use of TAFMEDS (Type All Fast Minute Every Day Shuffled) in several undergraduate introduction to behavior analysis courses. A series of correlational analyses was conducted to evaluate the relations between daily practice and performance frequencies, and between performance frequencies and students’ retention, endurance, stability, application, and performance in a different learning channel with behavior analytic terminology. The results showed a correlation between daily practice and higher performance frequencies as well as between higher performance frequencies and retention, endurance, stability, and application.
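The timing-and-recording routine at the core of SAFMEDS can be sketched in a few lines. The deck contents and the scoring helper below are hypothetical illustrations for a single 1-min timing, not a standard SAFMEDS protocol.

```python
import random

# A toy deck of ('see' side, 'say' side) cards. A real course deck would
# contain every technical term; these three cards are illustrative only.
DECK = [
    ("SD", "discriminative stimulus"),
    ("MO", "motivating operation"),
    ("FI", "fixed-interval schedule"),
]

def run_timing(deck, responses, rng=random):
    """Score one 1-min timing. `responses` maps a card's 'see' side to the
    learner's answer; cards the timer cut off are simply absent. The deck
    is shuffled first (the S in SAFMEDS), and cards are sorted into
    correct/incorrect piles whose counts are recorded for charting."""
    rng.shuffle(deck)
    correct = incorrect = 0
    for see, say in deck:
        if see in responses:            # card attempted within the minute
            if responses[see] == say:
                correct += 1
            else:
                incorrect += 1
    return correct, incorrect

# One day's chart entry (plotted on the Standard Celeration Chart in
# full precision-teaching practice).
c, e = run_timing(list(DECK), {"SD": "discriminative stimulus",
                               "MO": "motivation"})
daily_log = [{"day": 1, "correct_per_min": c, "errors_per_min": e}]
```

Logging corrects and errors per minute each day is what makes the fluency aims and celeration-based decision rules discussed above possible.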

Other applications of behavior analysis to college teaching

Several additional areas of application of behavior analysis to education and with postsecondary students have also been explored, some of which are briefly described next.

Acceptance and Commitment Therapy. There has been a significant amount of research on the application of Acceptance and Commitment Therapy (ACT) to support postsecondary students’ general well-being, including a focus on mental and physical health and, in some cases, on their academic achievement. A full description of ACT is beyond the scope of the current chapter; however, interested readers can consult two recent meta-analyses, A-Tjak et al. (2015) and French, Golijani-Moghaddam, and Schröder (2017). Additionally, some recent studies include Katajavuori, Vehkalahti, and Asikainen (2021), Paliliunas, Belisle, and Dixon (2018), and Viskovich and Pakenham (2020).

Practical Skills Training. Another critical area of research and development in teaching behavior analysis to college students is the development of technologies that support the application of behavioral concepts and principles in changing behavior. A recent study conducted by Leaf et al. (2022) highlights the importance of practical skills training, especially for applied behavior analysts. Specifically, Leaf and colleagues showed that there
was no significant correlation between accuracy of responses on a multiple-choice behavior analysis certification preparation examination and accuracy of implementation in conditioning reinforcers, discrete trial training, or general behavioral interventions. Fortunately, behavior analysts have also developed technologies that support the use and application of behavioral concepts and principles in changing behavior. Early examples of these technologies often included laboratory-based experiences or behavior change projects built into introduction to psychology/behavior analysis courses as activities and exercises. In addition, newer technologies such as Behavioral Skills Training (BST) have developed alongside the burgeoning practice of ABA and the practical training/experiential learning requirements necessary to be eligible to sit for the examination for certification as a behavior analyst.

Laboratory-Based Experiences. Laboratory-based experiences in behavior analysis often involve opportunities for students to conduct experiments or teach new behaviors to nonhumans in operant chambers. However, the maintenance of nonhuman laboratories can be costly to universities, and several variations of this technology have been developed to address this barrier. Goodhue et al. (2019) summarize the history of laboratory-based experiences in behavior analysis courses, the barriers to the inclusion and sustainability of nonhuman operant laboratories in university settings, and the use of virtual laboratories as an alternative (e.g., Graf, 1995; Graham, Alloway, & Krames, 1994) before describing their use of another recently developed alternative: Rosales-Ruiz and Hunter’s (2016, 2019) Portable Operant Research and Teaching Laboratory (PORTL). PORTL is “a tabletop game that provides an interactive environment for learning about behavior principles and investigating behavioral phenomena. It is played using a collection of small objects, a clicker to select behavior, and small blocks as reinforcers” (Behavior Explorer, 2022, para. 2). Like other laboratory-based experiences, PORTL supports students of behavior analysis in further developing their teaching and research skills. Both Rosales-Ruiz and Hunter (2019) and Goodhue et al. provide examples of how to incorporate PORTL into behavior analytic coursework, sample exercises, and lesson plans for course instructors.

Behavior Change Projects. Another tactic that behavior analytic educators have employed to support students’ use and application of behavioral concepts and principles in changing behavior is behavior change projects (BCPs). BCPs typically include the components of problem identification, goal definition, development of a data collection system,
collection of baseline data, implementation of an intervention, programming for maintenance of behavior change, and an analysis and written description of the findings (Dodd, 1986). Armshaw et al. (2021) summarize the research related to the use of BCPs in behavior analytic coursework before describing a recent innovation in their use. Armshaw et al. conducted a preliminary analysis of a BCP, adapted to include aspects of Goldiamond’s (1965, 1974) exploratory logs and constructional approach (i.e., the Individual, Descriptive, Exploratory Analysis [IDEA] Project), in undergraduate introduction to behavior analysis courses. The results provide preliminary support for the utility of the IDEA Project in developing undergraduate students’ use of behavior analytic explanations (relating the cause of the behavior to the environmental controlling variables) to explain their behaviors of interest.

Behavioral Skills Training. Behavioral skills training (BST) is a more recently defined instructional method that behavior analytic educators have incorporated to promote skill acquisition. BST involves teaching individuals skills through the use of instructions, modeling, rehearsal, and feedback (Miltenberger, 2012). Brock et al. (2017), in a comprehensive review and meta-analysis of over 100 practitioner training studies in the special education literature, reported that “BST was associated with the most consistent improvement of implementation fidelity” (p. 21). Kirkpatrick, Akers, and Rivera (2019) limited their review to studies that employed BST to support skill development with teachers. In addition to reporting the limitations of the extant research, their results suggest that BST is effective in generating the desired teacher repertoires, often the implementation of discrete trial training and preference assessments. Although BST may be more commonly employed in experiential learning settings outside of the college classroom, it offers a generally effective method to develop the skills necessary for the effective implementation of various applied behavior analytic techniques. Interested readers are encouraged to review DiGennaro Reed, Blackman, Erath, Brand, and Novak (2018) for a comprehensive set of guidelines regarding how to incorporate BST in their teaching.

Instructional design & practical considerations

Despite the long-standing history of behavioral approaches to education, especially in ABA, a review of the literature regarding applications of behavior analysis to education in postsecondary settings may leave readers with more questions than answers as they approach their own course design.
For example, there is little mention of how to create a course syllabus, let alone evaluations of the effectiveness of different approaches to doing so. The design of instructional systems, and even of behavior analytic training systems and instructional technology labs in behavior analytic degree programs, has seemingly become less common not only in the behavioral literature but also in practice. Without extensive training in behavior analysis and education and familiarity with the extant literature, many instructors may lack the resources and component skills necessary to design courses that differ from traditional approaches to college teaching. Thus, drawing from our personal histories in behavior analysis and education, the extant literature, and our respective experiences in institutions of higher education, we offer some practical considerations for these situations.

Context, macrosystems, & metacontingencies

As previously noted, the metacontingencies, contingencies, and cultural practices common to institutions of higher education in the US educational system often create barriers to student learning and have not always supported behavioral approaches to education and instructional systems (e.g., PI, PSI). Thus, it is important to the design and sustainability of instructional systems, behavior analytic training systems, and instructional technology labs that course instructors study the context in which they are preparing to teach. Variables such as the type of postsecondary educational setting (e.g., community college, 4-year college or university, health science center, research-intensive institution), the type of degree program (e.g., associate’s degree, certificate program, undergraduate or graduate degree), and the college and/or department in which the degree/certificate program is housed (e.g., health sciences, education, behavioral science, behavior analysis) are salient considerations. These variables constitute the various macrosystems in which one is teaching; the macrosystem includes the institutional contingencies and metacontingencies as well as the processing and receiving system(s) that inform and maintain these and other contingencies. The modality in which the course is offered (online, in-person, hybrid) and the timing of instruction (synchronous or asynchronous), for example, may affect the inputs and resources available to course instructors. In addition, the student demographics and the college or department in which the degree program is housed may reflect different contingencies and metacontingencies imposed by the receiving systems. Although perhaps not strongly reflected in the behavioral education research, resources on how to conduct behavioral systems analyses are plentiful in behavior analysis. Readers are
encouraged to consult Brethower’s (1970, 1972) Total Performance System (TPS; see also Malott, 2003) and other resources from and applications of behavioral systems analysis (cf. Houmanfar et al., 2021) and culturo-behavior science (cf. Cihon & Mattaini, 2019, 2020) to educational settings and instructional activities to ensure compatibility between their design of instructional systems and the selecting macrosystem(s).

Syllabi, instructional design, & contingencies

Syllabi are contractual arrangements between the course instructor and the students and can include the responsibilities of and contingencies for each as well as the course design features. Course design can also be conceptualized from a component-composite framework (derived from PT; see Binder, 1996). The composite skills, specifically the general learning outcomes (GLOs), are met through acquiring a series of component skills defined in unit-specific learning outcomes (SLOs). These skills are developed and practiced through course activities and assignments that are linked to assessments of students’ learning, also articulated by way of the GLOs and SLOs (cf. Carriveau, 2010). Course design typically works backward from the GLOs, which specify what students are expected to know by the end of the course. Unit-based SLOs specify what students will be able to do following each unit and accumulate to produce the repertoires articulated in the GLOs. Readers are encouraged to reference Bloom’s taxonomy (e.g., Crone-Todd & Pear, 2001) and the rich history of information in the instructional design literature (cf. Kieta et al., 2018; Layng, 2018; Mager, 1997; Tiemann & Markle, 1990), and to apply their histories in writing target behavior definitions and behavioral objectives (Cooper et al., 2020) for populations other than college students in completing these steps. The course activities should provide students with the opportunity to accomplish what is stated in the SLOs and should include details as to whether the activity will be completed individually, in a small group, or as a class. The course activities might be informed by what has been learned from the previously described and synthesized research on strategies and tactics explored in the behavioral education literature (e.g., ASR, BCPs, EBI, PT) and in the plethora of research on college teaching more generally, often published in disciplinary journals such as American Educational Research Journal, Instructional Science, and Journal of Experimental Education. Finally, the assessment should be developed such that it provides the students and course instructor with the information necessary to determine whether the students have met the SLOs and GLOs. Several course units
developed by the Behaviorists for Social Responsibility Special Interest Group (BFSR SIG) of the Association for Behavior Analysis International (ABAI) around the 5th Edition BACB Task List and focused on the topic of sustainability serve as excellent examples. These can be found on the BFSR SIG website: https://bfsr.abainternational.org/working-group-resources/. In addition, ABAI hosts a syllabus bank that contains several sample syllabi (https://www.abainternational.org/constituents/educators/syllabus-bank.aspx).

Another important set of variables for instructors to consider is the point contingencies for the various assignments and assessments and how the combination of point values interacts to create the total points for the course. Whether or not points and/or grades function as conditioned reinforcers for students, they can be arranged with respect to what we know from the literature on schedules of reinforcement (Ferster & Skinner, 1957; Michael, 1991), token economies (cf. Ayllon & Azrin, 1968), dimensions of reinforcement (e.g., Neef, Bicard, Endo, Coury, & Aman, 2005), and the matching law (e.g., Reed & Kaplan, 2011). Michael’s (1991) procrastination scallop, for instance, follows from the patterns of behavior exhibited under fixed-interval schedules of reinforcement and has informed the literature regarding best practices for scheduling more frequent low-stakes assessments each class period or even weekly. Course instructors might also find it useful to consider the effort of an assignment in relation to the points available for that assignment and in relation to the total points available for the course. Students may elect not to complete high-effort, low-stakes assignments (e.g., weekly reflection papers worth only 5 points each) or even high-effort, high-stakes assignments (e.g., a final paper worth 100 points in a class with a total of 1200 points) when the point distribution means that skipping one or more assignments has little impact on their final grade. This may be undesirable for the student if the assignment is necessary for them to acquire the target repertoires, or for the course instructor if a response is needed for the student to demonstrate progress toward a learning outcome or to provide important feedback regarding the effectiveness of instruction. Another consideration is that students are often enrolled in several courses at the same time, working, engaging in experiential learning opportunities, raising families, or providing critical support in other ways for family and friends, and are thus continually facing competing and concurrent contingencies. Relating back to the discussion of macrosystems and context, course instructors would be wise to consider
the demands that students are facing outside of class time and how instruction, assignments, and assessments can be organized in ways that promote responding to course materials and avoid the use of coercive contingencies (cf. Sidman, 1989) to secure students’ engagement with the course material.
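The effort-to-points consideration discussed above can be made concrete with simple arithmetic. The course structure below is hypothetical (it borrows the 100-point final paper in a 1200-point course mentioned earlier) and only illustrates how an instructor might audit a syllabus's point distribution before the semester begins.

```python
# Hypothetical point distribution for a 1200-point course.
ASSIGNMENTS = {
    "weekly reflections (15 x 5 pts)": 75,
    "quizzes": 425,
    "midterm exam": 300,
    "group project": 300,
    "final paper": 100,
}
TOTAL = sum(ASSIGNMENTS.values())  # 1200 points

def weight(name):
    """Share of the final grade carried by one assignment."""
    return ASSIGNMENTS[name] / TOTAL

# The final paper controls only about 8.3% of the grade, so a student
# earning 95% on everything else can skip it entirely and still finish
# with roughly 87% -- a weak contingency for a high-effort assignment.
grade_if_skipped = 0.95 * (TOTAL - ASSIGNMENTS["final paper"]) / TOTAL
```

Running this kind of audit for every assignment shows at a glance which responses the point contingencies actually select for, and which high-effort assignments students can skip at little cost.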

Future directions

Even with the long-standing history of behavior analysis and education, there is still much to be done. One area that is ripe for exploration is to rekindle efforts in the design of instructional systems. Institutions of higher education are constantly changing. The surge in online and hybrid instructional modalities, for instance, offers a new context and perhaps even new receiving systems for which programmed and personalized systems of instruction might prove beneficial. Institutional contingencies, particularly those related to teaching, may also provide opportunities for the design and evaluation of innovative instructional systems. With further attention to the macrosystems, new systems of instruction can be designed in ways that maximize their fit with, and sustainability in, the context of institutional contingencies and metacontingencies. Relatedly, existing strategies and tactics in behavioral approaches to education may also need to be revised or refined to support student learning in these new conditions. Both Rieken et al. (2018), who adapted interteaching to online asynchronous courses, and Lovitz et al. (2020), who explored the use of TAFMEDS, provide examples of how one can adapt existing tactics to changing learning environments.

Changing learning environments also create the opportunity to consider research methodologies beyond those more widely adopted in behavior analysis (cf. Fawcett, 1991, 2021). For example, much of the literature reviewed in this chapter adopted what might be considered by many a ‘colonial’ approach, in which the course instructor determines the course goals, activities, assignments, and assessments without consultation with the students (also see Pritchett, Ala'i-Rosales, Cruz, & Cihon, 2021). An alternative strategy for conducting research on college teaching could be to adopt a participatory action research (PAR; cf. Baum, MacDougall, & Smith, 2006) approach, in which participants (in the context of college teaching, the students) are engaged in the research (and/or teaching) process. Incorporating PAR into the behavioral research on college teaching would involve engaging students in both the development of course GLOs and SLOs as well as in the design of instructional strategies and tactics
(Fawcett, 1991, 2021; Pritchett et al.). Students are one of the key receiving systems for universities and for course instructors. Further engaging students in the development of course goals, activities, assignments, and assessments could prove beneficial and yield valuable insights into students’ needs and/or preferences regarding both the goals and learning outcomes for the course and the instructional strategies and tactics employed to support students in achieving them. Previous research conducted by behavioral community psychologists adopting PAR in public school settings has shown higher student engagement and longer-lasting effects of instruction (cf. Nietzel, Winett, MacDonald, & Davidson, 1977). This research often considers students as change agents; the inclusion of peer tutoring in interteaching is one example of how students have served as change agents for each other’s behavior (Boyce & Hineline, 2002). Other applications of PAR in public school settings have examined the effects of students as change agents for teacher behavior (again see Nietzel et al.). Given the high levels of teacher and professor burnout reported since the onset of the COVID-19 pandemic (see, for example, Gewin, 2021), further engaging students as change agents for teacher behavior could provide much-needed reinforcement for course instructors, offer opportunities for denser schedules of reinforcement, improve educational outcomes, and enhance both students’ and instructors’ quality of life.

References

Adams, O., Cihon, T. M., Urbina, T., & Goodhue, R. J. (2018). The comparative effects of cumulative and unitary SAFMEDS terms in an introductory undergraduate behavior analysis course. European Journal of Behavior Analysis, 19(2), 176–194. https://doi.org/10.1080/15021149.2017.1404394
Adams, O., Eshleman, J. W., & Cihon, T. M. (2017). Recent applications of SAFMEDS in a higher education learning environment. Standard Celeration Society Newsletter, 35(2).
Alessi, G. (1987). Generative strategies and teaching for generalization. The Analysis of Verbal Behavior, 5, 15–27. https://doi.org/10.1007/BF03392816
Armshaw, B., Cihon, T. M., & Lopez, C. (2021). A constructional approach to the use of behavior change projects in undergraduate behavior analysis courses. Behavior Analysis in Practice. https://doi.org/10.1007/s40617-021-00608-1
Arntzen, E., & Hoium, K. (2010). On the effectiveness of interteaching. The Behavior Analyst Today, 11(3), 155–160. https://doi.org/10.1037/h0100698
A-Tjak, J. G., Davis, M. L., Morina, N., Powers, M. B., Smits, J. A., & Emmelkamp, P. M. (2015). A meta-analysis of the efficacy of acceptance and commitment therapy for clinically relevant mental and physical health problems. Psychotherapy and Psychosomatics, 84(1), 30–36. https://doi.org/10.1159/000365764
Austin, J. L. (2000). Behavioral approaches to college teaching. In J. Austin, & J. E. Carr (Eds.), Handbook of applied behavior analysis (pp. 449–472). Context Press/New Harbinger Publications.

Austin, J. L., Lee, M. G., Thibeault, M. D., Carr, J. E., & Bailey, J. S. (2002). Effects of guided notes on university students’ responding and recall of information. Journal of Behavioral Education, 11(4), 243–254. https://doi.org/10.1023/A:1021110922552
Ayllon, T., & Azrin, N. (1968). The token economy: A motivational system for therapy and rehabilitation. Appleton-Century-Crofts.
Ayllon, T., & Michael, J. (1959). The psychiatric nurse as a behavioral engineer. Journal of the Experimental Analysis of Behavior, 2, 323–334. https://doi.org/10.1901/jeab.1959.2-323
Baer, D. M., Wolf, M. M., & Risley, T. R. (1968). Some current dimensions of applied behavior analysis. Journal of Applied Behavior Analysis, 1(1), 91–97. https://doi.org/10.1901/jaba.1968.1-91
Baum, F., MacDougall, C., & Smith, D. (2006). Participatory action research. Journal of Epidemiology and Community Health, 60(10), 854–857. https://doi.org/10.1136/jech.2004.028662
Behavior Explorer. (2022). What is PORTL? https://behaviorexplorer.com/articles/portl-intro/
Bernstein, D., & Chase, P. N. (2013). Contributions of behavior analysis to higher education. In G. J. Madden, W. V. Dube, T. D. Hackenberg, G. P. Hanley, & K. A. Lattal (Eds.), Translating principles into practice: Vol. 2. APA handbook of behavior analysis (pp. 523–543). American Psychological Association. https://doi.org/10.1037/13938-021
Bicard, S., Bicard, D. F., Casey, L. B., Smith, C., Plank, E., & Casey, C. (2008). The effects of student response system and single student questioning technique on graduate students’ recall and application of lecture material. Journal of Educational Technology, 5(1), 23–30. https://www.learntechlib.org/p/194558/
Biggers, B., & Luo, T. (2020). Guiding students to success: A systematic review of research on guided notes as an instructional strategy from 2009–2019. Journal of University Teaching & Learning Practice, 17(3). https://doi.org/10.53761/1.17.3.12
Binder, C. (1996). Behavioral fluency: Evolution of a new paradigm. The Behavior Analyst, 19(2), 163–197. https://doi.org/10.1007/BF03393163
Bird, Z., & Chase, P. N. (2021). Student pacing in a master’s level course: Procrastination, preference, and performance. Journal of Applied Behavior Analysis, 54(3), 1220–1234. https://doi.org/10.1002/jaba.806
Blair, B. J., Shawler, L. A., Albright, L. K., & Ferman, D. M. (2021). An evaluation of the emergence of untrained academic and applied skills after instruction with video vignettes. The Analysis of Verbal Behavior, 37, 35–56. https://doi.org/10.1007/s40616-020-00140-3
Blair, B. J., Tarbox, J., Albright, L., MacDonald, J. M., Shawler, L. A., Russo, S. R., et al. (2019). Using equivalence-based instruction to teach the visual analysis of graphs. Behavioral Interventions, 34(3), 405–418. https://doi.org/10.1002/bin.1669
Boyce, T. E., & Hineline, P. N. (2002). Interteaching: A strategy for enhancing the user-friendliness of behavioral arrangements in the college classroom. The Behavior Analyst, 25(2), 215–226. https://doi.org/10.1007/BF03392059
Brethower, D. M. (1970). The classroom as a self-modifying system (Unpublished doctoral dissertation). University of Michigan.
Brethower, D. M. (1972). Behavioral analysis in business and industry: A total performance system. Behaviordelia.
Brethower, D. M. (2008). Historical background for HPT certification standard 2, take a systems view, part 2. Performance Improvement, 47(4), 15–24. https://doi.org/10.1002/pfi.198
Brock, M. E., Cannella-Malone, H. I., Seaman, R. L., Andzik, N. R., Schaefer, J. M., Page, E. J., et al. (2017). Findings across practitioner training studies in special education: A comprehensive review and meta-analysis. Exceptional Children, 84(1), 7–26. https://doi.org/10.1177/0014402917698008

182

Applied behavior analysis advanced guidebook

Brodsky, J., & Fienup, D. M. (2018). Sidman goes to college: A meta-analysis of ­equivalence-based instruction in higher education. Perspectives on Behavior Science, 41, 95–119. https://doi.org/10.1007/s40614-018-0150-0. Bulla, A. J., Calkin, A., & Sawyer, M. (2021). Introduction to the special section: Precision teaching: Discoveries and applications. Behavior Analysis in Practice, 14(3), 555–558. https://doi.org/10.1007/s40617-021-00624-1. Bulla, A. J., Wertalik, J. L., & Crafton, D. (2021). A preliminary investigation of question type used during response card activities on establishing concept formation in an introductory college class. European Journal of Behavior Analysis, 22(1), 133–150. https://doi.org/ 10.1080/15021149.2020.1737406. Carriveau, R. (2010). Connecting the dots – Developing student learning outcomes and outcomes based assessments. Fancy Fox Publications. Cihon,T. M. (2007). A review of training intraverbal repertoires: Can precision teaching help? The Analysis of Verbal Behavior, 23, 123–133. https://doi.org/10.1007/BF03393052. Cihon,T. M., Kieta, A., & Glenn, S. (2018).Teaching behavior analysis with behavior analysis: The evolution of the teaching science lab at the University of North Texas. European Journal of Behavior Analysis, 19(2), 150–175. https://doi.org/10.1080/15021149.2017.1 404393. Cihon, T. M., & Mattaini, M. A. (2019). Editorial: Emerging cultural and behavioral systems science. Perspectives on Behavior Science, 42(4), 699–711. https://doi.org/10.1007/ s40614-019-00237-8. Cihon, T. M., & Mattaini, M. A. (Eds.). (2020). Behavior science perspectives on culture and community Springer. Clayton, M. C., & Woodard, C. (2007).The effect of response cards on participation and weekly quiz scores of university students enrolled in introductory psychology courses. Journal of Behavioral Education, 16, 250–258. https://doi.org/10.1007/s10864-007-9038-x. Cooper, J. O., Heron, T. E., & Heward, W. L. (2020). 
Applied behavior analysis (3rd Edition). Pearson Education. Crone-Todd, D. E., & Pear, J. J. (2001). Application of Bloom’s taxonomy to PSI. The Behavior Analyst Today, 2(3), 204–210. DiGennaro Reed, F. D., Blackman, A. L., Erath, T. G., Brand, D., & Novak, M. D. (2018). Guidelines for using behavioral skills training to provide teacher support. Teaching Exceptional Children, 50(6), 373–380. https://doi.org/10.1177/0040059918777241. Dodd, D. K. (1986). Teaching behavioral self-change: A course model. Teaching of Psychology, 12(2), 82–85. https://doi.org/10.1207/s15328023top1302_9. Doughty, S. S., Chase, P. N., & O’Shields, E. M. (2004). Effects of rate building on fluent performance: A review and commentary. The Behavior Analyst, 27(1), 7–23. https://doi. org/10.1007/BF03392086. Ellis, J., & Magee, S. (2007). Contingencies, macrocontingencies, and metacontingencies in current educational practices: No child left behind? Behavior and Social Issues, 16, 5–27. https://doi.org/10.5210/bsi.v16i1.361. Engelmann, S., & Carnine, D. (1982). Theory of instruction. Irvington Publishing Inc. Evans, A. L., Bulla, A. J., & Kieta, A. R. (2021). The precision teaching system: A synthesized definition, concept analysis, and process. Behavior Analysis in Practice, 14, 559–576. https://doi.org/10.1007/s40617-020-00502-2. Fabrizio, M. A., & Moors, A. L. (2003). Evaluating mastery: Measuring instructional outcomes for children with autism. European Journal of Behavior Analysis, 4(1–2), 23–36. https://doi.org/10.1080/15021149.2003.11434213. Fawcett, S. B. (1991). Some values guiding community research and action. Journal of Applied Behavior Analysis, 24(4), 621–636. https://doi.org/10.1901/jaba.1991.24-621. Fawcett, S. B. (2021). A reflection on community research and action as an evolving practice. Behavior and Social Issues, 30, 535–544. https://doi.org/10.1007/s42822-021-00083-x.



Applications of behavior analysis to college teaching

183

Ferster, C. B., & Skinner, B. F. (1957). Schedules of reinforcement. Appleton-Century-Crofts. Fienup, D. M., Hamelin, J., Reyes-Giordano, K., & Falcomata, T. S. (2011). College-level instruction: Derived relations and programmed instruction. Journal of Applied Behavior Analysis, 44(2), 413–416. https://doi.org/10.1901/jaba.2011.44-413. Filipiak, S. N., Rehfeldt, R. A., Heal, N. A., & Baker, J. C. (2010). The effects of points for preparation guides in interteaching procedures. European Journal of Behavior Analysis, 11(2), 115–132. https://doi.org/10.1080/15021149.2010.11434338. French, K., Golijani-Moghaddam, N., & Schröder,T. (2017).What is the evidence for the efficacy of self-help acceptance and commitment therapy? A systematic review and Metaanalysis. Journal of Contextual Behavioral Science, 6(4), 360–374. https://doi.org/10.1016/j. jcbs.2017.08.002. Gallant, E. E., Reeve, K. F., Reeve, S. A.,Vladescu, J. C., & Kisamore, A. N. (2021). Comparing two equivalence-based instruction protocols and self-study for teaching logical fallacies to college students. Behavioral Interventions, 36(2), 434–456. https://doi.org/10.1002/ bin.1772. Geller, E. S. (Ed.). (1992).The education crisis: Issues, perspectives, solutions [special section]. Journal of Applied Behavior Analysis, 25(1), 13–235. Gewin,V. (2021). Pandemic burnout is rampant in academia. Nature, 591, 489–491. https:// media.nature.com/original/magazine-assets/d41586-021-00663-2/d41586-02100663-2.pdf. Gist, C. M., Andzik, N. R., Smith, E. E., Xu, M., & Neef, N. A. (2019). The effects of gaming on university student quiz performance. Journal of Effective Teaching in Higher Education, 2(1), 109–119. https://doi.org/10.36021/jethe.v2i1.11. Glodowski, K., & Thompson, R. (2018). The effects of guided notes on pre-lecture quiz scores in introductory psychology. Journal of Behavioral Education, 27(1), 101–123. https:// doi.org/10.1007/s10864-017-9274-7. Goldiamond, I. (1965). 
Self-control procedures in personal behavior problems. Institute for Behavioral Research.. https://doi.org/10.2466/pr0.1965.17.3.851. Goldiamond, I. (1974). Toward a constructional approach to social problems: Ethical and constitutional issues raised by applied behavior analysis. Behaviorism, 2(1), 1–84. https:// doi.org/10.5210/bsi.v11i2.92. Goodenough, F. L. (1934). Developmental psychology: An introduction to the study of human behavior. Appleton-Century Crofts. Goodhue, R. J., Liu, S. C., & Cihon, T. M. (2019). An alternative to non-human laboratory experiences for students of behavior analysis: The portable operant research and teaching laboratory (PORTL). Journal of Behavioral Education, 28(4), 517–541. https://doi. org/10.1007/s10864-019-09323-y. Graf, S. (1995).Three nice labs, no real rats: A review of three operant laboratory simulations. The Behavior Analyst, 18, 301–306. https://doi.org/10.1007/bf033 92717. Graf, S. A., & Auman, J. (2005). SAFMEDS: A tool to build fluency. Graf Implements. https:// nebula.wsimg.com/ff6d60c3e9a58258142cd1a44793a40b?AccessKeyId=F33FC376F01581DAB5C0&disposition=0&alloworigin=1. Graham, J., Alloway, T., & Krames, L. (1994). Sniffy, the virtual rat: Simulated operant conditioning. Behavior Research Methods, Instruments, and Computers, 26(2), 134–141. https:// doi.org/10.3758/bf032 04606. Greenwood, C. (1997). Classwide peer tutoring. Behavior and Social Issues, 7, 53–57. https:// doi.org/10.5210/bsi.v7i1.299. Greenwood, C. R., Delquadri, J., & Hall, R.V. (1984). Opportunity to respond and student academic achievement. In W. L. Heward, T. E. Heron, D. S. Hill, & J. Trap-Porter (Eds.), Focus on behavior analysis in education (pp. 58–88). Merrill. Greer, R. D. (1997). The comprehensive application of behavior analysis to schooling (CABAS®). Behavior and Social Issues, 7, 59–63. https://doi.org/10.5210/bsi.v7i1.300.

184

Applied behavior analysis advanced guidebook

Haughton, E. C. (1972). Aims: Growing and sharing. In J. B. Jordan, & L. S. Robbins (Eds.), Let’s try doing something else kind of thing (pp. 20–39). Council on Exceptional Children. Henrich, J., Heine, S. J., & Norenzayan,A. (2010).The weirdest people in the world? Behavioral and Brain Sciences, 33(2–3), 61–83. https://doi.org/10.1017/s0140525x0999152x. Heward, W. L. (1994). Three “low-tech” strategies for increasing the frequency of active studentresponse during group instruction. In R. Gardner III,, D. M. Sainato, J. O. Cooper,T. E. Heron, W. L. Heward, J. Eshleman, & T. A. Grossi (Eds.), Behavior analysis in education: Focus on measurably superior instruction (pp. 283–320). Brooks/Cole. Heward, W. L. (2022). Use strategies to promote active student engagement. In J. McLeskey, L. Maheady, B. Billingsley, M. T. Brownell, & T. J. Lewis (Eds.), High-leverage practices for inclusive classrooms (2nd ed., pp. 282–294). Routledge/Council for Exceptional Children. Heward, W. L., & Twyman, J. S. (2021a). Teach more in less time: Introduction to the special section on direct instruction. Behavior Analysis in Practice, 14, 763–765. https://doi. org/10.1007/s40617-021-00639-8. Heward, W. L., & Twyman, J. S. (2021b). Whatever the kid does is the truth: Introduction to the special section on direct instruction. Perspectives on Behavior Science, 44, 131–138. https://doi.org/10.1007/s40614-021-00314-x. Holland, J. G., & Skinner, B. F. (1961). The analysis of behavior: A program for self-instruction. McGraw-Hill. Houmanfar, R. A., Fryling, M., & Alavosius, M. P. (2021). Applied behavior science in organizations: Consilience of historical and emerging trends in organizational behavior management. Routledge. Hurtado-Parrado, C., Pfaller-Sadovsky, N., Medina, L., Gayman, C. M., Rost, K. A., & Schofill, D. (2022). A systematic review and quantitative analysis of interteaching. Journal of Behavioral Education, 31, 157–185. https://doi.org/10.1007/s10864-021-09452-3. 
Johnson, K., Street, E. M., Kieta, A. R., & Robbins, J. K. (2021). The Morningside model of generative instruction. Sloan Publishing. Johnson, K. R. (2016). Behavior analysts can thrive in general education, too. In R. D. Holdsambeck, & H. S. Pennypacker (Eds.), Behavioral science: Tales of inspiration, discovery, and service The Cambridge Center for Behavioral Studies. Johnson, P. E., Perrin, C. J., Salo, A., Deschaine, E., & Johnson, B. (2016). Use of an explicit rule decreases procrastination in university students. Journal of Applied Behavior Analysis, 49(2), 346–358. https://doi.org/10.1002/jaba.287. Johnston, J. M. (1988). Strategic and tactical limits of comparison studies. The Behavior Analyst, 11(1), 1–9. https://doi.org/10.1007/BF03392448. Johnston, J. M., & Pennypacker, H. S. (1971). A behavioral approach to college teaching. American Psychologist, 26(3), 219–244. https://doi.org/10.1037/h0031241. Johnston, J. M., & Pennypacker, H. S. (1980). Strategies and tactics of human behavioral research. Lawrence Erlbaum Associates. Katajavuori, N., Vehkalahti, K., & Asikainen, H. (2021). Promoting university students’ well-being and studying with an acceptance and commitment therapy (ACT)-based intervention. Current Psychology.. https://doi.org/10.1007/s12144-021-01837-x. Keller, F. S. (1968). Goodbye, teacher. Journal of Applied Behavior Analysis, 1, 79–90. https:// doi.org/10.1901/jaba.1968.1-79. Kellum, K. K., Carr, J. E., & Dozier, C. L. (2001). Response-card instruction and student learning in a college classroom. Teaching of Psychology, 28(2), 101–104. https://doi. org/10.1207/S15328023TOP2802_06. Kieta, A. R., Cihon, T. M., & Abdel-Jalil, A. (2018). Problem solving from a behavior analytic perspective: Implications for educators. Journal of Behavioral Education, 28(2), 275–300. https://doi.org/10.1007/s10864-018-9296-9. Kimble, G. A. (1961). Hilgaral and marquis conditioning and learning (2nd edition). Prentice-Hall.



Applications of behavior analysis to college teaching

185

Kirkpatrick, M., Akers, J., & Rivera, G. (2019). Use of behavioral skills training with teachers: A systematic review. Journal of Behavioral Education, 28, 344–361. https://doi. org/10.1007/s10864-019-09322-z. Konrad, M., Joseph, L. M., & Eveleigh, E. (2009). A meta-analytic review of guided notes. Education and Treatment of Children, 32, 421–444. https://doi.org/10.1353/etc.0.0066. Kubina, R. M., Morrison, R., & Lee, D. L. (2002). Benefits of adding precision teaching to behavioral interventions for students with autism. Behavioral Interventions, 17(4), 233– 246. https://doi.org/10.1002/bin.122. Larwin, K. H., & Larwin, D. A. (2013). The impact of guided notes on postsecondary student achievement: A meta-analysis. International Journal of Teaching and Learning in Higher Education, 25(1), 47–58. Layng, T. V. J. (2018). Tutorial: Understanding concepts: Implications for behavior analysts and educators. Perspectives on Behavior Science, 42(2), 345–363. https://doi.org/10.1007/ s40614-018-00188-6. Layng, T.V. J., Twyman, J. S., & Stikeleather, G. (2003). Headsprout early Reading™: Reliably teaches children to read. Behavioral Technology Today, 3, 7–20. Leaf, J. B., Cihon, J. H., Ferguson, J. L., Milne, C., Leaf, R., & McEachin, J. (2022). Evaluating the relationship between performance on a multiple-choice examination and common ABA-based procedures. Focus on Autism and Other Developmental Disabilities.. https://doi. org/10.1177/10883576221110170. Lindsley, O. R. (1992). Precision teaching: Discoveries and effects. Journal of Applied Behavior Analysis, 25, 51–57. https://doi.org/10.1901/jaba.1992.25-51. Lindsley, O. R. (1995). Precision teaching: By teachers for children. Journal of Precision Teaching, XII(2), 9–17. Longo, A., Reeve, K. F., Jennings, A. M., Vladescu, J. C., Reeve, S. A., & Colasurdo, C. R. (2022). Comparing stimulus equivalence-based instruction to self-study of videos to teach examples of sign language to adults. 
Behavioral Interventions, 37(3), 713–731. https://doi.org/10.1002/bin.1871. Lovitz, E. D., Cihon, T. M., & Eshleman, J. W. (2020). Exploring the effects of daily, timed, and typed technical term definition practice on indicators of fluency. Behavior Analysis in Practice, 30, 14(3), 704–727. https://doi.org/10.1007/s40617-020-00481-4. Mager, R. F. (1997). Preparing instructional objectives: A critical tool in the development of effective instruction. Center for Effective Performance. Malanga, P. R., & Sweeney, W. J. (2008). Increasing active student responding in a university applied behavior analysis course: The effect of daily assessment and response cards on end of week quiz scores. Journal of Behavioral Education, 17, 187–199. https://doi. org/10.1007/s10864-007-9056-8. Malott, M. E. (2003). Paradox of organizational change: Engineering organizations with behavioral systems analysis. Context Press. Malott, M. E., & Martinez,W. S. (2006). Addressing organizational complexity: A behavioural systems analysis application to higher education. International Journal of Psychology, 41(6), 559–570. https://doi.org/10.1080/00207590500492773. Malott, R. (2005). Behavioral systems analysis and higher education. In W. L. Heward, T. E. Heron, N. A. Neef, S. M. Peterson, D. M. Sainato, G. Cartledge, … J. C. Dardig (Eds.), Focus on behavior analysis in education: Achievements, challenges, & opportunities (pp. 211–236). Merrill Prentice Hall. Malott, R. W. (2018). A model for training science-based practitioners in behavior analysis. Behavior Analysis in Practice, 11(3), 196–203. https://doi.org/10.1007/ s40617-018-0230-3. Markle, S. M. (1964). Individualizing programed instruction: The programer. Teachers College Record, 66(3), 219–228.

186

Applied behavior analysis advanced guidebook

Marmolejo, E. K., Wilder, D. A., & Bradley, L. (2004). A preliminary analysis of the effects of response cards on student performance and participation in an upper division university course. Journal of Applied Behavior Analysis, 37(3), 405–410. https://doi.org/10.1901/ jaba.2004.37-405. Marsh, R. J., Cumming,T. M., Randolph, J. J., & Michaels, S. (2021). Updated meta-analysis of the research on response cards. Journal of Behavioral Education.. https://doi.org/10.1007/ s10864-021-09463-0. Michael, J. (1991). A behavioral perspective on college teaching. The Behavior Analyst, 14(2), 229–239. https://doi.org/10.1007/BF03392578. Miller, L. K. (2005). Principles of everyday behavior analysis. Thomson/Wadsworth. Miltenberger, R. G. (2012). Behavioral skills training procedures. In Behavior modification: Principles and procedures (pp. 251–269). Cengage Learning. Molenda, M. (2008). The programmed instruction era: When effectiveness mattered. TechTrends, 52(2), 52–58. https://doi.org/10.1007/s11528-008-0136-y. Neef, N. A., Bicard, D. F., Endo, S., Coury, D. L., & Aman, M. G. (2005). Evaluation of pharmacological treatment of impulsivity in children with attention deficit hyperactivity disorder. Journal of Applied Behavior Analysis, 38(2), 135–146. https://doi.org/10.1901/ jaba.2005.116-02. Neef, N. A., Cihon, T. M., Kettering, T., Guld, A., Axe, J., Itoi, M., et al. (2007). A comparison of study session formats on attendance and quiz performance in a college course. Journal of Behavioral Education, 16, 235–249. https://doi.org/10.1007/s10864-006-9037-3. Neef, N. A., McCord, B. E., & Ferreri, S. J. (2006). Effects of guided notes versus completed notes during lectures on college students’ quiz performance. Journal of Applied Behavior Analysis, 39(1), 123–130. https://doi.org/10.1901/jaba.2006.94-04. Neef, N. A., Perrin, C. J., Haberlin, A. T., & Rodrigues, L. C. (2011). Studying as fun and games: Effects on college students’ quiz performance. 
Journal of Applied Behavior Analysis, 44(4), 897–901. https://doi.org/10.1901/jaba.2011.44-897. Nietzel, M. T., Winett, R. A., MacDonald, M. L., & Davidson, W. S. (1977). Behavioral approaches to community psychology. Pergamon Press. Ostrosky, B. D., Reeve, K. F., Day-Watkins, J., Vladescu, J. C., Reeve, S. A., & Kerth, D. M. (2022). Comparing group-contingency and individualized equivalence-based instruction to a PowerPoint lecture to establish equivalence classes of reinforcement and punishment procedures with college students. The Psychological Record.. https://doi. org/10.1007/s40732-021-00495-6. Paliliunas, D., Belisle, J., & Dixon, M. R. (2018). A randomized control trial to evaluate the use of acceptance and commitment therapy (ACT) to increase academic performance and psychological flexibility in graduate students. Behavior Analysis in Practice, 11, 241– 253. https://doi.org/10.1007/s40617-018-0252-x. Perrin, C. J., Miller, N., Haberlin,A.T., Ivy, J.W., Meindl, J. N., & Neef, N.A. (2011). Measuring and reducing college students’ procrastination. Journal of Applied Behavior Analysis, 44(3), 463–474. https://doi.org/10.1901/jaba.2011.44-463. Peterson, G. B. (2004). A day of great illumination: B. F. Skinner’s discovery of shaping. Journal of the Experimental Analysis of Behavior, 82(3), 317–328. https://doi.org/10.1901/ jeab.2004.82-317. Peterson, N. (1978). An introduction to verbal behavior. Behavior Associates. Peterson, N., & Ledoux, S. (2014). An introduction to verbal behavior (2nd edition). ABCs. Pritchett, M., Ala’i-Rosales, S., Cruz, A. R., & Cihon, T. M. (2021). Social justice is the spirit and aim of an applied science of human behavior: Moving from colonial to participatory research practices. Behavior Analysis in Practice, 22, 1–19. https://doi.org/10.1007/s40617-021-00591-7. Quigley, S. P., Peterson, S., Frieder, J. E., Peck, K. M., Kennedy-Walker, A., & Chirinos, M. (2021). An evaluation of multiple SAFMEDS procedures. 
Behavior Analysis in Practice, 14(3), 679–703. https://doi.org/10.1007/s40617-020-00527-7.



Applications of behavior analysis to college teaching

187

Quigley, S. P., Peterson, S. M., Frieder, J. E., & Peck, K. M. (2017). A review of SAFMEDS: Evidence for procedures, outcomes and directions for future research. Perspectives on Behavior Science, 19, 41(1), 283–301. https://doi.org/10.1007/s40614-017-0087-8. Reed, D. D., & Kaplan, B. A. (2011). The matching law: A tutorial for practitioners. Behavior Analysis in Practice, 4(2), 15–24. https://doi.org/10.1007/BF03391780. Rieken, C. J., Dotson, W. H., Carter, S. L., & Griffith, A. K. (2018). An evaluation of interteaching in an asynchronous online graduate-level behavior analysis course. Teaching of Psychology, 45(3), 264–269. https://www.learntechlib.org/p/191884/. Root,W. B., & Rehfeldt, R. A. (2021).Towards a modern-day teaching machine:The synthesis of programmed instruction and online education. The Psychological Record, 71, 85–94. https://doi.org/10.1007/s40732-020-00415-0. Rosales, R., Soldner, J. L., & Crimando,W. (2014). Enhancing the impact of quality points in interteaching. Journal of the Scholarship of Teaching and Learning, 14(5), 1–11. https://doi. org/10.14434/josotlv14i5.12746. Rosales-Ruiz, J., & Hunter, M. (2016). PORTL: Your portable skinner box. Operants, 4, 34–36. Rosales-Ruiz, J., & Hunter, M. (2019). PORTL: The portable operant teaching and research lab. Behavior Explorer. Saville, B. K., Cox, T., O’Brien, S., & Vanderveldt, A. (2011). Interteaching: The impact of lectures on student performance. Journal of Applied Behavior Analysis, 44(4), 937–941. https://doi.org/10.1901/jaba.2011.44-937. Saville, B. K., & Zinn, T. E. (2009). Interteaching: The effects of quality points on exam scores. Journal of Applied Behavior Analysis, 42(2), 369–374. https://doi.org/10.1901/ jaba.2009.42-369. Saville, B. K., Zinn, T. E., & Elliott, M. P. (2005). Interteaching versus traditional methods of instruction: A preliminary analysis. Teaching of Psychology, 32(3), 161–163. https://doi. org/10.1207/s15328023top3203_6. Saville, B. K., Zinn, T. E., Neef, N. 
A.,Van Norman, R., & Ferreri, S. J. (2006). A comparison of interteaching and lecture in the college classroom. Journal of Applied Behavior Analysis, 39(1), 49–61. https://doi.org/10.1901/jaba.2006.42-05. Self-Paced Psychology 101 (n.d.). https://www.unr.edu/psychology/student-resources/ self-paced-psychology. Sidman, M. (1989). Coercion and its fallout. Authors Cooperative. Sidman, M. (1994). Equivalence relations and behavior: A research story. Authors Cooperative. Sidman, M. (2008). Reflections on stimulus control. The Behavior Analyst, 31(2), 127–135. https://doi.org/10.1007/BF03392166. Sidman, M., & Tailby, W. (1982). Conditional discrimination vs. matching to sample: An expansion of the testing paradigm. Journal of the Experimental Analysis of Behavior, 37, 5–22. Sidman, R. L., & Sidman, M. (1965). Neuroanatomy: A programmed text. Vol. 1. Lippincott Williams & Wilkins. Singer-Dudek, J., Keohane, D. D., & Matthews, K. (2021). Educational systems administration: The comprehensive application of behavior analysis to schooling (CABAS®) model. In A. Maragakis, C. Drossel, & T. J. Waltz (Eds.), Applications of behavior analysis in healthcare and beyond Springer. https://doi.org/10.1007/978-3-030-57969-2_17. Skinner, B. F. (1933). The abolishment of a discrimination. Proceedings of the National Academy of Sciences, 19, 825–828. Skinner, B. F. (1938). The behavior of organisms: An experimental analysis. Appleton-Century. Skinner, B. F. (1954). The science of learning and the art of teaching. Harvard Educational Review, 24, 86–97. Skinner, B. F. (1957). Verbal behavior. Appleton-Century-Crofts. Skinner, B. F. (1958). Teaching machines. Science, 128, 969–977. https://doi.org/10.1126/ science.128.3330.969.

188

Applied behavior analysis advanced guidebook

Skinner, B. F. (1959). Cumulative record. Appleton-Century-Crofts. Skinner, B. F. (1968). The technology of teaching. Appleton-Century-Crofts. Skinner, B. F. (1984).The shame of American education. American Psychologist, 39(9), 947–954. https://doi.org/10.1037/0003-066X.39.9.947. Stokes,T. F., & Baer, D. M. (1977). An implicit technology of generalization. Journal of Applied Behavior Analysis, 10(2), 349–367. https://doi.org/10.1901/jaba.1977.10-349. Tiemann, P.W., & Markle, S. M. (1990). Analyzing instructional content: A guide to instruction and evaluation. Stipes Publishing Company. Truelove, J. C., Saville, B. K., & Van Patten, R. (2013). Interteaching: Discussion group size and course performance. Journal of the Scholarship of Teaching and Learning, 13(2), 23–30. Twyman, J. S., & Heward, W. L. (2016). How to improve student learning in every classroom now. International Journal of Educational Research, 87, 78–90. https://doi.org/10.1016/j. ijer.2016.05.007. Urbina, T., Cihon, T. M., & Baltazar, M. (2021). Exploring procedural manipulations to enhance student performance on SAFMEDS in undergraduate introduction to behavior analysis courses. Journal of Behavioral Education, 30(1), 130–148. https://doi.org/10.1007/ s10864-019-09359-0. Vargas, E. A. (1977a). Individualizing mass education. In J. S.Vargas (Ed.), Behavioral psychology for teachers (pp. 289–330). Harper & Row. Vargas, E. A., & Vargas, J. S. (1991). Programmed instruction and teaching machines. In R. P. West, & L. A. Hamerlynck (Eds.), Designs for excellence in education: The legacy of B. F. Skinner Sopris West, Inc. Vargas, J. S. (1977b). Behavioral psychology for teachers. Harper & Row. Vargas, J. S. (2013). Behavior analysis for effective teaching (2nd Edition). Routledge. Viskovich, S., & Pakenham, K. I. (2020). Randomized controlled trial of a web-based acceptance and commitment therapy (ACT) program to promote mental health in university students. Journal of Clinical Psychology, 76, 929–951. 
https://doi.org/10.1002/jclp.22848. Watkins, C. L. (1997). Project follow through: A case study of contingencies influencing instructional practices of the educational establishment. Cambridge Center for Behavioral Studies. Wesp, R., & Ford, J. E. (1982). Flexible instructor pacing assists student progress in a personalized system of instruction. Teaching of Psychology, 9(3), 160–162. https://doi. org/10.1207/s15328023top1601_2. Wilcox, J. (1964). Blood pressure measurement: A programmed notebook for nurses. U.S. National Institutes of Health, Clinical Center. Williams,W. L.,Weil,T. M., & Porter, J. C. K. (2012).The relative effects of traditional lectures and guided notes lectures on university student test scores. The Behavior Analyst Today, 13(1), 12–16. https://doi.org/10.1037/h0100713. Zayack, R. M., Ratkos, T., Frieder, J. E., & Paulk, A. (2015). A comparison of active student responding modalities in a general psychology course. Teaching of Psychology, 43(1), 43–47. https://doi.org/10.1177/0098628315620879.

SECTION 2

Technology, telehealth, and remote service delivery


CHAPTER 8

Technology guidelines and applications

Brittany J. Bice-Urbach

Medical College of Wisconsin, Milwaukee, WI, United States

Telebehavioral health (TBH) is a term used to describe mental and behavioral health services provided over various forms of technology, including synchronous videoconferencing, telephone communications, and asynchronous electronic communication (e.g., text messaging, email; Hilty et al., 2017). TBH has become a beneficial method of service delivery for teaching, training, consultation, and supervision and can be specifically useful for ABA training and implementation (Antezana, Scarpa, Valdespino, Albright, & Richey, 2017; Baretto, Wacker, Harding, Lee, & Berg, 2006; Lindgren et al., 2016; Luxton, Nelson, & Maheu, 2016; Machalicek et al., 2009, 2010; Suess, Wacker, Schwartz, Lustig, & Detrick, 2016; Wacker et al., 2013). Evidence of the efficacy and effectiveness of TBH services has been accumulating over the last several decades, with substantial growth in the last 15 years (McClellan, Florell, Palmer, & Kidder, 2020). However, the COVID-19 pandemic prompted a rapid transition to TBH services across various academic and clinical settings. Although several agencies have developed telehealth guidelines, many practitioners had to shift to a virtual environment without full training in, or knowledge of, best practices for TBH service delivery. As the pandemic-driven need for TBH services has waned, we are now at a critical time to determine the most appropriate uses of TBH services and how to implement evidence-based practices within the field. This chapter largely prioritizes videoconferencing services; therefore, TBH will be used to refer to videoconferencing services, and telephone or asynchronous communication will be identified separately. 
The chapter also highlights current research and the evidence base for treatment over TBH, practical considerations when developing and preparing to implement services over TBH, and practice guidelines and recommendations for implementation of services.

Telebehavioral health evidence base

The evidence base for TBH services has continued to grow over the last several decades. Generally, prior research has found that TBH services can be provided effectively and produce results similar to in-person services for treating a variety of mental and behavioral health concerns (Alessi, 2000; Cook & Doyle, 2002; Germain, Marchand, Bouchard, Guay, & Drouin, 2010; Glueckauf & Ketterson, 2004; Richardson, Frueh, Grubaugh, Egede, & Elahi, 2009). Additionally, prior research suggests that providers, clients, and supervisees find TBH acceptable and feasible to implement (Bice-Urbach & Kratochwill, 2016; Bice-Urbach, Kratochwill, & Fischer, 2017; Fischer et al., 2016). Prior research has also highlighted some of the continued barriers to, and specific benefits of, services provided over telehealth (Luxton et al., 2016).

Barriers to telebehavioral health services. Several barriers to appropriately providing TBH services have been identified over the years. A primary concern remains underlying issues with technology that disrupt connections and the provision of services (Bishop, O’Reilly, Maddox, & Hutchinson, 2002; Florell, 2016; Kennedy & Yellowlees, 2000; Scott Kruse et al., 2018). Technology disruptions may include poor connection quality, disruptions in audio or video (e.g., distorted sound, lagging video), and lost connections. 
Research has suggested that difficulties with technology and connection may be a stronger predictor of an individual’s likelihood to recommend a service than either the quality of the provider or the perception of the services provided (Bice-Urbach & Rysdyk, in press). This remains a concern both for ensuring the quality of services provided to families over TBH and for ensuring that providers can equitably offer TBH services to those more likely to experience technology challenges; for example, persons who live in rural locations, where connection quality tends to be worse, may also have less access to certain pieces of technology. Prior research suggests that technology literacy can also be a barrier, both to the accurate implementation of TBH services and to individuals’ interest in using them. Limited technology literacy can make implementation especially challenging when errors arise that require troubleshooting. Those with lower technology literacy are more likely to have difficulty appropriately implementing evidence-based practices in TBH and have been found to be less likely to offer TBH services (Maheu, Pulier, McMenamin, & Posen, 2012; McGinty, Saeed, Simmons, & Yildirim, 2006). Research has specifically shown that providers’ acceptance of and attitudes toward TBH services affect their adoption of those services (Wade, Eliott, & Hiller, 2014), meaning that providers with lower technology literacy are less likely to use and pursue opportunities to use telehealth.

Another noted concern from providers is the need to adjust treatment protocols and communication styles. Before implementing services, providers have been found to be more likely to express concern about how to adapt their services to a TBH format (Brenes, Ingram, & Danhauer, 2011; Day & Schneider, 2002; Kroll, Brosig, Malkoff, & Bice-Urbach, 2021). Adjustments such as slowing the pace of communication and exaggerating hand gestures have been found to be more concerning for some providers. Additional noted concerns include how materials will be shared, how to connect with clients and families over video services, and how to handle crisis situations. Clients have also expressed concerns around data privacy and security (Rohland, Saleh, Rohrer, & Romitti, 2000). Although HIPAA-compliant videoconferencing services are now more widely available, clients and families have expressed concern about the security of video calls and how data are saved and stored.

Benefits of TBH services. Despite these identified limitations, research has also found a number of benefits of TBH services. 
TBH is generally considered to increase access to services and supervision, especially when working in more remote locations and with underserved populations. When trying to address the shortage of providers in certain locations, TBH services have been found to help reduce travel time and to be as cost-effective as face-to-face services once the higher initial cost of purchasing proper equipment is weighed against the reduced travel expenses over time (Ruskin et al., 2004; Schopp, Johnstone, & Merrell, 2000). TBH services have also been found to allow increased flexibility in how and when services are provided (Baltrinic, O’Hara, & Jenscius, 2016; Florell, 2016; Kroll et al., 2021). Providers and clients note a preference for reduced travel, the ease of observing in a variety of locations and settings, and increased flexibility in the times when clients can be seen (Kroll et al., 2021). This flexibility not only allows for observations at times when in-person meetings may not previously have been feasible (e.g., earlier or later in the day, connecting when a child is at school, connecting if there is illness in the home of the child or provider) but also allows for quick transitions between appointments, given that providers can end one video call and start another within minutes. Another significant benefit of TBH is that services delivered in this format are generally observed to be effective (Acierno et al., 2016; Myers & Turvey, 2013) and satisfactory (Luxton, Pruitt, & Osenbach, 2014; Swinton, Robinson, & Bischoff, 2009). Prior research has found that both provider and patient comfort and ability to connect while completing services over videoconferencing have generally been rated as positively as, or more positively than, face-to-face meetings. In particular, videoconferencing services can feel more comfortable for some populations, such as children who experience anxiety and are therefore more engaged when completing services over videoconferencing. Additionally, research has suggested that similar outcomes can be obtained when completing consultation, supervision, and intervention over videoconferencing formats (Florell, 2016; Miller, Clark, Veltkamp, Burton, & Swope, 2008; Rees, Krabbe, & Monaghan, 2009).

Practical considerations when preparing for TBH services

Various organizations, including the American Telemedicine Association (ATA), the American Psychological Association (APA), and the Coalition for Technology in Behavioral Science (CTiBS), have worked to develop practice guidelines around TBH. These include determining necessary outcomes to ensure the quality of practice; complying with relevant legislation, regulations, and accreditation requirements; ensuring that clients are aware of their rights and responsibilities; developing additional components of informed consent to address the use of technology; and adhering to the same standards and guidelines used during face-to-face treatment (APA, 2013; ATA, 2007; Maheu et al., 2021; Shore et al., 2018; Yellowlees, Shore, & Roberts, 2009). These organizations have additionally developed technical standards that include providing equipment that will support the diagnostic needs of the clinician, providing strategies to ensure the safe use of the equipment, complying with safety laws, complying with laws regarding confidentiality of consultee and client information, and developing policies for maintenance of equipment (APA, 2013; ATA, 2007; Shore et al., 2018; Yellowlees et al., 2009). All of these technical guidelines provide some understanding of the necessary considerations when developing programming
over videoconferencing and implementing services. This next section will highlight important considerations and recommendations drawn from previously developed guidelines for implementing TBH services for intervention, consultation, and/or supervision: (a) general requirements for implementation, (b) room and device set-up for developing a telepresence, and (c) training recommendations for preparing providers for TBH implementation. General Requirements. When engaging in TBH services, there are general factors to consider for evidence-based practice, namely equipment for providing services, system specifications, and legal and regulatory factors, all important for initial set-up and practice. Equipment and Software. Although the equipment and technology used to implement TBH services are rapidly advancing, with many devices capable of supporting audio and video calling, there are some basic considerations to recognize when choosing hardware and software for videoconferencing. In general, it is recommended that individuals involved in TBH services use a piece of equipment that includes a screen, camera, microphone, and speakers (e.g., tablet, computer). Although many phones include these capabilities, their screen size is small and often makes it more difficult to appropriately conduct necessary services. Phones are also difficult to use when screensharing, as the screen is too small to see the shared visuals while maintaining a view of the other individuals on the call. Although not a necessity, research has found it beneficial to use a camera with the capability to move (i.e., pan, tilt, and zoom), especially for ABA services where a child and provider may be active. Headphones are also recommended to help reduce the possibility of an echo over audio (ATA, 2009; Elliott, Abbass, & Cooper, 2016; Fischer et al., 2016).
It is also important to have display monitors with appropriate resolution and processing power in order to view video images without pixelation (ATA, 2009, 2014; Gibson, Pennington, Stenhoff, & Hopper, 2010). Additional considerations when choosing hardware for TBH services are the portability and connectivity of specific hardware, which may be more or less important depending on the location where services are being provided (Florell, 2016). For example, if a provider is completing TBH services in a consistent location where hardware will not be moving, a desktop computer or additional monitors connected to a device may be beneficial for allowing a large viewing screen. In comparison, if a provider is moving among multiple locations while completing TBH services, a more portable device (e.g., tablet, laptop) would be beneficial. Along with portability, providers may benefit from
using a device that connects with software programs used within the larger system where they are working (Florell, 2016). For example, a provider may choose a PC instead of an Apple product in order to better connect with software programs in a system that is also using PC products. In addition to the physical hardware components necessary for conducting TBH services, providers must also consider the software program that will be used to conduct services. Various software programs have been developed for videoconferencing services. Of greatest importance is choosing a HIPAA-compliant service, which requires a higher level of data security and confidentiality, when completing more direct services with clients and families (Novotney, 2011). Other software features that can enhance the videoconferencing experience include picture-in-picture, notification of dropped calls, notification of muted calls, screen-sharing, and instant messaging (ATA, 2009; Elliott et al., 2016). Fischer, Schultz, Collier-Meek, Zoder-Martell, and Erchul (2016) reviewed software packages commonly used within the telehealth literature and highlighted specific programs. Additional information about videoconferencing software options can also be found at www.telementalhealthcomparisons.com. System Specifications. In addition to choosing specific hardware and software, providers need to be aware of system specifications in order to provide high-quality TBH services. When conducting videoconferencing, these specifications matter on both ends of the call (e.g., supervisor and supervisee, consultant and consultee, provider and client). Guidelines suggest that both ends of the call require a bandwidth of at least 384 Kbps (ATA, 2009; McGinty et al., 2006; Shore et al., 2018) and an internet speed of at least 0.5 Mbps (Rousmaniere, 2014).
Video should also display at a minimum of 30 frames per second (Goldstein & Glueck, 2016). Meeting these minimum requirements is necessary for reducing the technology challenges that can occur in a video call, most commonly lagging audio, lagging video, and dropped calls. Legal and Regulatory Factors. Depending on the kind of services being provided and the setting in which a provider is employed, there are various legal and regulatory factors to consider when completing services over telehealth. Those not working in a “covered entity” (e.g., health care providers, health plans, health care clearinghouses), which requires HIPAA compliance, may still wish to pursue HIPAA-compliant practices in order to meet ethical standards for protecting and securing client health information. Given that services are being completed over various formats of technology, there are safety risks that can occur with regard to data security and client privacy.
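For teams that want to operationalize these thresholds, a pre-session check can be scripted. The sketch below is illustrative only: the function name and the idea of feeding it measured values are our own, but the three thresholds are the minimums cited above (384 Kbps bandwidth, 0.5 Mbps internet speed, 30 frames per second).

```python
# Minimum call specifications cited in the guidelines above.
MIN_BANDWIDTH_KBPS = 384   # ATA (2009); McGinty et al. (2006); Shore et al. (2018)
MIN_SPEED_MBPS = 0.5       # Rousmaniere (2014)
MIN_FRAME_RATE_FPS = 30    # Goldstein & Glueck (2016)

def check_call_specs(bandwidth_kbps, speed_mbps, frame_rate_fps):
    """Return a list of human-readable problems; an empty list means the
    connection meets all three cited minimums. Measured values would come
    from whatever speed-test or call-statistics tool is in use."""
    problems = []
    if bandwidth_kbps < MIN_BANDWIDTH_KBPS:
        problems.append(f"bandwidth {bandwidth_kbps} Kbps is below {MIN_BANDWIDTH_KBPS} Kbps")
    if speed_mbps < MIN_SPEED_MBPS:
        problems.append(f"internet speed {speed_mbps} Mbps is below {MIN_SPEED_MBPS} Mbps")
    if frame_rate_fps < MIN_FRAME_RATE_FPS:
        problems.append(f"frame rate {frame_rate_fps} fps is below {MIN_FRAME_RATE_FPS} fps")
    return problems

# A connection that meets all minimums returns no problems.
print(check_call_specs(512, 1.2, 30))  # -> []
```

Because the guidelines apply to both ends of the call, the same check would be run for the provider’s and the client’s connections.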

The Health Insurance Portability and Accountability Act of 1996 (HIPAA) provides specific expectations for the security and privacy of electronic protected health information. The Health Information Technology for Economic and Clinical Health (HITECH) Act was also passed in 2009 to strengthen civil and criminal enforcement of HIPAA rules. These laws highlight the importance of taking the steps necessary to maintain client privacy and data security, given that communication transmitted over technology runs the risk of being intercepted by a third party. When conducting TBH, choosing a HIPAA-compliant software program ensures that video and audio data will be secure and properly encrypted. Encryption and protection can occur through a point-to-point circuit, an Integrated Services Digital Network (ISDN), the Advanced Encryption Standard (AES), or a Virtual Private Network (VPN). HIPAA (2007) standards require a minimum of 128-bit encryption (Kramer, Mishkind, Luxton, & Shore, 2013). In addition to considering encryption during TBH videoconferencing services, providers should also consider how data and patient information are stored on their devices. With increased access to client information, clinicians must take care to ensure that data are stored and accessed through confidential means (Schwartz & Lonborg, 2011; Van Allen & Roberts, 2011). Password-protecting documents related to client information, using a password-protected email account, not using direct client names in the title or body of an email, and requiring a password to access data on your computer or phone are all beneficial for improving security and maintaining confidentiality (Eonta et al., 2011; Van Allen & Roberts, 2011; Yuen, Goetter, Herbert, & Forman, 2012). When preparing to set up TBH services for consultation, supervision, or treatment, it will be important to determine the steps being taken individually or within an organization to ensure client privacy and data security.
This information needs to be laid out and identified for all providers completing services over TBH. Creating a document that highlights how data are encrypted to ensure client privacy and data security can help providers answer subsequent questions from clients and families. Providing a checklist of specific procedures that providers must complete to ensure data security on their devices (e.g., password-protecting documents, not using full patient names in emails, password-protecting emails and devices) can also be useful for increasing adherence. These steps help develop expectations for practice within a larger system and ensure that families feel comfortable moving forward with TBH services knowing that their data will be protected.
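As one small illustration of the “no full patient names in emails” step, a clinic could run outgoing draft text through a redaction helper before sending. This sketch is hypothetical (the function name and the sample names are ours, not from any cited guideline) and would complement, not replace, the organizational safeguards described above.

```python
import re

def redact_client_names(text, client_names):
    """Replace each listed client name (case-insensitive) with initials,
    e.g., "Jane Doe" -> "J.D.". Only names on the provided list are
    redacted; this is a convenience check, not a guarantee of privacy."""
    for name in client_names:
        initials = "".join(part[0].upper() + "." for part in name.split())
        text = re.sub(re.escape(name), initials, text, flags=re.IGNORECASE)
    return text

draft = "Attaching session notes for Jane Doe; Jane Doe arrived on time."
print(redact_client_names(draft, ["Jane Doe"]))
# -> "Attaching session notes for J.D.; J.D. arrived on time."
```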

Another legal and regulatory consideration related to TBH is ensuring that practice occurs within the parameters of your license. At this time, each state controls its own licensing requirements, which means that each state determines the requirements for obtaining a license to practice. Although some changes have occurred through agreements between states, specifically around emergency services during the pandemic, providers are generally approved to practice only within the states in which they are licensed. This restriction can become more complicated when clients may be located in other states; current expectations around interjurisdictional practice require the provider to be licensed in the state where the client is located. Groups such as the Association of State and Provincial Psychology Boards (ASPPB) have developed the Psychology Interjurisdictional Compact (PSYPACT) in order to facilitate services across state boundaries, though each state is required to pass enabling legislation. At this time, 31 states participate in PSYPACT, with legislation currently in effect in 28 of them (https://psypact.site-ym.com/). Given the ongoing changes around licensing, it will be important to continuously review your state’s current regulations as they pertain to your practice license. In addition to legal considerations for TBH practice, various organizations have begun to develop guidelines for practice and ethical considerations. CTiBS has been working with various behavioral health areas (e.g., psychology, social work) to develop guidance specific to practice expectations in those fields. More generally, the ATA and APA have developed some guidance about practice over telehealth as well (APA, 2013; ATA, 2007). Telepresence.
Once decisions have been made regarding the technology equipment that will be used for completing TBH services, each provider needs to prepare their space for conducting services over videoconferencing. Guidelines often include specific recommendations around lighting, background, and camera position. When preparing your room, it is recommended that providers choose a space with a combination of direct and indirect lighting to minimize appearing washed out (Krupinski & Leistner, 2017). In general, room lighting that includes natural light or is as close to daylight as possible (e.g., incandescent instead of overhead lighting, full-spectrum bulbs, fluorescent daylight bulbs) is recommended (ATA, 2009; Krupinski & Leistner, 2017). The visible background on the screen should be neutral and uncluttered; for example, curtains and objects that might distract the other person on the call should be moved out of view. A clear, free wall is
recommended so that the brightest object in the screen view is the individual’s face (Shore et al., 2018; Yellowlees et al., 2009). Filters exist on some videoconferencing programs that allow individuals to choose a specific background that hides the physical background around them, or to blur the background behind the person speaking. These are alternative options if it is not possible to conduct a session in a more neutral location. If the provider’s desk is visible, removing papers and other visible clutter is recommended. Additionally, providers should consider the amount of background sound present and attempt to minimize excess noise by using headphones with a built-in microphone or a sound machine to block additional noise. Finally, recommendations for camera setup include placing the camera at the provider’s eye level to help improve natural eye contact. As much as possible, reducing the distance between the camera and where the provider looks at the computer screen helps create a more natural gaze. Training for TBH Services. In preparation for implementing services over TBH, it is recommended that all providers receive some level of training. Maheu et al. (2021) identified a competency framework for TBH practice that specifies clinical evaluation and care, virtual environment and telepresence, technology, legal and regulatory issues, evidence-based and ethical practice, mobile health technologies, and telepractice development. Providers will most likely fall into one of three levels: novice, proficient, and authority. Although not every provider would be expected to be an authority on TBH practice, all providers should have some level of proficiency prior to implementing services over videoconferencing.
TBH training offers an opportunity to increase competence and reduce attitudinal barriers to implementation (Bruno & Abbott, 2015; Gluekauf et al., 2018; Gray et al., 2015; Guise & Wiig, 2017; McCord, Saenz, Armstrong, & Elliott, 2015). When completing training, there are several options for sharing information effectively. Information can be presented through formal presentations, written handouts, role-playing or mock practice sessions, and further consultation. Formal presentations are useful for sharing information about best practices in TBH services and have been reported to be helpful in learning basic TBH information (Kroll et al., 2021). Handouts are also useful for highlighting key information (e.g., troubleshooting documents) that can be easily referenced at a later point in time. In addition, research has found that opportunities to view mock practice sessions over videoconferencing or to directly role-play through a session are important for increasing provider competence, increasing comfort with
videoconferencing, and being able to recognize ways to be flexible when completing services over videoconferencing (McClellan et al., 2020; Ruiz Morilla, Sans, Casasa, & Giménez, 2017). In addition to initial training, providers completing TBH services have expressed a desire for ongoing support as they gain proficiency, through either additional consultation sessions or connection with a telehealth champion (Wade et al., 2014). Despite best attempts to prepare providers for the transition to TBH services, some situations will arise unexpectedly once services are implemented regularly. Therefore, opportunities for consultation, or a contact person available to field questions, will help ensure ongoing support for any concerns that arise. When onboarding new staff, a training protocol should be developed that includes the initial training options (presentations on the topic, handouts, and opportunities to view mock sessions or live role-play) and ongoing supports through consultation or supervision sessions with a telehealth expert. Preparing to make the transition to TBH services requires initial time and thought around the various considerations discussed in this section. Although these recommendations require initial thought and effort, preparing for TBH implementation, especially within a larger system, helps ensure a smoother transition. For implementation to go smoothly, the proper equipment, the space for providing services, and training all need to be considered.

Practice guidelines when implementing TBH services

Once a system is in place that allows providers to implement services over TBH, a number of practice guidelines and considerations help to improve implementation. This section of the chapter will cover recommendations around (a) determining the appropriateness of providing services over TBH, (b) provider expectations for TBH implementation, (c) managing possible crises or common technology failures over TBH, (d) supervision over TBH, and (e) TBH for skills training. Determining Appropriateness of Services Over TBH. Although research has found TBH beneficial for direct service, consultation, and supervision across a variety of presenting concerns, TBH services are not appropriate for every situation. When completing direct service and consultation, providers need to consider whether the client and/or family could benefit from services over videoconferencing. Providers need to assess whether TBH is an appropriate fit for a client and whether more effective options are available.
Questions about the effectiveness of treatment over videoconferencing may center on client engagement, safety and behavior during sessions, and the ability to use technology. If encountered, these concerns should be communicated to clients and/or families and alternative options discussed (Shore et al., 2018; Yellowlees et al., 2009). If it is difficult to determine at the outset whether services will be effective, providers should take steps to reassess the appropriateness of services as treatment continues. Additionally, it is valuable to consider that services can be conducted as a combination of in-person and videoconferencing meetings, and to determine which combination is most effective for the services a provider is completing. For example, some treatment can occur solely over videoconferencing. Other forms of treatment may benefit from planned in-person meetings, to maintain connections or teach certain skills, with virtual follow-ups (e.g., meeting in person once a month and virtually the remaining three weeks of the month). Other providers may wish to complete teaching sessions in person but prefer that in-ear coaching happen virtually. Providers can determine the combination of in-person and virtual services they find most effective. If billing for services, it will be important to ensure that insurance companies will reimburse similarly across the in-person and virtual formats. Provider Expectations. As providers prepare to complete services over videoconferencing, various considerations and recommendations help to effectively translate services to a videoconferencing format. Guidance is provided around (a) informed consent; (b) setting clear expectations with a client, consultee, or supervisee; (c) session preparation and starting a session; and (d) practice recommendations for translating work to videoconferencing. Informed Consent.
When preparing to start services over TBH, clients and consultees need to be aware of the risks and benefits of TBH services. Informed consent is important for face-to-face services as well as videoconferencing; however, videoconferencing raises additional factors that should be discussed while individuals are consenting to services. Providers should share with clients and consultees some of the known risks and benefits of providing services over videoconferencing (Shore et al., 2018). In particular, highlighting research that suggests the effectiveness of TBH services, while discussing the increased risk of confidentiality breaches and session disruptions due to technology difficulties,
is important. It should also be discussed with clients and families that a provider has the right to end services over TBH if they no longer appear to be the best fit. Additionally, informed consent should specify the typical limitations to confidentiality (e.g., safety concerns) and discuss expectations around an emergency management plan if crises were to occur. Setting Clear Expectations for Clients, Consultees, and Supervisees. In preparation for the start of treatment, providers should set clear expectations with their clients, consultees, or supervisees around what is expected of them during service delivery and establish boundaries for treatment (Shore et al., 2018; Yellowlees et al., 2009). In particular, clients should receive information in advance of the start of treatment, ideally verbally when appointments are scheduled and through a written document that highlights expectations related to TBH services. Factors to discuss with families include technology expectations (e.g., families need access to devices with videoconferencing capabilities and a secured internet connection), recommendations for where to conduct sessions (e.g., choosing a place in the home that minimizes distractions), and what privacy will look like for sessions (e.g., highlighting parent involvement in a child session and possible parent expectations around behavior management). Further, there should be discussion about how information will be shared with families (e.g., phone calls, emails, through the electronic medical record), what to do if technology failures occur (e.g., troubleshooting steps such as restarting the device or moving to a phone call, and contact information if the call fails), and adherence to emergency management plans. Laying out these expectations in advance increases family knowledge and can lead to a smoother transition to TBH services. Fig. 1 is a checklist created for families within a hospital setting to bring attention to various factors in preparation for a telehealth visit. In addition to setting clear expectations, services over TBH also highlight the importance of setting appropriate boundaries with families. As technology becomes more integrated into treatment, providers need to set realistic expectations for clients, consultees, and supervisees around proper boundaries. Providers may need to address why they will not connect over social media (e.g., Facebook, LinkedIn, Instagram), delineate acceptable forms of communication (e.g., phone calls, messages through electronic medical records), and explain that responses may not be immediate. Given how quickly information can now be accessed, providers must clarify that they may not be able to respond to a client or consultee for 24 to 48 hours after receiving a message and confirm times when messages will not be received and addressed (e.g., after hours, weekends).

Telehealth Visit Checklist

Technology Needs. You will need:
- A computer, tablet, or cell phone with a camera and microphone installed. If using a computer or tablet, have your cell phone charged and nearby in case your provider needs to call you.
- A strong internet connection via mobile, Wi-Fi, or wired internet.

Preparing for Your Visit. A few days before the visit:
- Download the Zoom and MyChart apps on the device you plan to use for the visit. Practice logging into your child’s MyChart account.
- Find a good space for your visit. A good space is quiet, private, free from distraction, and well lit, so the provider can see you. Have light on your face, not behind you.
- Make sure you have a stable internet connection. Try to avoid driving, using elevators, and moving from one location to another when completing the visit. Engaging in such activities can lead to less reliable Wi-Fi and interfere with sound and picture quality during the visit.
- Charge your device for the visit.

A few minutes before your visit:
- Plug your device in, to ensure you have enough battery power for the visit.
- Make sure your webcam is at eye level.
- Make sure the volume on your device is turned up.
- Close any extra programs, apps, or windows on your device. Having these open can slow down your internet connection.

10 minutes before the visit:
- Open the MyChart app, click on the appointments tab, identify the correct appointment listing, complete the e-check-in for your visit, then click “Begin Video Visit.”

Created by: Bice-Urbach, B.J. and Rysdyk, S. (2021)

Fig. 1  Family telehealth visit checklist.

Preparing and starting TBH sessions. Translating services to TBH requires a level of advance preparation that may exceed what a provider requires when providing services face-to-face. Prior to starting a TBH session, providers need to assess the plan for treatment in the session and identify places where the transition to video services may require additional changes. One area to consider is the handouts or paper documents used within treatment. Providers should make a list of commonly used items and ensure that these are available in an electronic format, especially if they would be shared with a client, consultee, or supervisee. Some documents may be created during a session, such as goals or plans written out using a whiteboard feature. In this scenario, providers can save an image or take a screenshot of what was created in session in order
for clients or consultees to have access to the material after the session is complete. Providers should determine the best ways to share these documents with families, be it email or mailing hard copies, and whether these resources need to be shared before, during, or after a session. If information is sent in advance of scheduled meetings, allow sufficient time for documents to reach clients, families, consultees, or supervisees. In addition to document preparation, providers should consider which components of treatment would be more challenging to complete over videoconferencing and whether there are alternative options that would be more useful virtually. For example, instead of playing a game like Connect 4 with a child in person, there are options to use a computer-based Connect 4 game and share the screen. In general, providers may need to consider how to make virtual sessions more interactive, especially if working directly with children in a session. When ready to start a TBH session, the provider should have shared all relevant documents needed for the session or have them ready to view via screenshare on the applicable device. This preparation minimizes wasted time in a session and helps sessions feel closer to the in-person experience. When a session begins, the provider should start by scanning the room to confirm that no other individuals are present, or should verbally and visually identify other individuals present with the provider or on the call. Providers should also take time to highlight expectations for the day, review any materials that may be needed for conducting the session (e.g., paper, pencils, specific toys, food items), and encourage individuals to identify any technology concerns they notice immediately so that they can be addressed promptly. Practice Recommendations for Translating Services.
To make videoconferencing processes as effective as possible, it is recommended that providers sit far enough from the screen to be seen from the waist up, as many built-in computer cameras capture a provider only from the shoulders up when sitting close to the device. Providers should slow their rate of communication to help ensure individuals hear what is shared, even with minor delays or lags. Hand gestures should also be slower and completed at mid-chest level. As discussed previously, look not only at the video image on the screen but also directly into the camera to mimic direct eye contact. Additionally, if the provider leaves the screen, looks away from the screen, or is doing something while completing the session (e.g., typing), the provider should verbally explain what they are doing, for example, “If you see me looking down or my
arms and hands moving, that is because I am typing up what you share with me today.” Although these are general recommendations, research suggests that providers may need to adapt their practice to account for possible cultural differences (e.g., preferences around eye contact, comfort with certain gestures; Evason, 2016; Shore et al., 2018). As emphasized earlier, providers should prepare for how services may need to change over videoconferencing. Although being able to observe a child at home without the child being aware, or providing in-ear consultation to parents in the home setting, are advantages of videoconferencing, providers must determine how to most effectively translate their practice to a virtual format and whether there are components that are not appropriate to conduct virtually. Providers may wish to develop specific virtual checklists to ensure adherence to protocols over videoconferencing. Additionally, providers can get creative in using screensharing to make sessions more interactive. Screensharing can be used to share documents, highlight relevant information (e.g., showing a video example), or engage in more interactive activities such as accessing games online and using drawing features. Providers can grant the other person control over what is shown on the screen, or withhold this access if it is inappropriate to grant it. When videoconferencing with groups of people, providers may wish to utilize breakout rooms in order to offer opportunities for additional practice or to discuss varying topics with fewer individuals. Breakout rooms are especially useful when consulting with a larger group in order to maximize opportunities to engage and ask questions. Recommendations for Managing Emergencies and Troubleshooting Common Issues. When completing services over TBH, especially those delivered directly to a client, there is some risk of a safety or mental health emergency.
Given this risk, providers should have a specific protocol in place for managing emergencies (Shore et al., 2018). At the start of treatment, discussing the emergency management plan and gathering information (e.g., patient address, local EMS information, caregiver contacts) is essential. During services over TBH, the provider should monitor specific risks related to safety or mental health emergencies. The emergency management plan should outline how the provider will proceed if concerned about a safety or mental health emergency. Unique to the virtual format, specific care should be taken to ensure the provider knows what to do if the connection with the client is lost over videoconferencing during an emergency, so that the client receives necessary follow-up care.


Applied behavior analysis advanced guidebook

Because technology issues remain a major barrier to videoconferencing services in practice, access to information technology (IT) support staff may be beneficial for addressing certain troubleshooting barriers. However, in some situations reaching out to IT support staff may not be efficient, especially if the issue is something that could be addressed directly in the meeting. Ending contact with the client, consultee, or supervisee in order to reach IT can take a long time and may not yield an answer within the time scheduled for the meeting, which can be frustrating for all parties when the meeting must be rescheduled and large amounts of time are lost trying to connect. Although some factors are beyond a provider’s control, such as poor internet connectivity in rural locations or software updates to the videoconferencing program, some common underlying challenges can be addressed more effectively through protocols incorporated into a troubleshooting document. Fig. 2 is an example of such a document, created for providers in a hospital setting to give specific steps to follow for the technology issues that occurred most often. In general, common technology issues that would benefit from a specific protocol include difficulty connecting to or entering the video call, difficulties with sound, video not appearing, lagging sound or video, and audio echo.
Common first steps for troubleshooting involve encouraging the client to (a) check that audio and video are enabled for the call, (b) exit and re-enter the call, (c) shut down and restart the software (e.g., Zoom) or the entire device, (d) switch to a different device if possible, (e) check internet speed, (f) ensure the best available connection (e.g., home wireless internet instead of a phone’s cellular data), (g) use headphones or ensure that audio is routed properly (e.g., connected via Bluetooth to headphones or to the device itself), and (h) adjust the device volume up to an audible level. Although addressing these issues takes time, most can be solved quickly and easily by walking through basic troubleshooting steps, leaving more time for the purpose of the TBH service. In addition to having specific recommendations for when these common issues arise, providers should take time prior to the start of TBH meetings, or in the first meeting, to discuss some valuable considerations with clients, consultees, or trainees in order to enhance the experience over telehealth. There are some additional considerations that we know can give the best opportunity for a stronger connection and maximize the time connected to




Troubleshooting Common Concerns in Video Visits

Patient has not arrived to virtual appointment:
1. Wait 5-10 minutes for family to arrive
2. Call provided number in Epic to reach out to family: in voicemail, provide name and number for either your office line or the clinic in order for family to reach out if they are trying to connect

Patient unable to log in to MyChart:
1. Offer option to email link to families
2. Have families reach out to clinic front office for MyChart troubleshooting

Patient having issues getting sound:
1. Either verbally or by typing in chat, encourage family to check that the microphone at the bottom of the page is not crossed out (indicating they are muted)
2. Either verbally or by typing in chat, encourage family to check audio input (e.g., are they using phone audio, are they connected via Bluetooth to headphones)
3. Either verbally or by typing in chat, encourage family to ensure sound volume is up
4. If unable to actively get sound, call the family and use phone for audio (try if unsuccessful within 5 minutes of start of appointment)

Patient having issues getting video:
1. Ask family to check the video box at the bottom of the page is not crossed out (indicating video is off)
2. Ask family to ensure the device they are using has video and it is connected

Patient sound/video is lagging:
1. Have family check if they are connected to Wi-Fi or the best internet connection available to them
   a. You can have the patient look at the number of bars (4-5 is recommended) or have them use Google to find an internet speed test (above 10 Mbps)
      i. Directions for finding internet speed test: type "internet speed test" into Google search. A box should pop up with a button to run an internet speed test. A pop-up will show your device's Mbps.
   b. If on Wi-Fi, have patient move closer to the router or to the best area in the home for internet
2. If within first 15 minutes, you can have the family:
   a. Exit and restart call
   b. Try a different device
   c. Restart device
3. Call the family over the phone and mute sound on the video call

Hearing audio echo:
1. Place speakers away from laptop mic or lower the volume
2. Use headset or headphones

Patient's sound volume for hearing provider is very low when using a phone:
1. Have family check that volume is fully up on their phone
2. If possible, have patient try using a headset or headphones with their phone
3. Switch visit to audio call

Created by: Bice-Urbach, B.J. and Rysdyk, S. (2021)

Fig. 2  Provider troubleshooting document.
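The common first-step sequence described above is an ordered protocol: try each step in turn until the issue clears or the steps are exhausted and the meeting is rescheduled. The sketch below illustrates that logic in code. It is a minimal, hypothetical example; the step wording, the function name `run_checklist`, and the simulated resolution check are ours, not part of any published instrument.

```python
# Illustrative sketch of the ordered troubleshooting protocol.
# Step wording paraphrases the (a)-(h) recommendations in the text.

TROUBLESHOOTING_STEPS = [
    "Check that audio and video are enabled for the call",
    "Exit and re-enter the call",
    "Restart the software or the entire device",
    "Switch to a different device if possible",
    "Check internet speed",
    "Ensure the best available connection (e.g., home Wi-Fi, not cellular)",
    "Use headphones or confirm audio is routed properly",
    "Turn the device volume up to an audible level",
]

def run_checklist(issue_resolved_after):
    """Walk through steps in order; return (steps attempted, outcome).

    `issue_resolved_after` simulates the client's report: the 1-based
    step number at which the issue clears, or None if no step helps
    and the meeting should be rescheduled.
    """
    attempted = []
    for i, step in enumerate(TROUBLESHOOTING_STEPS, start=1):
        attempted.append(step)
        if issue_resolved_after is not None and i >= issue_resolved_after:
            return attempted, "resolved"
    return attempted, "reschedule"

# Example: the issue clears after re-entering the call (step 2).
steps, outcome = run_checklist(issue_resolved_after=2)
```

Encoding the steps as data rather than prose makes it easy to adapt the list to a particular setting (as the hospital document in Fig. 2 does) without changing the surrounding logic.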


a client, consultee, or supervisee in a virtual meeting. One example is ensuring that providers and clients remain in one location for the entirety of the video call. Although this may seem obvious to some, given the portability of many devices it is not uncommon for providers, clients, consultees, and supervisees to connect while traveling from one location to another. Despite the perceived convenience of taking a call while traveling and multitasking, doing so raises safety issues if the person is driving, impairs the ability to focus on the purpose of the meeting, and increases the risk of dropped calls as locations change. It is recommended that providers remain in one location and encourage those they are connecting with to do the same. Additionally, it is important to discuss with clients and consultees the importance of checking for device and software updates before the video call starts. If scheduling only a few meetings with an individual, it may be beneficial to recommend that the individual plan to begin connecting approximately 15 min before the first meeting. This allows time to download any updates to the software or device, ensure that they are able to connect to the program, restart the device if something is not working, and reach out before the start of the call if issues arise. If having more regular contact over video with an individual, you may recommend that they open their device 10 min early to ensure they have an opportunity to address any system updates that could delay the start of the video call. Providers should take time prior to starting a meeting over TBH, or in the first meeting, to discuss the planned steps if technology difficulties occur.
For example, providers should identify how a client, consultee, or supervisee should reach out if they are experiencing technology issues (e.g., by providing a phone or email contact). In addition, providers should obtain the best way to reach a client, consultee, or supervisee if the provider is experiencing technology difficulties or if the client has not yet connected to a call. For example, the protocol shared with families may be that the provider will call after 5 min if the family has not connected to the video call, in order to provide troubleshooting support. The provider may also wish to set a time limit on how long troubleshooting steps will be attempted before the meeting is rescheduled, for example, “We will reschedule if a solution to the issue is not found in the first 15 minutes of our scheduled time.” These steps clarify for those involved what to expect and reduce long periods spent attempting to connect.




Considerations for Supervision Over Videoconferencing. Telesupervision is valuable for addressing many barriers that occur when trying to complete in-person supervision (e.g., geographic distance, supervisees at multiple sites, and weather-related events; Sellers & Walker, 2019). Various forms of technology can be used in supervision to increase access to supervision time and enhance supervision practices. In particular, email communication with supervisees, bug-in-ear training, individual and group-based videoconferencing, and video/audio recordings of sessions enhance the supervision experience (Sellers & Walker, 2019). Despite these possible enhancements, in-person supervision remains valuable, especially when considering efforts to develop a strong and trusting relationship with a supervisee (Rousmaniere, 2014). Rousmaniere (2014) notes that the decision to use telesupervision should be driven not by convenience but by whether it can enhance the supervision experience available to a supervisee. As with intervention and consultation services over videoconferencing, supervisors should take initial steps to outline expectations with supervisees and provide direct teaching around videoconferencing etiquette (Florell, 2016). Supervisors should also be mindful of the most appropriate method of supervision given the situation and consider telesupervision when it enhances the training experience or reduces barriers (e.g., increasing frequency of supervision, offering supervision in situations where in-person options are not available; Sellers & Walker, 2019). Sellers and Walker (2019) include a decision-making guide that highlights when and how telesupervision services may be most useful. Enhancing Skills Training Over Videoconferencing.
Videoconferencing has also been used to increase access to skills training and enhance provider experiences in preparing to implement various interventions and skills. TBH training has been used in the ABA community specifically to overcome barriers of geographic distance in order to properly train parents, school staff, and graduate students in functional analysis (Baretto et al., 2006; Frieder, Peterson, Woodward, Crane, & Garner, 2009; Machalicek et al., 2009, 2010). At this time, technology is used for various skills training and continuing professional development opportunities including recorded webinars, virtual conferences, training sessions and professional consultation meetings (i.e., peer-to-peer learning), and online courses (Florell, 2019). Despite concerns about whether these virtual formats would be as effective for learning, research has broadly suggested that they are effective across a broad scope of learning topics and learner types (Fischer, Schumaker,
Culbertson, & Deshler, 2010; Means, Toyama, Murphy, Bakia, & Jones, 2010). Skills training is facilitated by repeated opportunities to view skills, such as in an online training where learners can return to the model to examine the skill multiple times (McCutcheon, Lohan, Traynor, & Martin, 2015). An identified barrier for virtual training can be a lack of interaction with peers or instructors when observing prerecorded sessions (Carver & Kosloski, 2015). Therefore, options for live interaction, such as planned virtual peer group meetings or additional consultation packages included with a prerecorded training, can enhance virtual skills training experiences.

Chapter summary

TBH services have grown rapidly over the last fifteen years, with even more rapid development due to the COVID-19 pandemic. As restrictions on in-person meetings lift within clinical settings, there is a valuable opportunity to examine current TBH practices and determine whether additional components should be considered to enhance the telehealth experience for providers, clients, consultees, and supervisees. A number of resources highlight recommended TBH guidelines, though it can be challenging to determine how to integrate those recommendations into your practice setting and apply them given the unique aspects of your practice. This chapter highlighted current research on the limitations, strengths, and evidence base for TBH services; guidance and recommendations for developing a system for providing TBH services and training others; and practice guidelines for adapting services from in-person to virtual formats and enhancing the implementation of TBH services. The chapter gives current recommendations and highlights important considerations given how TBH is commonly used within practice at this time. However, the recommendations are also meant to guide critical thinking about the considerations and adaptations that need to occur as we incorporate new technology into our practice. Finally, given the speed at which new technology is developed and can be used to enhance training and treatment, the chapter underscores the value of weighing legal and ethical factors while creatively adapting our work in ways that enhance the services that can be provided with new technologies.

References Acierno, R., Gros, D. F., Ruggiero, K. J., Hernandez Tejada, B. M. A., Knapp, R. G., Lujuez, C. W., et al. (2016). Behavioral activation and therapeutic exposure for posttraumatic stress
disorder: A noninferiority trial of treatment delivered in person versus home-based telehealth. Depression and Anxiety, 33, 415–423. https://doi.org/10.1002/da.22476. Alessi, N. (2000). Child and adolescent telepsychiatry: Reliability studies needed. Cyberpsychology & Behavior, 3(6), 1009–1015. https://doi.org/10.1089/109493100452273. American Psychological Association (APA). (2013). Guidelines for the practice of telepsychology. American Psychologist, 68(9), 791–800. https://doi.org/10.1037/a0035001. American Telemedicine Association. (2014). Core operational guidelines for telehealth services involving provider-patient interactions. Retrieved from www.americantelemed.org/resources/telemedicine-practice-guidelines/telemedicine-practice-guideline/core-operational-guidelines-for-telehealth-services-involving-provider-patient-interactions#.V9v_WmWhTnQ. American Telemedicine Association (ATA). (2007). Core standards for telemedicine operations. Retrieved from http://www.americantelemed.org/files/public/standards/CoreStandards_withCOVER.pdf. American Telemedicine Association (ATA). (2009). Practice guidelines for videoconferencing-based telemental health. Retrieved from http://www.americantelemed.org/files/public/standards/PracticeGuidelinesforVideoconferencing-Based%20TelementalHealth.pdf. Antezana, L., Scarpa, A., Valdespino, A., Albright, J., & Richey, J. A. (2017). Rural trends in diagnosis and services for autism spectrum disorder. Frontiers in Psychology, 8, 590. Baltrinic, E. R., O’Hara, C., & Jenscius, M. (2016). Technology-assisted supervision and cultural competencies. In T. Rousmaniere, & E. Renfro-Michel (Eds.), Using technology to enhance clinical supervision. Alexandria, VA: American Counseling Association. https://doi.org/10.1002/9781119268499.ch04. Baretto, A., Wacker, D. P., Harding, J., Lee, J., & Berg, W. K. (2006). Using telemedicine to conduct behavioral assessments. Journal of Applied Behavior Analysis, 39(3), 333–340.
https://doi.org/10.1901/jaba.2006.173-104. Bice-Urbach, B., & Rysdyk, S. (in press). Enhancing family and provider experience with the transition to virtual family feedback sessions for psychological evaluations in response to COVID-19. Bice-Urbach, B. J., & Kratochwill, T. R. (2016). Teleconsultation: The use of technology to improve evidence-based practices in rural communities. Journal of School Psychology, 56, 27–43. https://doi.org/10.1016/j.jsp.2016.02.001. Bice-Urbach, B. J., Kratochwill, T. R., & Fischer, A. J. (2017). Teleconsultation: Application to provision of consultation services for school consultants. Journal of Educational and Psychological Consultation, 28, 255–278. https://doi.org/10.1080/10474412.2017.1389651. Bishop, J. E., O’Reilly, R. L., Maddox, K., & Hutchinson, L. J. (2002). Client satisfaction in a feasibility study comparing face-to-face interviews with telepsychiatry. Journal of Telemedicine and Telecare, 8(4), 217–221. https://doi.org/10.1258/135763302320272185. Brenes, G. A., Ingram, C. W., & Danhauer, S. C. (2011). Benefits and challenges of conducting psychotherapy by telephone. Professional Psychology: Research and Practice, 42(6), 543–549. https://doi.org/10.1037/a0026135. Bruno, R., & Abbott, J. M. (2015). Australian health professionals’ attitudes toward and frequency of use of internet supported psychological interventions. International Journal of Mental Health, 44, 107–123. https://doi.org/10.1080/00207411.2015.1009784. Carver, D. L., & Kosloski, M. F. (2015). Analysis of student perception of the psychological learning environment in online and face-to-face career and technical education courses. The Quarterly Review of Distance Education, 16(4), 7–21. Cook, J. E., & Doyle, C. (2002). Working alliance in online therapy as compared to face-to-face therapy: Preliminary results. CyberPsychology and Behavior, 5(2), 95–105. https://doi.org/10.1089/109493102753770480. Day, S. X., & Schneider, P. L. (2002).
Psychotherapy using distance technology: A comparison of face- to-face, video, and audio treatment. Journal of Counseling Psychology, 49(4), 499–503. https://doi.org/10.1037/0022-0167.49.4.499.

212

Applied behavior analysis advanced guidebook

Elliott, J., Abbass, A., & Cooper, J. (2016). Group supervision using videoconference technology. In T. Rousmnaniere, & E. Renfro-Michel (Eds.), Using technology to enhance clinical supervision. Alexandria, VA: American Counseling Association. https://doi. org/10.1002/9781119268499.ch04. Eonta, A. M., Christon, L. M., Hourigan, S. E., Ravindran, N., Vrana, S. R., & Southam Gerow, M. A. (2011). Using everyday technology to enhance evidence-based treatments. Professional Psychology: Research and Practice, 42(6), 513–520. https://doi.org/10.1037/ a0025825. Evason, N. (2016). Cultural atlas: Japanese culture. https://culturalatlas.sbs.com.au/ japanese-culture/japanese-culturecommunication. Fischer, A. J., Dart, E. H., LeBlanc, H., Hartman, K., Steeves, R. O., & Gresham, F. M. (2016). An investigation of the acceptability of videoconferencing within a school-based behavioral consultation framework. Psychology in the Schools, 53, 240–252. https://doi. org/10.1002/pits.2016.53.issue-3. Fischer, A. J., Dart, E. H., Radley, K. C., Richardson, D., Clark, R., & Wimberly, J. (2016). Evaluating the effectiveness of videoconferencing as a behavioral consultation medium. Journal of Educational and Psychological Consultation, 1–22. https://doi.org/10.1080/1047 4412.2016.1235978. Fischer, A. J., Schultz, B. K., Collier-Meek, M. A., Zoder-Martell, K., & Erchul, W. P. (2016). A critical review of videoconferencing software to support school consultation. International Journal of School and Educational Psychology. https://doi.org/10.1080/21683 603.2016.1240129. Fischer, J. B., Schumaker, J. B., Culbertson, J., & Deshler, D. D. (2010). Effects of a computerized professional development program on teacher and student outcomes. Journal of Teacher Education, 64(5), 302–312. https://doi.org/10.1177/0022487113494413. Florell, D. (2016). Web-based training and supervision. In Computer-assisted and web based innovations in psychology, special education, and health (p. 313). Florell, D. 
(2019). Technology in professional learning. In A. J. Fischer, T. Collins, E. H. Dart, & K. C. Radley (Eds.), Technology applications in school consultation, supervision, and school psychology training. Routledge: New York, NY. Frieder, J. E., Peterson, S. M., Woodward, J., Crane, J., & Garner, M. (2009). Teleconsultation in school settings: Linking classroom teachers and behavior analysist through web-based technology. Behavior Analysis in Practice, 2(2), 32. https://doi.org/10.1007/BF03391746. Germain, V., Marchand, A., Bouchard, S., Guay, S., & Drouin, M. (2010). Assessment of the therapeutic alliance in face-to-face or videoconference treatment for posttraumatic stress disorder. Cyberpsychology, Behavior and Social Networking, 13(1), 29–35. https://doi. org/10.1089/cyber.2009.0139. Gibson, J. L., Pennington, R. C., Stenhoff, D. M., & Hopper, J. S. (2010). Using desktop videoconferencing to deliver interventions to a preschool student with autism. Topic in Early Childhood Special Education, 29(4), 214–225. https://doi. org/10.1177/0271121409352873. Glueckauf, R. L., & Ketterson, T. U. (2004). Telehealth interventions for individuals with chronic illness: Research review and implications for practice. Professional Psychology: Research and Practice, 35(6), 615–627. https://doi.org/10.1037/0735-7028.35.6.615. Gluekauf, R., Maheu, M., Drude, K., Wells, B., Wang,Y., Gustafson, D., et al. (2018). Survey of psychologists’ telebehavioral health practices: Technology use, ethical issues and training needs. Professional Psychology: Research and Practice, 49(3), 205–219. https://doi. org/10.1037/pro0000188. Goldstein, F., & Glueck, D. (2016). Developing rapport and therapeutic alliance during telemental health sessions with children and adolescents. Journal of Child and Adolescent Psychopharmacology, 26(3), 204–211. Gray, M. J., Hassija, C. M., Jaconis, M., Barrett, C., Zheng, P., Steinmetz, S., et  al. (2015). 
Provision of evidence-based therapies to rural survivors of domestic violence and sexual
assault via telehealth: Treatment outcomes and clinical training benefits. Training and Education in Professional Psychology, 9, 235–241. https://doi.org/10.1037/tep0000083. Guise,V., & Wiig, S. (2017). Perceptions of telecare training needs in home healthcare services: A focus group study. BMC Health Services Research, 17, 164. https://doi.org/10.1186/ s12913-017-2098-2. Health Information Technology for Economic and Clinical Health Act (HITECH). (2009). Security standard. Retrieved from http://www.cms.gov/Medicare/ Compliance-and-Audits/Par t-A-Cost-Reor t-Audit-and-Reimbursement/ HITECH-Audits. Health Insurance Portability and Accountability Act (HIPAA). (2007). Security standard. Retrieved from http://www.cms.hhs.gov/SecurityStandard/. Hilty, D. M., Maheu, M. M., Drude, K., Hertlein, K., Wall, K., Long, R., et  al. (2017). Telebehavioral health, telemental health, e-therapy and e-health competencies: The need for an interdisciplinary framework. Journal of Technology in Behavioral Science, 2(3–4), 171–189. https://doi.org/10.1007/s41347-017-0036-0. Kennedy, C., & Yellowlees, P. (2000). A community-based approach to evaluation of health outcomes and costs for telepsychiatry in a rural population: Preliminary results. Journal of Telemedicine and Telecare, 6, 155–157. https://doi.org/10.1258/1357633001934492. Kramer, G. M., Mishkind, M. C., Luxton, D. D., & Shore, J. (2013). Managing risk and protecting privacy. In K. Myers, & C. L. Turvey (Eds.), Telemental health: An overview of legal, regulatory, and risk-management issues (pp. 83–107). Boston, MA: Elsevier. Kroll, K., Brosig, C., Malkoff, A., & Bice-Urbach, B. (2021). Evaluation of a systems-wide telebehavioral health training implementation in response to COVID-19. Journal of Patient Experience, 8, 1–4. https://doi.org/10.1177/2374373521997739. Krupinski, E., & Leistner, G. (2017). Let there be light: A quick guide to telemedicine lighting. Retrieved from http://higherlogicdownload.s3.amazonaws. 
com/AMERICANTELEMED/618da447-dee1-4ee1-b941c5bf3db5669a/ UploadedImages/NEW%20Practice%20Guidelines/Let_There_Be_Light_Qui ck_ Guide.pdf. (Accessed 4 February 2020). Lindgren, S., Wacker, D., Suess, A., Schieltz, K., Pelzel, K., Kopelman, T., et  al. (2016). Telehealth and autism:Treating challenging behavior at lower cost. Pediatrics, 137, S167– S175. https://doi.org/10.1542/peds.2015-2851O. Luxton, D. D., Nelson, E. L., & Maheu, M. M. (2016). Telesupervision and training in telepractice: How to conduct legal, ethical, and evidence-based telepractice. New York: American Psychological Association. Luxton, D. D., Pruitt, L. D., & Osenbach, J. E. (2014). Best practices in remote psychological assessment via telehealth technologies. Professional Psychology: Research and Practice, 45, 27–35. https://doi.org/10.1037/a0034547. Machalicek,W., O’Reilly, M., Chan, J. M., Rispoli, M., Lang, R., Davis,T., et al. (2009). Using VC to support teachers to conduct preference assessments with students with autism and developmental disabilities. Research in Autism Spectrum Disorders, 3, 32–41. https:// doi.org/10.1016/j.rasd.2008.03.004. Machalicek, W., O’Reilly, M. F., Rispoli, M., Davis, T., Lang, R., Franco, J. H., et  al. (2010). Training teachers to assess the challenging behaviors of students with autism using video tele-conferencing. Education and Training in Autism and Developmental Disabilities, 45, 203–215. Maheu, M. M.,Wright, S. D., Neufeld, J., Drude, K. P., Hilty, D. M., Baker, D. C., et al. (2021). Interprofessional telebehavioral health competencies framework: Implications for telepsychology. Professional Psychology: Research and Practice, 52(5), 439–448. https://doi. org/10.1037/pro0000400. Maheu, M., Pulier, M. L., McMenamin, J. P., & Posen, L. (2012). Future of telepsychology, telehealth, and various technologies in research and practice. Professional Psychology: Research and Practice, 43(6), 613–621.


McClellan, M., Florell, D., Palmer, J., & Kidder, C. (2020). Clinician telehealth attitudes in a rural community mental health center setting. Journal of Rural Mental Health, 44, 62–73. https://doi.org/10.1037/rmh0000127. McCord, C. E., Saenz, J. J., Armstrong, T. W., & Elliott, T. R. (2015). Training the next generation of counseling psychologists in the practice of telepsychology. Counseling Psychology Quarterly, 28, 324–344. https://doi.org/10.1080/09515070.2015.1053433. McCutcheon, K., Lohan, M., Traynor, M., & Martin, D. (2015). A systematic review evaluating the impact of online or blended learning vs. face-to-face learning of clinical skills in undergraduate nurse education. Journal of Advanced Nursing, 71(2), 255–270. https:// doi.org/10.1111/jan.12509. McGinty, K. L., Saeed, S. A., Simmons, S. C., & Yildirim, Y. (2006). Telepsychiatry and e-mental health services: Potential for improving access to mental health care. Psychiatric Quarterly, 77, 335–342. https://doi.org/10.1007/s11126-006-9019-6. Means, B., Toyama,Y., Murphy, R., Bakia, M., & Jones, K. (2010). Evaluation of evidence-based practices in online learning: A meta-analysis and review of online learning studies. Washington DC: U.S. Department of Education. https://doi.org/10.1016/j.chb.2005.10.002. Miller, T. W., Clark, J., Veltkamp, L. J., Burton, D. C., & Swope, M. (2008). Teleconferencing model for forensic consultation, court testimony, and continuing education. Behavioral Sciences & the Law, 26, 301–313. Myers, K., & Turvey, C. L. (Eds.). (2013). Telemental health: Clinical, technical, and administrative foundations for evidence-based practice. Boston, MA: Elsevier. Novotney, A. (2011). A new emphasis on telehealth: How can psychologists stay ahead of the curve-and keep patients safe? Monitor on Psychology, 42(6), 40–44. Rees, C. S., Krabbe, M., & Monaghan, B. J. (2009). Education in cognitive-behavioral therapy for mental health professionals. Journal of Telemedicine and Telecare, 15, 59–63. 
https://doi. org/10.1258/jtt2008.008005. Richardson, L. K., Frueh, B. C., Grubaugh, A. L., Egede, L., & Elahi, J. D. (2009). Current direction in videoconferencing tele-mental health research. Clinical Psychology, 16(3), 323–338. Rohland, B. M., Saleh, S. S., Rohrer, J. E., & Romitti, P. A. (2000). Acceptability of telepsychiatry to a rural population. Psychiatric Services, 51(5), 672–674. https://doi.org/10.1176/ appi.ps.51.5.672. Rousmaniere,T. (2014). Using technology to enhance clinical supervision and training. In C. E. Watkins, & D. L. Milne (Eds.), The Wiley international handbook of clinical supervision (pp. 204–237). Chichester: Wiley-Blackwell. Ruiz Morilla, M. D., Sans, M., Casasa, A., & Giménez, N. (2017). Implementing technology in healthcare: Insights from physicians. BMC Medical Informatics and Decision Making, 17, 92. https://doi.org/10.1186/s12911-017-0489-2. Ruskin, P. E., Silver-Aylaian, M., Kling, M. A., Reed, S. A., Bradham, D. D., Hebel, J. R., et al. (2004). Treatment outcomes in depression: Comparison of remote treatment through telepsychiatry to in-person treatment. The American Journal of Psychiatry, 161, 1471–1476. https://doi.org/10.1176/appi.ajp.161.8.1471. Schopp, L., Johnstone, B., & Merrell, D. (2000). Telehealth and neuropsychological assessment: New opportunities for psychologists. Professional Psychology: Research and Practice, 31, 179–183. https://doi.org/10.1037/0735-7028.31.2.179. Schwartz, T. J., & Lonborg, S. D. (2011). Security management in telepsychology. Professional Psychology: Research and Practice, 42(6), 419–425. https://doi.org/10.1037/a0026102. Scott Kruse, C., Karem, P., Shifflett, K., Vegi, L., Ravi, K., & Brooks, M. (2018). Evaluating barriers to adopting telemedicine worldwide: A systematic review. Journal of Telemedicine and Telecare, 24, 4–12. https://doi.org/10.1177/1357633X1 6674087. Sellers, T., & Walker, S. (2019). Telesupervision: In-field considerations. In A. J. Fischer, T. Collins, E. H. Dart, & K. C. 
Radley (Eds.), Technology applications in school consultation, supervision, and school psychology training. Routledge: New York, NY.




Shore, J. H., Yellowlees, P., Caudill, R., Johnston, B., Turvey, C., Mishkind, M., et al. (2018). Best practices in videoconferencing-based telemental health April 2018. Telemedicine Journal and e-Health, 24(11), 827–832. https://doi.org/10.1089/tmj.2018.0237. Suess, A. N., Wacker, D. P., Schwartz, J. E., Lustig, N., & Detrick, J. (2016). Preliminary evidence on the use of telehealth in an outpatient behavior clinic. Journal of Applied Behavior Analysis, 49, 686–692. https://doi.org/10.1002/jaba.305. Swinton, J. J., Robinson, W. D., & Bischoff, R. J. (2009). Telehealth and rural depression: Physician and patient perspectives. Families, Systems & Health, 27, 172–182. https://doi. org/10.1037/a0016014. Van Allen, J., & Roberts, M. C. (2011). Critical incidents in the marriage of psychology and technology: A discussion of potential ethical issues in practice, education, and policy. Professional Psychology: Research and Practice, 42(6), 433–439. https://doi.org/10.1037/ a0025278. Wacker, D. P., Lee, J. F., Padilla Dalmau,Y. C., Kopelman, T. G., Lindgren, S. D., Kuhle, J., et al. (2013). Conducting functional communication training via telehealth to reduce the problem behavior of young children with autism. Journal of Developmental and Physical Disabilities, 25, 35–48. https://doi.org/10.1007/s10882-012-9314-0. Wade, V. A., Eliott, J. A., & Hiller, J. E. (2014). Clinician acceptance is the key factor for sustainable telehealth services. Qualitative Health Research, 24, 682–694. https://doi. org/10.1177/1049732314528809. Yellowlees, P., Shore, J., & Roberts, L. (2009). Practice guidelines for ­videoconferencing-based telemental health. Telemedicine Journal and e-Health. https://doi.org/10.1089/ tmj.2010.0148. Yuen, E. K., Goetter, E. M., Herbert, J. D., & Forman, E. M. (2012). Challenges and opportunities in internet-mediated telemental health. Professional Psychology: Research and Practice, 43(1), 1–8. https://doi.org/10.1037/a0025524.

This page intentionally left blank

CHAPTER 9

Data recording and analysis

David J. Cox (a), Asim Javed (a), Jacob Sosine (a,b), Clara Cordeiro (a), and Javier Sotomayor (a,c)

(a) Behavioral Data Science Research Lab, Endicott College, Beverly, MA, United States
(b) Behavioral Health Center of Excellence, Los Angeles, CA, United States
(c) Habita Behavioral Care, San Diego, CA, United States

Data are a permanent product of many human behaviors and often become a visual stimulus to which scientists respond. As a result, the concepts of data accuracy, data reliability, and data validity are defined relative to human behavior and our choices in how to collect, store, and analyze data. Ask: What does it mean to convert interconnected physical events that dynamically change over time into discrete and static stimuli of a different physical structure (i.e., what does it mean to "collect data")? How do we know whether we have done this well or poorly? And, as a functional-contextual science, how do behavior analysts create and leverage technologies for data collection, storage, and analysis that best serve the empirical or clinical purposes for which the data are used? To fully appreciate the functional role of data recording and analysis in a science of behavior, we must understand what options are available to us as scientists and practitioners and the conditions under which those options are more or less likely to be useful. The purpose of this chapter is to outline those nuts, bolts, and tools.

This chapter is organized into three main sections. First, we begin with the antecedent context in which we find ourselves, providing a general overview of data recording and analysis in the current behavior analytic literature. This first section also includes a brief introduction to maps and topology, which arguably supply the generalized framework used to construct classic single-subject experimental design graphs in behavior analysis. Contextualizing popular behavior analytic graphs in this way opens the door to creative and alternative methods of using data to explore behavior-environment relations. Once contextualized, the second section of this chapter dives into data and analytics. Specifically, we discuss data types, data recording methods, and data analysis techniques likely to be familiar and unfamiliar to behavior analysts. Finally, we close the chapter by discussing technologies and tools that are currently transforming what is possible with data collection and analytics in behavior analysis, summarizing, and providing recommendations regarding this exciting area of science.

Applied Behavior Analysis Advanced Guidebook. https://doi.org/10.1016/B978-0-323-99594-8.00009-X
Copyright © 2023 Elsevier Inc. All rights reserved.

Contextualizing data and visual displays of information

The critical nature of data to a behavior science might be best exemplified by the first section of the first chapter in The Behavior of Organisms (Skinner, 1938): Behavior as a Scientific Datum. After arguing why data for a behavior science ought to be at the level of whole-organism behavior and the corresponding environmental conditions in which it occurs, Skinner (1938) turned in chapter two to focus on how best to control your context when collecting data (i.e., experimental method) so the data are useful once collected. Entire books have been written on experimental methods for the behavior sciences (Sidman, 1960; Skinner, 1938) and we need not revisit them here. However, we do highlight that if behavior as datum was the most important topic to Skinner, the second most important topic was exactly how those data were recorded. This is exemplified by the first figure in the book being dedicated to the operant conditioning chamber and the automated recording of lever presses during an experiment. Restated for emphasis: the two most fundamental steps toward getting two scientists or practitioners on the same conversational plane are agreeing on what data were collected and on how they were collected. Though perfect for small organisms such as rats and pigeons, operant chambers were not quite up to the task when behavior analysts moved into applied settings and began working with human participants (Ayllon & Michael, 1959). The expansion of behavior analysis to new settings and populations required an expansion of what behavior-environment relations were recorded and how they were recorded. Fast forward to today: data recording methodologies have continued to adapt and diversify to suit the many needs and environments within which behavior analysts practice and conduct research. In short, behavior analysts now have many options for recording data that clarify behavior-environment relations.
A common tactic for putting data to practical use is to display them visually in a graph, where a graph can be defined as a two-dimensional visual
structure that models recorded relationships between two or more objects (Bondy & Murty, 1976). Graphs can be considered one instance of a map from the field of topology (e.g., Husain, 1977). Many people are familiar with geographical maps that contain a 1:1 direct correspondence with the distance between two physical points in the world. Single-subject graphs similarly contain a 1:1 direct correspondence between patterns of behavior (y-axis), the environment (condition labels), and how those relations change in time (x-axis). For example, the second figure in The Behavior of Organisms (Skinner, 1938) shows sample lines of defined slopes for the cumulative records the reader contacts throughout the rest of the book. There are several critical points to be made about Fig. 2 in The Behavior of Organisms (Skinner, 1938; p. 59). First, there is nothing inherently perfect or beautiful about the dimensions shown in this figure. The length of the x-axis could have been longer or shorter, the length of the y-axis could have been longer or shorter, and the relative scale of the y-axis to the x-axis could have been altered so that 1 response per min fell at a 45° angle from the origin as opposed to the approximately 20° angle shown. Second, this graph displays how behavior changes in time via contact with a contingency. However, there is no physical law we are aware of that requires "responses" to be plotted on the y-axis, "time" to be plotted on the x-axis, and both axes to be arranged perpendicular to each other. Finally, unstated anywhere in the graph are the fixed environmental conditions under which the data were obtained (e.g., FR 2, VI 60 s). More broadly, the graphical display involves a set of arbitrary decisions for how to convert observations of the physical world into a two-dimensional visual structure that models the recorded relationships between a schedule of reinforcement and cumulative rates of responses over time.
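As a concrete illustration of how a cumulative record is built (the per-minute counts below are hypothetical, not data from the chapter), the running total plotted on the y-axis can be derived in a few lines; steeper segments correspond to higher local response rates:

```python
from itertools import accumulate

# Hypothetical per-minute response counts from one session.
per_min = [0, 2, 3, 1, 0, 4, 2]

# A cumulative record plots the running total of responses over time;
# flat segments mean no responding, steep segments mean high rates.
cumulative = list(accumulate(per_min))
print(cumulative)  # [0, 2, 5, 6, 6, 10, 12]
```

Plotting `cumulative` against minute number reproduces the familiar cumulative-record shape, with the arbitrary axis-scaling choices discussed above left to the analyst.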
But, by establishing consistent dimensions along which collected data are displayed visually, scientists can begin to "see the same thing", tact similar patterns, and create the same environmental arrangements to cause similar patterns of behavior within their own research or practice settings. Visual displays of recorded data, thus, serve a function. As a set of stimuli arranged in a particular way, a person with the requisite conditioning can quickly learn about behavior-environment relations and use that information to manipulate the physical world more efficiently. To serve this function, the graph viewer must understand the relationship between what data are being collected, how they are being collected, and the question being answered. Importantly, the learned nature of scientific behaviors means that alternative behaviors can serve the same function. There might be many
ways to record, graphically display, and analyze data that lead a scientist to a better understanding of behavior-environment relations; and it is unlikely any single approach is the most effective at fulfilling this function for all scientific questions and in all situations (Barnes-Holmes et al., 2001; Cox, 2019). Variation and selection are hallmark assumptions of the evolutionary sciences (among which behavior analysis can be counted; Skinner, 1981). Variation within a population allows that population to adapt over time and to changing conditions, be it variation in the physical traits of members of a species (Darwin, 1859) or the physical topographies of members of a response class (Skinner, 1981). As recording, graphically displaying, and analyzing data are all behaviors emitted within the population of scientists and practitioners, variation and selection have shaped what can be claimed as the current data recording, data graphing, and data analysis conventions in behavior analysis (Cooper, Heron, & Heward, 2020; Parts 2 and 3). Next, we review these current conventions before introducing variability so that readers can select the data recording, graphical display, and data analysis methods that best allow them to answer the questions they have about behavior-environment relations.

Data recording and analysis

Many different data types exist but some are more commonly observed than others. In this section, we discuss the most common data types and recording methods used by behavior analysts and provide examples relating to skill acquisition and behavior reduction.

Common data types in ABA

Frequency and Rate. Frequency and rate are arguably the most common types of data collected in behavior analysis. At the same time, some confusion exists around how these two terms relate to each other (for a detailed discussion, see Merbitz, Merbitz, & Pennypacker, 2015). Frequency can be described as counting or tallying the number of times a behavior occurs, and rate can be described as the number of times a behavior occurs within a specified period. Frequency and rate measures are appropriate for questions like, "How many times did the response occur?" or "How often does the student do that?". Examples of frequency (rate) measures might be the number of times a student correctly names an object in a 60 min session and the number of times a student hits their head per min. Notably, some
authors argue that all behavior occurs across time (Merbitz et al., 2015), making the distinction between frequency and rate trivial. Duration. Duration is the length, or amount of time, that a behavior spans. Duration measures are most easily and consistently collected across people if the behavior is operationally defined so that it has a distinct start and a distinct end (Mudford, Martin, Hui, & Taylor, 2009). Duration measures are appropriate for questions like, "How long did this occur for?" or "How much time does he spend doing that?". Examples of duration data are the amount of time a student engaged with a learning task and the length of an episode of self-directed head-hitting. Latency. Latency, sometimes referred to as "delay to responding", is another data type closely related to duration because it uses time as the primary datum. Whereas duration is the length of time a behavior or episode lasts, latency is typically defined as the length of time between the onset of an antecedent stimulus and the onset (or offset) of the behavior of interest. Latency measures are appropriate for questions like, "How much time did it take the student to respond?" or "How long after beginning work did the student display the behavior?" Some examples include the time between a teacher presenting an SD (e.g., "What's that?") and a student answering (e.g., "It's a rainbow!") and the time between a teacher presenting work materials and a student hitting their head. Percentage. Percentage is a proportion that references two values: a part and a whole. The part is divided by the whole, and then multiplied by 100 to generate a percentage. A percentage data type answers questions like, "How many times did the student get the right answer out of all opportunities to respond?" or "What percentage of a session was spent on this task?".
Some examples include the percentage of correct responses on a labeling task (i.e., correct responses divided by total responses, multiplied by 100) and the percentage of time spent in a hitting episode during a specific task (i.e., duration of the episode divided by duration of the session, multiplied by 100). A commonly derived measure from percentage is the change in percentage over time based on the presence or absence of an intervention. Percent change is appropriate for questions like, "How has the behavior changed with intervention?" or "Compared to before, is the student doing this more or less often?" The formula for percent increase is: Percent Increase = [(new value − previous value)/(previous value)] × 100. Some examples are the percent increase or decrease in correct responses from one session to another and the percent increase or decrease in time spent in hitting episodes from one day to another.
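The measures above reduce to simple arithmetic. The sketch below (the helper names are our own illustrative choices, not terms from the chapter) shows one way each could be computed:

```python
def rate_per_min(count, session_minutes):
    """Rate: number of responses divided by observation time."""
    return count / session_minutes

def latency_s(sd_onset_s, response_onset_s):
    """Latency: seconds from antecedent stimulus onset to response onset."""
    return response_onset_s - sd_onset_s

def percentage(part, whole):
    """Percentage: part divided by whole, multiplied by 100."""
    return (part / whole) * 100

def percent_change(new_value, previous_value):
    """Percent increase (positive) or decrease (negative) between sessions."""
    return ((new_value - previous_value) / previous_value) * 100

print(rate_per_min(12, 60))   # 12 correct tacts in a 60 min session -> 0.2 per min
print(percentage(8, 10))      # 8 of 10 trials correct -> 80.0
print(percent_change(10, 8))  # 8 then 10 correct responses -> 25.0 (% increase)
```

Note that `percent_change` returns a negative number for a decrease, matching the "increase or decrease" framing above.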


Trials to Criterion. A trial, sometimes referred to as a learning opportunity or learn unit (Greer & McDonough, 1999; Sigafoos, Carnett, O'Reilly, & Lancioni, 2019), consists of an SD (e.g., "Name this"), a response (e.g., a learner vocalizes, "tree"), and a consequence (e.g., a teacher provides praise by vocalizing "You got it!"). A criterion is a pre-established standard or benchmark for responding that indicates the targeted end of an intervention for a specific response (for a detailed discussion of mastery criteria, see Fienup & Carr, 2021). Meeting criterion can initiate either another stage of intervention for the same response, which establishes a new response criterion, or a generalization or maintenance phase. Trials to criterion, then, refers to the number of learning opportunities needed until responding met a specified standard. Trials to criterion may answer questions like, "How quickly is this student learning?" or "How long did it take for her to generalize responding to other environments?". Examples are the number of trials for a student to correctly name 100% (criterion) of the objects in an array of 10 items and the number of trials for a student to reduce head hitting to below 5 times a session (criterion).
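Trials to criterion can be derived mechanically from a trial-by-trial record. Here is a minimal sketch; the function name and the three-consecutive-correct criterion are our own illustrative assumptions, not a standard from the chapter:

```python
def trials_to_criterion(trials, n_consecutive=3):
    """Count trials until `n_consecutive` correct responses occur in a row.
    `trials` is a sequence of booleans (True = correct response).
    Returns None if the criterion is never met."""
    run = 0
    for trial_number, correct in enumerate(trials, start=1):
        run = run + 1 if correct else 0
        if run == n_consecutive:
            return trial_number
    return None

# Two errors, a broken run of correct responses, then three in a row:
log = [False, False, True, True, False, True, True, True]
print(trials_to_criterion(log))  # 8
```

In practice the criterion would mirror whatever mastery standard the behavior plan specifies (e.g., percentage correct across sessions rather than a consecutive-correct run).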

Common recording methods in ABA

Numeric Event Recording. Event recording refers to the many different procedures researchers or practitioners might use to tally or count instances of behavior-environment relations (Cooper et al., 2020). This typically involves defining a particular antecedent context and the desired behavior whose presence or absence we want to count. For example, we might be interested in the event of correct joint tact-intraverbal control of the response, "It's a rainbow." Thus, we may count the number of times during a session where the requisite SD meant to evoke the response is presented, as well as the number of times the learner emits the correct response. Note this may occur in a "discrete trial" arrangement (Lerman, Valentino, & LeBlanc, 2016) or a "natural environment teaching" arrangement (Weiss, 2001). The focus is on targeting data collection around a defined event (i.e., an SD-response relation). Many of the examples in the previous section would fall under the broad umbrella label of event recording where numeric data are collected. ABC Recording. ABC data recording involves collecting data on the stimuli before (antecedent—A) and after (consequence—C) a behavior (B) of interest. Compared to the previous data types, ABC data recording differs in that the data are often recorded in a textual format instead of a numeric one.
ABC data recording can help identify patterns within antecedent-behavior-consequence relations to generate hypotheses about the function of the behavior. ABC data are appropriate for questions like, "Why does this behavior occur?" or "What might be maintaining this behavior?" Typically, the answers to these questions are textual descriptions, such as describing the behavior a student demonstrates in response to an antecedent condition and the behavior-contingent consequence implemented by a teacher. ABC data are typically useful only insofar as we can easily analyze the textual responses collected. To do this well often involves transposing the textual data into a numeric format (discussed in greater detail below). Permanent Product. Permanent product recording involves measuring the effects that a behavior has on the environment (Cooper et al., 2020). This requires that the behavior alters the environment, and that the alteration remains relatively unchanged for a duration long enough for the data collector to observe the change and record the data they need. A permanent product approach may be appropriate for questions like, "How do I measure the response without having to collect data in the moment?". Of note, the response does not have to naturally leave a permanent product. Tactics such as video recording or audio recording a session are methods for creating a permanent product that can be used later for data collection. Examples of permanent product data recording methods are measuring the number of correct responses on a naming task from a worksheet and recording surface tissue damage from self-injury (Iwata, Pace, Kissel, Nau, & Farber, 1990). Time Sampling. With time sampling methods, data are collected on whether a behavior occurs during specified intervals rather than continuously as it occurs.
Common time sampling methods include whole-interval recording (Collier-Meek, Sanetti, Gould, & Pereira, 2021; Houten & Nau, 1980; Ricciardi, Luiselli, & Camare, 2006), partial-interval recording (Mace & Lalli, 1991; Nefdt, Koegel, Singer, & Gerber, 2010; Roane, Vollmer, Ringdahl, & Marcus, 1998), and momentary time sampling (e.g., Harrop & Daniels, 1986; Nankervis, Ashman, Weekes, & Carroll, 2020). With whole-interval recording, the behavior must occur throughout the entire interval. In contrast, for partial-interval recording, the behavior can occur at any point during the interval. With momentary time sampling, behavior must occur at a specific time point (e.g., the end of the interval). Summarizing via example: if a teacher has a datasheet of 15 min intervals from 12:00 pm to 1:00 pm (i.e., 12:00–12:15, 12:15–12:30, 12:30–12:45, and 12:45–1:00 pm) on which to record a student's head-hitting, the different recording methods would be whole interval (the student hits head with no
more than a 30 s pause between hits for the entire 15 min interval), partial interval (the student hits head at least once at any point during the interval), and momentary time sampling (the student hits head during the last minute of the interval). Though research has found that some of these methods commonly produce underestimates (i.e., whole-interval) or overestimates (i.e., partial-interval) of target behavior, they can make it easier to collect data (LeBlanc, Raetz, Sellers, & Carr, 2015). This is useful because some data are better than no data, and the effort required to collect data is likely a common reason why data are not collected.
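Assuming we already had a second-by-second record of whether the behavior was occurring, the three time-sampling methods above differ only in how each interval is scored. A simplified sketch (the names are our own, and real scoring rules such as the 30 s pause allowance in the example are omitted):

```python
def score_intervals(occurring, interval_len):
    """Score whole-interval, partial-interval, and momentary time sampling.
    `occurring` is a list of booleans, one per second of the session."""
    whole, partial, momentary = [], [], []
    for start in range(0, len(occurring), interval_len):
        chunk = occurring[start:start + interval_len]
        whole.append(all(chunk))      # behavior throughout the entire interval
        partial.append(any(chunk))    # behavior at any point in the interval
        momentary.append(chunk[-1])   # behavior at the interval's final moment
    return whole, partial, momentary

# 10 s session scored in 5 s intervals; behavior occurs during seconds 0-3 only.
occ = [True, True, True, True, False, False, False, False, False, False]
print(score_intervals(occ, 5))
# ([False, False], [True, False], [False, False])
```

Note how, for these made-up data, whole-interval scoring (0 of 2 intervals) underestimates and partial-interval scoring (1 of 2 intervals) overestimates the true 40% of the session occupied by the behavior, mirroring the biases described above.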

Common data analyses in ABA

Visual Analysis and Baseline Logic. Visual analysis is the most common analytic method used to understand behavior-environment relations in behavior analytic research and practice (Cooper et al., 2020). Visual analysis leverages "baseline logic" to let behavior analysts make claims about behavior-environment relations (e.g., Catania, 2012; Cooper et al., 2020; Sidman, 1960), which rests on the assumption that the patterns of behavior being observed would remain the same if we did not intervene. In the complex, dynamic, and ever-changing world we live in, this assumption rarely (if ever) holds. But often it is good enough that we can make practical claims. And, for when this assumption is grossly violated, we provide analytic methods later in the chapter for continuing to make sense of the data one has. The top panel in Fig. 1 shows what we might predict to happen with behavior if we continued to collect data without any changes occurring to the environment around the individual. When a condition change occurs (vertical dashed lines), if the contingencies we manipulate have no functional relation to behavior, we would predict the pattern we currently observe to continue. The researcher or practitioner must also make a second assumption to leverage baseline logic to visually analyze the effect of an intervention. That assumption is that everything functionally related to the behavior(s) of interest remains unchanged when we introduce the intervention, except for the intervention itself. If this assumption holds, then any changes in responding that coincide with the introduction of the intervention might be attributable to that intervention. The second panel in Fig. 1 highlights this analysis. To the extent there is a "convincing" difference between what was observed and what was predicted, we might have identified a functional relation between our intervention and behavior.




Fig. 1  Example data demonstrating baseline logic of a reversal experimental design.

To better pinpoint how confident we can be about the effect of an intervention, researchers and practitioners need to consider several additional characteristics of the single-subject graph they are analyzing. First and foremost is the number of times the intervention effect has been replicated. Here, the idea is to collect data that provide evidence that whatever change
we observe in behavior is the result of the change in behavior-environment relations we implemented, rather than the result of some unknown, confounding variable. Replication can occur in at least two ways, the first being to increase the number of observations in each condition. For example, consider the leftmost and second-to-leftmost panels in Fig. 2. Obtaining three data points during baseline and three data points during intervention arguably demonstrates an intervention effect and might be all the time a researcher or practitioner has for some behaviors (e.g., severe behavior posing risk of serious injury). However, a more convincing demonstration of an intervention effect exists if we obtained 10 observations during the baseline and intervention conditions, as shown in the second-to-leftmost panel in Fig. 2. This is because we would expect any uncontrolled confounding variables that influenced the behavior to be changing randomly. Thus, the longer we stay in any one condition, the greater the probability that potential confounding variables will change. To the extent that the effect of an intervention persists across a greater number of sessions, the more confident we can be that it is our intervention that led to the change. To the extent that patterns of behavior change while the intervention remains unchanged, the less confident we can be that it was our intervention that led to the observed change in behavior. A more convincing and commonly used approach to replication is the repetition of switching between two conditions, such as baseline and intervention. For example, the second-to-rightmost and rightmost panels in Fig. 2 show reversal designs where the researcher or practitioner observed that behavior changed between the baseline and intervention conditions.
Generally speaking, researchers and practitioners can talk more confidently about the “controlling contingencies” the more repetitions of changing environmental conditions they conduct that lead to consistent changes in behavior. In Fig. 2, that would suggest the second-to-rightmost

Fig. 2  Example data demonstrating different types of replications that can be analyzed for within-subjects graphs.

panel would likely be judged as a more convincing demonstration of an intervention effect compared to the rightmost panel. The final step in visually analyzing single-subject designs is to analyze changes in the trend, level, and variability of the data across replications (see Cooper et al., 2020 for an in-depth discussion of these analyses). Fig. 3 highlights several patterns in our data that we would expect to see if the contingencies we manipulate have a functional relation with the behavioral dimension for which we are collecting data. The left panel in Fig. 3 shows a functional relation between the contingencies we manipulate and the trend in responding. Visual analysis allows us to identify increasing response trends, decreasing response trends, and no trend. The middle panel shows a functional relation between our contingencies and the level of responding. Visual analysis again allows us to confirm an increase in level, a decrease in level, or no change in level of responding. Lastly, the right panel in Fig. 3 shows a functional relation between the manipulated contingencies and variability in responding, with visual analysis offering three general interpretations: increased variability, decreased variability, or no change in variability. The data presented in Figs. 1–3 are relatively "neat" in that each of those decisions is seemingly straightforward because we designed them to be. But most data are unlikely to be as neat or to receive expert visual analysis. Relatedly, researchers publishing in the behavioral economics literature repeatedly find that humans do not always make the most optimal decisions (Ariely, 2010; Mazur, 1981) and that different behavior analysts can arrive at different conclusions when looking at the same dataset (Cox, 2021a; Cox & Brodhead, 2021; DeProspero & Cohen, 1979; Rader, Young, & Leaf, 2021).
Thus, researchers have sought to improve training in the visual analysis of single-subject data (Dowdy, Jessel, Saini, & Peltier, 2022) to increase reliability to that of experts (Kahng et al., 2010); or to create systematic approaches to analyzing single-subject time-series data to improve the reliability of the

Fig. 3  Example data demonstrating patterns of behavior-environment functional relations across the patterns of trend, level, and variability.

analysis (Fisher, Kelley, & Lomas, 2003; Lanovaz, Giannakakos, & Destras, 2020; Lanovaz & Hranchuk, 2021; Rader et al., 2021).

Quantitative Analyses of Behavior. Quantitative analyses of behavior are another common, though less popular, method for analyzing behavior-environment data in behavior analysis. Traditional visual analytic approaches often seek to identify the relation between one, or perhaps a few, levels of an independent variable (e.g., presence of an intervention; an FI 60 s schedule) and a dependent variable (e.g., rate of responding). In contrast, quantitative analyses seek to identify the mathematical relationship between a dependent variable (e.g., response rates) and all values that the independent variable might practically take, such as five different treatment fidelity levels ranging from 10% to 100% or five different FI schedules ranging from FI 5 s to FI 1000 s. The most common quantitative analyses of behavior are the matching law (Baum, 1974; Herrnstein, 1970), discounting (Mazur, 1987; Rachlin, 2006), and demand (Gilroy, Kaplan, Reed, Koffarnus, & Hantula, 2018; Hursh & Silberberg, 2008). Each analysis exposes research participants to different levels of the independent variable and determines how well specific equations describe the resulting pattern of behavior. A full review of the behaviors that go into quantitative analyses of behavior is outside the scope of this chapter; however, interested readers are referred to Dallery and Soto (2013). Additional resources include the Society for the Quantitative Analyses of Behavior (SQAB, n.d.) and a recent issue of the journal Perspectives on Behavior Science focused on how this analytic approach is being used to good effect in applied settings (Jarmolowicz, Greer, Killeen, & Huskinson, 2022).
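To illustrate the logic of fitting an equation to behavior, consider Mazur's (1987) hyperbolic discounting model, V = A / (1 + kD): fitting it means finding the k that best describes a participant's indifference points. The sketch below uses made-up data and a deliberately naive grid search; a real analysis would use a proper optimizer and the procedures in the cited sources:

```python
def hyperbolic_value(amount, k, delay):
    """Mazur's (1987) hyperbolic discounting: V = A / (1 + kD)."""
    return amount / (1 + k * delay)

def fit_k(delays, values, amount=100.0):
    """Return the k from a coarse grid that minimizes summed squared error."""
    candidates = [i / 1000 for i in range(1, 1001)]  # k from 0.001 to 1.000
    def sse(k):
        return sum((v - hyperbolic_value(amount, k, d)) ** 2
                   for d, v in zip(delays, values))
    return min(candidates, key=sse)

# Hypothetical indifference points for a $100 reward at increasing delays (days):
delays = [1, 7, 30, 90, 180]
values = [95.0, 78.0, 50.0, 25.0, 15.0]
best_k = fit_k(delays, values)
print(best_k)  # a small k; larger k would mean steeper discounting
```

The same exposure-then-fit logic applies to the matching law and demand analyses named above, with different equations and independent variables.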

Alternative data-related behaviors

The previous two sections of this chapter discussed data types commonly found in behavior analytic journals, how they are typically recorded, and how they are typically analyzed. Broadly, these common approaches can be categorized as instances of collecting continuous (e.g., rate of responding, percentage of correct trials) or discrete numerical measures (e.g., count, frequency). Data recording is largely time-based (e.g., whole-interval recording) or event-based (e.g., discrete trials), and visual analysis is the primary method for analyzing data. In this final section, we look at data types, data recording methods, and data analytic approaches that are common in other scientific fields and that behavior
analysts might find useful as we expand into new topics and new ways of thinking about functional relations between behavior and the environment. We also review recent advances in technology as well as how behavior analysts who experiment with these less common approaches may benefit.

Additional data types

Knowing about additional data types might be useful because it will likely give you a better understanding of what you are actually doing. For example, the classic reversal design graph (e.g., Figs. 1 and 2) contains numeric data types representing some behavioral dimension, date data types to capture when the session occurred (x-axis), and categorical data types meant to provide a one-word tact of the prevailing contingencies that are outlined in detail in the corresponding manuscript or behavior plan (i.e., "baseline" and "intervention"). Each of these data types carries certain assumptions and properties that determine how it can and cannot be used for analytic purposes. Knowing what you're working with will help identify what is possible analytically. Other data types are useful for giving you a better understanding of what you are not doing. As noted above, different data types carry different assumptions and properties that restrict what you can do with the data. For example, categorical data (e.g., "baseline" vs. "intervention") cannot be arranged quantitatively. There is nothing inherent about the labels that makes one larger or smaller than the other. As a result, a correlational analysis using "baseline" and "intervention" as one input and response rate as the second input would be nonsensical. You can convert categorical labels to numeric variables, though this assumes the variable can be described using cardinal or ordinal relationships (e.g., Jech, 1978), which opens up many additional types of analytics. However, doing so adds a new dimension to your data and raises the question of whether the traditional time-series plot is the best way to display data that now vary quantitatively along three dimensions (e.g., behavior, condition, and time).
Converting categorical labels to numeric values is a specific type of what's referred to as "data transformation." Transforming your data can be very helpful if there is a question about your dataset that cannot be answered with the data types you currently have. And understanding what data types you need to answer your question will help you determine your next steps. Perhaps you can transform your data to conduct the analytics you want (e.g., converting baseline/intervention to 0/1 to denote the absence/presence of DRA). Or perhaps you need to collect new data types moving forward
(e.g., a direct measure of the time per day spent in each DRA session). Or perhaps you need to do something else. Knowing how your current data types restrict what is possible allows you to better understand what your data cannot tell you. Finally, understanding that different data types exist and are being fruitfully used in other sciences might help you think about behavior-environment relations in novel and unique ways. For example, researchers in other domains have become experts in describing the topography of verbal behavior quantitatively (Bird, Klein, & Loper, 2009) and using the resulting numerical embeddings to program robots to functionally (verbally?) interact with humans (Dale, 2016; Raj, 2019). Similarly, depending on the function of the researcher's behavior (Cox, 2019), a method termed one-hot encoding allows data analysts to convert all categorical variables in a dataset into 1s and 0s for quantitative analysis (Li, So, Xu, & Jiang, 2018; though not without risk—Cerda & Varoquaux, 2022; Rodriguez, Bautista, Gonzalez, & Escalera, 2018). Understanding what is possible when behavior-environment relations are analyzed using different types of data, and in different ways, will add variability to the scientific practices of behavior analysts. Though any one novel approach to analyzing behavior-environment relations has an uncertain payoff, adding variability gives selective processes something to work on and is likely to benefit the field on average and in the long run (Darwin, 1859; Skinner, 1981).
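A minimal sketch of the one-hot encoding mentioned above, using hypothetical condition labels (and ignoring the cited caveats about when such encodings can mislead):

```python
def one_hot(labels):
    """Convert categorical labels into one indicator column of 1s/0s per category."""
    categories = sorted(set(labels))
    return {cat: [1 if label == cat else 0 for label in labels]
            for cat in categories}

conditions = ["baseline", "baseline", "intervention", "baseline", "intervention"]
encoded = one_hot(conditions)
print(encoded)
# {'baseline': [1, 1, 0, 1, 0], 'intervention': [0, 0, 1, 0, 1]}
```

Each session's condition is now a numeric variable that can sit alongside response rates in quantitative analyses, at the cost of the extra dimensions discussed above.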

Nonnumeric data types & their uses

Nonnumeric Datum. Categorical data are commonly recorded by behavior analysts. Categorical data types are used when the data collected cannot be easily quantified along a single dimension. For example, data on response rates during "baseline" and "intervention" are often analyzed categorically (i.e., nominally via labels) rather than quantitatively (e.g., plotted along a quantitative dimension such as rate of reinforcer delivery). Other examples of categorical data include race, ethnicity, gender, location where a client receives services, and diagnostic labels. Analytically, categorical variables are most often analyzed using contingency tables (Zuvić-Butorac, 2006) and can help behavior analysts identify the conditions under which certain interventions are successful before more rigorous experimental analyses of those variables are conducted. The date-time data type is another nonnumeric data type likely familiar to behavior analysts. Date-time data types store the calendar date and sometimes the time at which the data were collected. Date-time data can



Data recording and analysis

231

technically be converted to a number. For example, you might convert "the session conducted on April 14, 2021, from 10:47–11:02 AM" to "Session 3". Note, though, that "Session 3" references a relative time point: the third session after the onset of the research study. What this highlights is that the reference point often matters. And, in some instances, what matters is reference to a specific date or time. For example, researchers studying the behavioral patterns of cyber attackers might be interested in the time of day such attacks most frequently occur, economists are often interested in seasonal variations in consumer purchase patterns, and governments the world over were interested in people's responses to lockdown policies relative to the date and time they were implemented (Wang et al., 2022). Similarly for behavior analysts, the time of day might be very important if traffic volume (Myers, Zane, Van Houten, & Francisco, 2022) or medication regimen (Jarmolowicz et al., 2020) are relevant to the analysis. Seasonal data are critical to understand when working in consumer behavior analysis (Foxall, 2001, 2021) and when patient outcomes are analyzed relative to the onset of treatment (e.g., Linstead et al., 2017). Strings are a third nonnumeric data type with increasing use outside of behavior analysis (Bird et al., 2009). Strings are data types composed of alphanumeric characters (i.e., text). In many fields, strings are converted to numbers for practical use (Bird et al., 2009). For example, converting strings to numbers powers chatbots (Dale, 2016; Raj, 2019), speech-to-text software (Ballard, Etter, Shen, Monroe, & Tan, 2019; Hair et al., 2020; Malik, Malik, Mehmood, & Makhdoom, 2021), popular search engines such as Google (Rogers, 2002), and virtual assistant technologies such as Alexa or the Echo (Kim, 2018).
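A toy illustration of turning strings into numbers is a bag-of-words count (the utterances here are hypothetical, and real systems rely on far richer embeddings than simple counts):

```python
# Minimal bag-of-words: convert utterance strings into word-count vectors.
# The utterances are hypothetical; embedding methods (see Bird et al., 2009)
# produce far richer numerical representations than raw counts.
def bag_of_words(utterances):
    vocab = sorted({w for u in utterances for w in u.lower().split()})
    return vocab, [[u.lower().split().count(w) for w in vocab]
                   for u in utterances]

vocab, counts = bag_of_words(["want cookie", "want break please"])
print(vocab)   # → ['break', 'cookie', 'please', 'want']
print(counts)  # → [[0, 1, 0, 1], [1, 0, 1, 1]]
```

Once verbal responses are vectors of numbers, they can be compared, correlated, and modeled like any other quantitative data.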
Though publications with these technologies are not common in behavior analysis, there are examples of counting the number of times specific words are emitted (Dounavi, 2014; Petursdottir, Carr, & Michael, 2005), measuring the proportion of times a verbal response is emitted following other verbal stimuli (Carp & Petursdottir, 2015; Partington & Bailey, 1993), and quantitatively analyzing the behavior analytic published literature (Blair, Shawler, Debacher, Harper, & Dorsey, 2018; Dixon, Reed, Smith, Belisle, & Jackson, 2015). Nevertheless, the technologies described above highlight the utility of more sophisticated handling of string data that behavior analysts are beginning to explore (Sosine & Cox, 2022). Booleans are a fourth data type that behavior analysts likely already use but without taking full advantage of their special properties. Boolean data have a "True" or "False" value, often represented as 1 or 0, respectively. Above, we used the example of presence/absence of the intervention being


coded as a 1 or a 0, an example of a Boolean data type. Other examples might be whether the client had access to the maintaining reinforcer before session (e.g., open or closed economy; Mathis, Johnson, & Collier, 1996; Yoshino & Reed, 2008), whether the client's preferred RBT is running the session (Keijsers, Schaap, & Hoogduin, 2000), whether the participant met the step count goal for the day (Kurti & Dallery, 2013), or whether the participant's CO levels are below the threshold needed to access a monetary reward (Jarvis & Dallery, 2017). Booleans are particularly useful as independent variables when we do not have the level of detail in our collected data to conduct parametric analyses; and Booleans are useful as dependent variables when predicting the next response on a discrete trial (e.g., Cox, 2021b) or whether an intervention will be effective (e.g., Cox, Garcia-Romeu, & Johnson, 2021).

Different Types of Aggregates. When we collect and talk about data, we often refer to individual observations or, perhaps, the aggregate of observations across a session. When you store that data in a specific cell within a program like Excel or Numbers, you tell your computer that a particular physical location in your computer's RAM should hold the number 24, or the string "baseline", or the datetime "10/17/1979 at 7:04:00 PM". When you do this, you're creating a physical object. We can manipulate that object by, for example, dividing 24 by 3, counting the number of characters in the string "baseline", or identifying whether our datetime is before or after another datetime. The key point here is that your data are physical objects on your computer, and those objects can be manipulated. We can also write the above manipulations as equations. For example, we can write: 24/3 = x, length("baseline") = y, and "10/18/1979" > "10/17/1979" = z. In each of these situations, we can use a little bit of math or logic to solve for x, y, and z to get the values x = 8, y = 8, and z = TRUE.
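In Python, for instance, these three equations can be solved directly (note that Python spells length() as len()):

```python
from datetime import date

# Solving the three equations above by manipulating stored objects.
x = 24 / 3                                    # dividing a number
y = len("baseline")                           # counting characters in a string
z = date(1979, 10, 18) > date(1979, 10, 17)   # comparing two datetimes

print(x, y, z)  # → 8.0 8 True
```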
What we are highlighting is that the object you stored on your computer (i.e., your data) can be manipulated to give us an answer. Simple, right? Let's go one step further. If you remember back to your middle school or high school days, you might recall that the x, y, or z variable can move all around the equation. Maybe we're asked to: solve for x when 2x = 8; solve for y when length(y) = 3 and y starts with "th"; or find all dates z where z ≥ "10/17/1979" is TRUE. Feel free to work these out, but what we are highlighting is that we might be curious what it would take for our data to go from one value to another. For example, "What rate of behavior change is needed to reduce challenging behavior from 24 responses per min to 3 responses per min in 8 days?"; "What is the longest word our client can




accurately spell without prompts?"; or "How many sessions have we conducted since the medication change?" All of this is often done "under the hood" with computer programs. And all of it involves the simple steps of finding an object on your computer, defining a logical relation you want to derive, and letting the computer solve it for you and return an answer. Still simple, right? Up to now, we have talked about objects as a single physical thing that can be manipulated (e.g., the number 24, the string "baseline"). But objects can take on many different forms. One type of object is called an array or a list. An array is a set of data, often in a particular sequence, that contains a single data type. For example, Fig. 4 shows one way we might arrange data from a skill acquisition program in Excel so that we can easily graph it. Each column in Fig. 4 would be one instance of an array. Column A is an array of numbers, Column B is an array of strings, and so forth. Lists are the same idea except that the items contained in the set of data can be of multiple data types (e.g., numbers and strings). Fig. 4 also highlights how we can stack a bunch of arrays and lists together, side-by-side. Other terms for this way of structuring data are a dataframe, matrix, or tabular data. In the ideal format, each column contains a set of data of a single type (i.e., an array) and each row references a single

Fig. 4  Example dataset often described as structured, tabular, or tidy.


observation at a single point in time with the corresponding value of each data element (e.g., row 2 references the value of all data elements during session 1). Note that Fig. 4 uses the data types of numeric data (Responses per Min, Trials per Min, and FR Schedule), ordinal (Session), categorical (Variable, RBT, Setting), and Boolean (Condition, Pre-Session SR+ Contact). Arrays, lists, and dataframes are useful because each can be represented as a single object on your computer. You likely have already played with data in this way. For example, one common question is the relationship between two arrays. If you have ever calculated the correlation between two columns, you've asked the question, "What is the relationship between column 1 and column 2?", which in computer talk is asking for the relation between two arrays: column 1 and column 2. The correlation cannot be calculated for each datum within each array but can only be calculated between the sets of data (i.e., the two arrays) as complete objects. Dataframes can be represented as a single object, too. And we can also ask questions about the relationship between two dataframes (e.g., "Do we see similar intervention effects across different programs for the same patient while controlling for other potentially functionally relevant information?"). Importantly, thinking about our data this way allows us to ask about linear relationships (e.g., simple correlations) as well as more complex, nonlinear relationships very easily by asking computers to do the hard work for us. We can also ask questions about how we might compress or expand our dataset to identify functional relations between behavior and the surrounding environment. For example, we may have data from many different variables (Fig. 4). But maybe it is hard work to collect all these data and we want to identify only the variables that are important to predicting responses per min.
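A naive sketch of this idea, assuming a small hypothetical dataset: correlate each candidate column (array) with the responses-per-min column and keep only the strong predictors.

```python
import math

def pearson(xs, ys):
    """Pearson correlation between two equal-length arrays of numbers."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical sessions: the outcome plus three candidate predictor columns.
responses = [2, 4, 5, 7, 9]                   # responses per min
candidates = {
    "fr_schedule":    [5, 4, 4, 2, 1],        # strongly (negatively) related
    "trials_per_min": [1, 2, 2, 3, 4],        # strongly related
    "room_number":    [3, 1, 4, 1, 5],        # essentially unrelated
}

# Keep only variables whose |r| with responses per min exceeds 0.8.
keep = {name: round(pearson(vals, responses), 2)
        for name, vals in candidates.items()
        if abs(pearson(vals, responses)) > 0.8}
print(keep)  # → {'fr_schedule': -0.98, 'trials_per_min': 0.99}
```

Note that the correlation is computed between whole arrays as objects, not datum by datum.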
Here, dimensionality reduction techniques (Reddy et al., 2020) allow us to determine which variables are most predictive and which variables we can likely drop while still maintaining our ability to predict responses per min. Similarly, maybe we are having a hard time predicting responses per min from a complex arrangement of variables. Dimensionality expansion techniques such as one-hot encoding or feature splitting give us more information that might improve our predictive capabilities (Li et al., 2018; Rodriguez et al., 2018). Lastly, it's possible that we're looking at our data in the wrong way and that changing the way we view it will allow us to readily identify functional relationships between behavior and the environment. As noted at the beginning of the chapter, only by arranging the data into a specific visual stimulus arrangement was Skinner able to easily "see" the functional




relations between behavior and the environment for the simple schedules of reinforcement he was studying. As another example from behavior analysis, log-transformations of data from matching law experiments turn curved lines into straight lines, which makes it much easier to talk about an organism's sensitivity to changing schedules of reinforcement (Baum, 1974). Such transformations are examples of array transformations, and we can similarly play with dataframe transformations to uncover interesting functional relationships in our datasets (Grus, 2015; Strang, 2003).

For readers interested in conducting these more complex, multiple-control analyses of behavior-environment relations, an immediate question is where exactly the data for these analyses will come from. It can be a lot of work to get staff working directly with a client to collect data consistently and reliably on just the behaviors we target across the many different interventions that comprise a client's ABA program. How are we to get the types of datasets where these analyses would be useful? Here we have the good fortune of living during a time when technology has made automated data collection on both behavior and the environment much easier than in the past. In this final section, we discuss some of these technologies that allow for the automated collection of behavior and environment data spanning the many data types listed above.
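As a concrete sketch of the matching-law array transformation mentioned above (the response and reinforcer counts here are hypothetical): log-transform the ratios, then fit a straight line whose slope estimates sensitivity and whose intercept estimates bias, as in Baum (1974).

```python
import math

# Hypothetical response (B1, B2) and reinforcer (R1, R2) counts across
# four conditions of a concurrent-schedules experiment.
B = [(10, 90), (30, 70), (50, 50), (80, 20)]
R = [(5, 95), (25, 75), (50, 50), (90, 10)]

# Array transformation: log response ratios vs. log reinforcer ratios.
x = [math.log10(r1 / r2) for r1, r2 in R]
y = [math.log10(b1 / b2) for b1, b2 in B]

# Least-squares line: slope estimates sensitivity, intercept estimates bias.
n = len(x)
mx, my = sum(x) / n, sum(y) / n
slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
bias = my - slope * mx
print(round(slope, 2), round(bias, 2))  # → 0.7 -0.04
```

A slope below 1.0, as here, would indicate undermatching in Baum's (1974) terms.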

Technology for collecting data on behavior

The most ubiquitous technology for automatically collecting data on behavior is the smartphone. Smartphones are estimated to be owned and used by 4 out of 5 people worldwide (Pew Research Center, 2021) and contain a variety of tools and technologies to collect data on a range of behaviors. For example, (a) accelerometers track the direction and rate of movement through space; (b) Global Positioning System (GPS) receivers track your geolocation, which can be compared to other datasets to identify where you go, what time you go there, and how long you stay; (c) smartphone cameras track what you look at on the screen and your facial expressions while looking at it; (d) the apps you use log and share your Internet searches, which can be leveraged as a form of preference assessment; (e) applications track and share information regarding your finances and spending patterns, which is another type of preference assessment; and (f) social media applications track who you interact with, what you say, and the verbal topics you are most likely to interact with—often to the advantage of the social media app (e.g., Cambridge Analytica; The Great Hack). For more complete reviews, see Crowley-Koch and Van Houten (2013) and Bak et al. (2021).


Wearable devices are another type of technology that collects data on behavior (for a comprehensive survey of wearable technology, see Ometov et al., 2021). Step trackers such as FitBits, Garmin wrist watches, or Apple Watches automatically collect data on how much and how fast people move throughout their day. Smart rings such as the Oura or Movano Ring automatically record heart rate, sleep patterns, body temperature, and blood oxygen content. Smart shirts such as the Hexoskin automatically collect data on heart rate, respiratory rate, activity levels, and movement of the upper body and limbs; and smart glasses can automatically track the eye movements of the wearer and of those with whom they talk. Together, these data are being used to predict challenging behaviors in clients (Simons, Koordeman, de Looff, & Otten, 2021); detect and classify self-injurious behavior and stereotypy (Cantin-Garside et al., 2020; Fasching et al., 2013); detect the extent and duration of eye contact in children (Ye et al., 2012); monitor patient health in real-time (Ed-daoudy & Maalmi, 2018; Jones et al., 2011; Nair, Shetty, & Shetty, 2018) for just-in-time adaptive interventions (Graham & Bond, 2015; Hardeman, Houghton, Lane, Jones, & Naughton, 2019; Rabbi, Pfammatter, Zhang, Spring, & Choudhury, 2015; Van Dantzig, Bulut, Krans, Van der Lans, & De Ruyter, 2018; Van Dantzig, Geleijnse, & Halteren, 2013); and monitor and improve patient engagement and adherence to treatment regimens (Davenport & Kalakota, 2019). Standalone technologies are also used to automatically record data on behavior. For example, Crowley-Koch and Van Houten (2013) show how Radio Frequency Identification (RFID) tags can be used for preference assessments under free operant arrangements.
Breathalyzers and CO meters have been used as proxy measures for the amount of alcohol consumed or cigarettes smoked for use within contingency management programs (Jarvis & Dallery, 2017; Kaplan & Koffarnus, 2019), while smart refrigerators can track food inventory, document food consumption patterns, and prompt meal recipes (Prapulla, Shobha, & Thanuja, 2015). For researchers and practitioners interested in verbal behavior, automatic speech recognition (ASR) devices can process continuous speech in real-time using machine learning algorithms (Malik et al., 2021). Common examples include Apple's Siri, Amazon's Alexa, and Google's Assistant. Such automated recording of verbal behavior is being used to improve speech intelligibility (Ballard et al., 2019; Hair et al., 2020), with easy-to-imagine future use cases of providing consistent automated feedback on pronunciation and automated reinforcement to shape vocal-verbal behavior.




Finally, the COVID-19 pandemic increased the amount of telehealth and other forms of remote interaction. Video recordings of sessions create a permanent product that can be used for collecting data at a later point in time. This makes remote data recording possible when in-person interaction is not practical and can also expand opportunities for traditional IOA and treatment fidelity checks. In addition to making current data recording methods more accessible, video recording sessions might allow for the automatic collection of data via computer vision, which aims to identify, label, and analyze the presence and movement of objects in the environment in pictures or video (Khan, Laghari, & Awan, 2021). For behavior analysts, computer vision can be used to track behavior-environment relations via object and action detection (Trevisan, Becerra, Benitez, Higbee, & Gois, 2019).

Technology for collecting data on the environment

Technological advances notwithstanding, data on behavior without data on the surrounding environment provide an incomplete picture when taking a functional approach to understanding behavior. Just like the tools that allow for the automated detection of behavior, many tools exist that allow behavior analytic researchers and practitioners to automate the collection of data on the environment. Smartphones and the Internet again offer opportunities to collect many kinds of data on the environment around the individual, based on the behavior the researcher is interested in targeting. For example, researchers interested in verbal behavior can leverage open-source tools to collect and analyze the verbal environment surrounding social media posts (Twitter-scraper, 2020; Zhou, 2020). In turn, such data can be used to diagnose and intervene on behavioral health challenges such as suicide or other low-frequency, high-intensity behaviors (Carson et al., 2019; Ji et al., 2019; Walsh, Ribeiro, & Franklin, 2017, 2018). Similarly, researchers interested in the relation between world events and verbal behavior exchanges within and between large groups of individuals have much readily accessible descriptive data at their fingertips (Javed, 2022). Considering that Internet users worldwide spend approximately 2.5 h per day on social media (Statista, 2022), 3.5 h per day reading and responding to email (Adobe, 2019), and an average of 5.7 h per day on smartphone devices (Provision Living, 2019), behavior within these online verbal environments comprises an increasingly large share of total human behavioral allocation.


GPS data can also provide relevant information on the environment surrounding a behavior of interest. For example, GPS data have been used to study physical activity and the surrounding environment (Mag, Titze, Oja, Jones, & Ogilvie, 2011), the interactions between built and natural landscapes and behavior (Handcock et al., 2009), contact with psychosocial stressors that influence substance use (Kwan et al., 2019), and, integrated with electronic diaries, to monitor contact with drug availability and psychosocial stressors in polydrug-abusing participants (Vahabzadeh, Mezghanni, Lin, Epstein, & Preston, 2010). Understanding where, for how long, and with what environments people are interacting has the potential to alleviate challenges in working to change behavior with individuals whom behavior analysts are unable to observe directly for 30+ hours per week. Researchers have also taken advantage of sensors, smartphone apps, and information and communication technologies (ICT) to collect environmental data that could be used analytically. For example, Gouveia, Fonseca, Camara, and Ferreira (2004) discuss how ICTs can be used to encourage citizen scientists to collect and communicate data for environmental monitoring. Finally, the speech-to-text software and computer vision technologies mentioned above are not restricted to target behaviors only. Because they capture all sound waves that hit the microphone and all objects in the visual field, those same technologies can be used to automate data collection on the environment antecedent and consequent to the behavior of interest.

Chapter summary

Behavior analysts have a long and rich history of collecting data on behavior-environment relations and analyzing those relations to identify patterns that allow for the description, prediction, and control of behavior. Throughout the past century, behavior analytic researchers and practitioners have witnessed many variations in how data are collected, what data are collected, and the manner in which data are visually displayed to communicate functional behavior-environment relations to other researchers and practitioners. Whereas automated data collection via operant chambers is often the status quo in the experimental analysis of behavior, most applied behavior analytic settings use manual data collection by a human observer who is physically present with the individual engaging in a behavior of interest. Across experimental and applied settings, numeric data types are most often collected (e.g., rate, duration, percent correct), though less common data types have proven useful (e.g., analysis of string data types




for ABC collection). Understanding that other data types exist and that researchers have developed practical ways of analyzing those data types opens the door for behavior analysts to get more creative in what types of data they collect and how they analyze behavior-environment relations. Further, advances in technology are making the automated collection of large datasets on behavior and the environment increasingly accessible. Researchers willing to learn these tools will be afforded datasets much richer and bigger than were accessible in the past. It is in this final area that researchers who learn to think about columns and datasets as objects that can be related to other objects will have the best opportunity to take advantage of advanced analytics such as machine learning and artificial intelligence.

Conflict of interest

There are no conflicts of interest to report.

Financial support

The authors did not receive funding to complete this project.

References

Adobe. (2019). 2019 Adobe email usage study. Retrieved from: https://www.slideshare.net/adobe/2019-adobe-email-usage-study.

Ariely, D. (2010). Predictably irrational: The hidden forces that shape our decisions. Harper Perennial, ISBN: 9780061353246.

Ayllon, T., & Michael, J. (1959). The psychiatric nurse as a behavioral engineer. Journal of the Experimental Analysis of Behavior, 2(4), 323–334. https://doi.org/10.1901/jeab.1959.2-323.

Bak, M. Y. S., Plavnick, J. B., Dueñas, A. D., Brodhead, M. T., Avendaño, S. M., Wawrzonek, A. J., … Oteto, N. (2021). The use of automated data collection in applied behavior analytic research: A systematic review. Behavior Analysis: Research and Practice, 21(4), 376–405. https://doi.org/10.1037/bar0000228.

Ballard, K. J., Etter, N. M., Shen, S., Monroe, P., & Tan, C. T. (2019). Feasibility of automatic speech recognition for providing feedback during tablet-based treatment for apraxia of speech plus aphasia. American Journal of Speech-Language Pathology, 28, 818–834.

Barnes-Holmes, D., O'Hora, D., Roche, B., Hayes, S. C., Bissett, R. T., & Lyddy, F. (2001). Understanding and verbal regulation. In S. C. Hayes, D. Barnes-Holmes, & B. Roche (Eds.), Relational frame theory: A post-Skinnerian account of human language and cognition (pp. 106–117). New York, NY: Kluwer Academic Publishers.

Baum, W. M. (1974). On two types of deviation from the matching law: Bias and undermatching. Journal of the Experimental Analysis of Behavior, 22(1), 231–242. https://doi.org/10.1901/jeab.1974.22-231.

Bird, S., Klein, E., & Loper, E. (2009). Natural language processing with Python. ISBN: 9780596516499.


Blair, B. J., Shawler, L. A., Debacher, E. A., Harper, J. M., & Dorsey, M. F. (2018). Ranking graduate programs based on research productivity of faculty: A replication and extension. Education and Treatment of Children, 41(3), 299–318.

Bondy, J. A., & Murty, U. S. R. (1976). Graph theory with applications. The Macmillan Press Ltd. http://www.maths.lse.ac.uk/Personal/jozef/LTCC/Graph_Theory_Bondy_Murty.pdf.

Cantin-Garside, K. D., Kong, Z., White, S. W., Antezana, L., Kim, S., & Nussbaum, M. A. (2020). Detecting and classifying self-injurious behavior in autism spectrum disorder using machine learning techniques. Journal of Autism and Developmental Disorders, 50(11), 4039–4052. https://doi.org/10.1007/s10803-020-04463-x.

Carp, C. L., & Petursdottir, A. I. (2015). Intraverbal naming and equivalence class formation in children. Journal of the Experimental Analysis of Behavior, 104(3), 223–240. https://doi.org/10.1002/jeab.183.

Carson, N. J., Mullin, B., Sanzhez, M. J., Lu, F., Yang, K., Menezes, M., & Le Cook, B. (2019). Identification of suicidal behavior among psychiatrically hospitalized adolescents using natural language processing and machine learning of electronic health records. PLoS One, 14, e0211116. https://doi.org/10.1371/journal.pone.0211116.

Catania, A. C. (2012). Learning (5th edition). Sloan Publishing, ISBN: 1-59738-023-7.

Cerda, P., & Varoquaux, G. (2022). Encoding high-cardinality string categorical variables. IEEE Transactions on Knowledge and Data Engineering, 34(3), 1164–1176. https://doi.org/10.1109/TKDE.2020.2992529.

Collier-Meek, M. A., Sanetti, L. M., Gould, K., & Pereira, B. (2021). An exploratory comparison of three treatment fidelity assessment methods: Time sampling, event recording, and post-observation checklist. Journal of Educational and Psychological Consultation, 31(3), 334–359.

Cooper, J. O., Heron, T. E., & Heward, W. L. (2020). Applied behavior analysis (3rd edition). Pearson.

Cox, D. J. (2019). The many functions of quantitative modeling. Computational Brain & Behavior, 2(3–4), 166–169. https://doi.org/10.1007/s42113-019-00048-9.

Cox, D. J. (2021a). Descriptive and normative ethical behavior appear to be functionally distinct. Journal of Applied Behavior Analysis, 54(1), 168–191. https://doi.org/10.1002/jaba.761.

Cox, D. J. (2021b). Reinforcement learning (of the machine learning kind) to predict the next response. In Society for the Quantitative Analyses of Behavior 43rd annual meeting. Virtual conference.

Cox, D. J., & Brodhead, M. T. (2021). A proof of concept analysis of decision-making with time-series data. The Psychological Record, 71, 349–366. https://doi.org/10.1007/s40732-020-00451-w.

Cox, D. J., Garcia-Romeu, A., & Johnson, M. W. (2021). Predicting changes in substance use following psychedelic experiences: Natural language processing of psychedelic session narratives. American Journal of Drug & Alcohol Abuse, 47(4), 444–454. https://doi.org/10.1080/00952990.2021.1910830.

Crowley-Koch, B., & Van Houten, R. (2013). Automated measurement in applied behavior analysis: A review. Behavioral Interventions, 28(3), 225–240. https://doi.org/10.1002/bin.1366.

Dale, R. (2016). The return of the chatbots. Natural Language Engineering, 22(5), 811–817. https://doi.org/10.1017/S1351324916000243.

Dallery, J., & Soto, P. L. (2013). Quantitative description of behavior-environment relations. In G. J. Madden (Ed.), APA handbook of behavior analysis: Vol 1. Methods and principles (pp. 219–249). The American Psychological Association. https://doi.org/10.1037/13937-010.

Darwin, C. (1859). On the origin of species. John Murray.

Davenport, T., & Kalakota, R. (2019). The potential for artificial intelligence in healthcare. Future Healthcare Journal, 6, 94–98. https://doi.org/10.7861/futurehosp.6-2-94.




DeProspero, A., & Cohen, S. (1979). Inconsistent visual analyses of intrasubject data. Journal of Applied Behavior Analysis, 12, 573–579. https://doi.org/10.1901/jaba.1979.12-573.

Dixon, M. R., Reed, D. D., Smith, T., Belisle, J., & Jackson, R. E. (2015). Research rankings of behavior analytic graduate training programs and their faculty. Behavior Analysis in Practice, 8(1), 7–15. https://doi.org/10.1007/s40617-015-0057-0.

Dounavi, K. (2014). Tact training versus bidirectional intraverbal training in teaching a foreign language. Journal of Applied Behavior Analysis, 47, 165–170. https://doi.org/10.1002/jaba.86.

Dowdy, A., Jessel, J., Saini, V., & Peltier, C. (2022). Structured visual analysis of single-case experimental design data: Developments and technological advancements. Journal of Applied Behavior Analysis, 55(2), 451–462. https://doi.org/10.1002/jaba.899.

Ed-daoudy, A., & Maalmi, K. (2018). Application of machine learning model on streaming health data event in real-time to predict health status using spark. In International symposium on advanced electrical and communication technologies (ISAECT), Rabat, Morocco (pp. 1–4).

Fasching, J., Walczak, N., Toczyski, W. D., Cullen, K., Sapiro, G., … Papanikolopoulos, N. (2013). Assisted labeling of motor stereotypies in video. In Poster presented at the 60th meeting of the American Academy of Child and Adolescent Psychiatry.

Fienup, D. M., & Carr, J. E. (2021). The use of performance criteria for determining "mastery" in discrete-trial instruction: A call for research. Behavioral Interventions, 1–9. https://doi.org/10.1002/bin.1827.

Fisher, W. W., Kelley, M. E., & Lomas, J. E. (2003). Visual aids and structured criteria for improving visual inspection and interpretation of single-case designs. Journal of Applied Behavior Analysis, 36, 387–406. https://doi.org/10.1901/jaba.2003.36-387.

Foxall, G. R. (2001). Foundations of consumer behaviour analysis. Marketing Theory, 1(2), 165–199. https://doi.org/10.1177/147059310100100202.

Foxall, G. R. (2021). The Routledge companion to consumer behavior analysis. Routledge, ISBN: 9781032242460.

Gilroy, S. P., Kaplan, B. A., Reed, D. D., Koffarnus, M. N., & Hantula, D. A. (2018). The demand curve analyzer: Behavioral economic software for applied research. Journal of the Experimental Analysis of Behavior, 110(3), 553–568. https://doi.org/10.1002/jeab.479.

Gouveia, C., Fonseca, A., Camara, A., & Ferreira, F. (2004). Promoting the use of environmental data collected by concerned citizens through information and communication technologies. Journal of Environmental Management, 71(2), 135–154. https://doi.org/10.1016/j.jenvman.2004.01.009.

Graham, T. J., & Bond, D. S. (2015). Behavioral response to a just-in-time adaptive intervention (JITAI) to reduce sedentary behavior in obese adults: Implications for JITAI optimization. Health Psychology, 34(Suppl), 1261–1267. https://psycnet.apa.org/doi/10.1037/hea0000304.

Greer, R. D., & McDonough, S. H. (1999). Is the learn unit a fundamental measure of pedagogy? The Behavior Analyst, 22(1), 5–16. https://doi.org/10.1007/BF03391973.

Grus, J. (2015). Data science from scratch: First principles with Python. O'Reilly Media, Inc.

Hair, A., Markoulli, C., Monroe, P., McKechnie, J., Ballard, K., Ahmed, B., & Gutierrez-Osuna, R. (2020). Preliminary results from a longitudinal study of a tablet-based speech therapy game [paper presentation]. In Computer human-interaction conference on human factors in computing, Honolulu, Hawaii (virtual). https://psi.engr.tamu.edu/wp-content/uploads/2020/02/hair2020chi.pdf.

Handcock, R. N., Swain, D. L., Bishop-Hurley, G. J., Patison, K. P., Wark, T., Valencia, P., … O'Neill, C. J. (2009). Monitoring animal behaviour and environmental interactions using wireless sensor networks, GPS collars and satellite remote sensing. Sensors, 9(5), 3586–3603. https://doi.org/10.3390/s90503586.

Hardeman, W., Houghton, J., Lane, K., Jones, A., & Naughton, F. (2019). A systematic review of just-in-time adaptive interventions (JITAIs) to promote physical activity. International Journal of Behavioral Nutrition and Physical Activity, 16, 31–51. https://doi.org/10.1186/s12966-019-0792-7.

242

Applied behavior analysis advanced guidebook

Harrop, A., & Daniels, M. (1986). Methods of time sampling: A reappraisal of momentary time sampling and partial interval recording. Journal of Applied Behavior Analysis, 19(1), 73–77. Herrnstein, R. J. (1970). On the law of effect. Journal of the Experimental Analysis of Behavior, 13(2), 243–266. Houten, R. V., & Nau, P. A. (1980). A comparison of the effects of fixed and variable ratio schedules of reinforcement on the behavior of deaf children. Journal of Applied Behavior Analysis, 13(1), 13–21. Hursh, S. R., & Silberberg, A. (2008). Economic demand and essential value. Psychological Review, 115(1), 186–198. https://doi.org/10.1037/0033-295X.115.1.186. Husain, T. (1977). Topology and maps. Springer US. ISBN-13: 9781461588009. Iwata, B. A., Pace, G. M., Kissel, R. C., Nau, P. A., & Farber, J. M. (1990). The self-injury trauma (SIT) scale: A method for quantifying surface tissue damage caused by self-­ injurious behavior. Journal of Applied Behavior Analysis, 23(1), 99–110. Jarmolowicz, D. P., Greer, B. D., Killeen, P. R., & Huskinson, S. L. (2022). Applied quantitative analysis of behavior: What it is, and why we care—Introduction to the special section. Perspectives on Behavior Science, 44(4), 503–516. https://doi.org/10.1007/ s40614-021-00323-w. Jarmolowicz, D. P., Reed, D. D., Schneider,T. D., Smith, J.,Thelen, J., Lynch, S., … Bruce, J. M. (2020). Behavioral economic demand for medications and its relation to clinical measures in multiple sclerosis. Experimental and Clinical Psychopharmacology, 28(3), 258–264. https://doi.org/10.1037/pha0000322. Jarvis, B. P., & Dallery, J. (2017). Internet-based self-tailored deposit contracts to promote smoking reduction and abstinence. Journal of Applied Behavior Analysis, 50(2), 189–205. https://doi.org/10.1002/jaba.377. Javed, A. (2022). Islamaphobia: Using data science to explore connections between ­anti-Islamic incidents and verbal behavior on twitter. 
In Association for Behavior Analysis International 48th annual convention. Boston, MA. Jech, T. (1978). Set theory. Springer. https://link.springer.com/content/pdf/10.1007/3-54044761-X.pdf. Ji, S., Pan, S., Li, X., Cambria, E., Long, G., & Huang, Z. (2019). Suicidal ideation detection: A review of machine learning methods and applications. arXiv:1910.12611v1. Jones,V. M., Mendes Batista, R. J., Bults, R. G. A., op den Akker, H.,Widya, I. A., Hermens, H. J., … Vollenbroek-Hutten, M. M. R. (2011). Interpreting streaming biosignals: In search of best approaches to augmenting mobile health monitoring with machine learning for adaptive clinical decision support. In Paper presented at workshop on learning from medical data streams, LEMEDS 2011, Bled, Slovenia. Kahng, S.W., Chung, K. M., Gutshall, K., Pitts, S. C., Kao, J., & Girolami, K. (2010). Consistent visual analyses of intrasubject data. Journal of Applied Behavior Analysis, 43, 35–45. https:// doi.org/10.1901/jaba.2010.43-35. Kaplan, B. A., & Koffarnus, M. N. (2019). Timeline followback self-reports underestimate alcohol use prior to successful contingency management treatment. Alcohol and Alcoholism, 54(3), 258–263. https://doi.org/10.1093/alcalc/agz031. Keijsers, G. P. J., Schaap, C. P. D. R., & Hoogduin, C. A. L. (2000). The impact of interpersonal patient and therapist behavior on outcome in cognitive-behavior therapy: A review of empirical studies. Behavior Modification, 24(2), 264–297. https://doi. org/10.1177/0145445500242006. Khan, A. A., Laghari, A. A., & Awan, S. A. (2021). Machine learning in computer vision: A review. EAI Transactions on Scalable Information Systems, e4. https://eudl.eu/doi/10.4108/ eai.21-4-2021.169418. Kim, Y. B. (2018). The scalable neural architecture behind Alexa’s ability to select skills. In Conversational AI/natural-language processing. Retrieved from: https://www.amazon. science/blog/the-scalable-neural-architecture-behind-alexas-ability-to-select-skills.



Data recording and analysis

243

Kurti, A. N., & Dallery, J. (2013). Internet-based contingency management increases walking in sedentary adults. Journal of Applied Behavior Analysis, 46(3), 568–581. https://doi. org/10.1002/jaba.58. Kwan, M. P., Wang, J., Tyburski, M., Epstein, D. H., Kowalczyk, W. J., & Preston, K. L. (2019). Uncertainties in the geographic context of health behaviors: A study of substance users’ exposure to psychosocial stress using GPS data. International Journal of Geographical Information Science, 33(6), 1176–1195. https://doi.org/10.1080/13658816.2018.1503276. Lanovaz, M. J., Giannakakos, A. R., & Destras, O. (2020). Machine learning to analyze ­single-case data: A proof of concept. Perspectives on Behavior Science, 43, 21–38. https:// doi.org/10.1007/s40614-020-00244-0. Lanovaz, M. J., & Hranchuk, K. (2021). Machine learning to analyze single-case graphs: A comparison to visual inspection. Journal of Applied Behavior Analysis, 54(4), 1541–1552. https://doi.org/10.1002/jaba.863. LeBlanc, L. A., Raetz, P. B., Sellers, T. P., & Carr, J. E. (2015). A proposed model for selecting measurement procedures for the assessment and treatment of problem behavior. Behavior Analysis in Practice, 9(1), 77–83. https://doi.org/10.1007/s40617-015-0063-2. Lerman, D. C., Valentino, A., & LeBlanc, L. A. (2016). Discrete trial training. In R. Lang, T. Hancock, & N. Singh (Eds.), Early intervention for young children with autism spectrum disorder. Evidence-based practices in behavioral health (pp. 47–83). Springer. Li, J., So,Y., Xu, T., & Jiang, S. (2018). Deep convolutional neural network based ECG classification system using fusion and one-hot encoding techniques. Mathematical Problems in Engineering, 7354081. https://doi.org/10.1155/2018/7354081. Linstead, E., Dixon, D. R., Hong, E., Burns, C. O., French, R., Novack, M. N., & Granpeesheh, D. (2017). An evaluation of the effects of intensity and duration on outcomes across treatment domains for children with autism spectrum disorder. 
Translational Psychiatry, 7(9), e1234. https://doi.org/10.1038/tp.2017.207. Mace, F. C., & Lalli, J. S. (1991). Linking descriptive and experimental analyses in the treatment of bizarre speech. Journal of Applied Behavior Analysis, 24(3), 553–562. Mag, P. J. K. D., Titze, S., Oja, P., Jones, A., & Ogilvie, D. (2011). Use of global positioning systems to study physical activity and the environment: A systematic review. American Journal of Preventive Medicine, 41(5), 508–515. https://doi.org/10.1016/j. amepre.2011.06.046. Malik, M., Malik, M. K., Mehmood, K., & Makhdoom, I. (2021). Automatic speech recognition: A survey. Multimedia Tools and Applications, 80(6), 9411–9457. https://doi. org/10.1007/s11042-020-10073-7. Mathis, C. E., Johnson, D. F., & Collier, G. (1996). Food and water intake as functions of resource consumption costs in a closed economy. Journal of the Experimental Analysis of Behavior, 65(3), 527–547. Mazur, J. E. (1981). Optimization theory fails to predict performance of pigeons in a two-­ response situation. Science, 214(4522), 823–825. http://www.jstor.org/stable/1686991. Mazur, J. E. (1987). An adjusting procedure for studying delayed reinforcement. In M. L. Commons, J. E. Mazur, J. A. Nevin, & H. Rachlin (Eds.), Quantitative analyses of behavior: Vol. 5.The effect of delay and of intervening events on reinforcement value (pp. 55–73). Erlbaum. Merbitz, C. T., Merbitz, N. H., & Pennypacker, H. S. (2015). On terms: Frequency and rate in applied behavior analysis. The Behavior Analyst, 39(2), 333–338. https://doi. org/10.1007/s40614-015-0048-z. Mudford, O. C., Martin, N.T., Hui, J. K., & Taylor, S. A. (2009). Assessing observer accuracy in continuous recording of rate and duration:Three algorithms compared. Journal of Applied Behavior Analysis, 42(3), 527–539. https://doi.org/10.1901/jaba.2009.42-527. Myers, C., Zane, T.,Van Houten, R., & Francisco,V. T. (2022). 
The effects of pedestrian gestures on driver yielding at crosswalks: A systematic replication. Journal of Applied Behavior Analysis, 55(2), 572–583. https://doi.org/10.1002/jaba.905.

244

Applied behavior analysis advanced guidebook

Nair, L. R., Shetty, S. D., & Shetty, S. D. (2018). Applying spark based machine learning to model on streaming big data for health status prediction. Computers & Electrical Engineering, 65, 393–399. https://doi.org/10.1016/j.compeleceng.2017.03.009. Nankervis, K., Ashman, A., Weekes, A., & Carroll, M. (2020). Interactions of residents who have intellectual disability and challenging behaviours. International Journal of Disability, Development and Education, 67(1), 58–72. Nefdt, N., Koegel, R., Singer, G., & Gerber, M. (2010). The use of a self-directed learning program to provide introductory training in pivotal response treatment to parents of children with autism. Journal of Positive Behavior Interventions, 12(1), 23–32. Ometov, A., Shubina, V., Klus, L., Skibinska, J., Saafi, S., Pascacio, P., … Lohan, E. S. (2021). A survey on wearable technology: History, state-of-the-art and current challenges. Computer Networks, 193(5), 108074. https://doi.org/10.1016/j.comnet.2021.108074. Partington, J. W., & Bailey, J. S. (1993). Teaching intraverbal behavior to preschool children. The Analysis of Verbal Behavior, 11, 9–18. Petursdottir, A. I., Carr, J. E., & Michael, J. (2005). Emergence of mands and tacts of novel objects among preschool children. The Analysis of Verbal Behavior, 21(1), 59–74. https:// doi.org/10.1007/BF03393010. Pew Research Center. (2021). Mobile fact sheet. Retrieved from: https://www.pewresearch. org/internet/fact-sheet/mobile/. Prapulla, S. B., Shobha, G., & Thanuja, T. C. (2015). Smart refrigerator using internet of things. Journal of Multidisciplinary Engineering Science and Technology, 2(7), 1795–1801. Provision Living.(2019).Smartphone screen time:Baby boomers and millenials.Retrieved from:https:// provisionliving.com/blog/smartphone-screen-time-baby-boomers-and-millennials/. Rabbi, M., Pfammatter, A., Zhang, M., Spring, B., & Choudhury, T. (2015). 
Automated personalized feedback for physical activity and dietary behavior change with mobile phones: A randomized controlled trial on adults. Journal of Medical Intervention Research mHealth and uHealth, 3, e42. https://doi.org/10.2196/mhealth.4160. Rachlin, H. (2006). Notes in discounting. Journal of the Experimental Analysis of Behavior, 85(3), 425–435. https://doi.org/10.1901/jeab.2006.85-05. Rader, A. E., Young, M. E., & Leaf, J. B. (2021). A quantitative analysis of accuracy, reliability, and bias in judgments of functional analyses. Journal of the Experimental Analysis of Behavior, 116(2), 166–181. https://doi.org/10.1002/jeab.711. Raj, S. (2019). Building chatbots with Python. Apress. ISBN: 978-1-4842-4095-0. Reddy, G. T., Reddy, M. P. K., Lakshmanna, K., Kaluri, R., Rajput, D. S., Srivastava, G., & Baker, T. (2020). Analysis of dimensionality reduction techniques on big data. IEEE Access, 8, 54776–54788. https://doi.org/10.1109/ACCESS.2020.2980942. Ricciardi, J. N., Luiselli, J. K., & Camare, M. (2006). Shaping approach responses as intervention for specific phobia in a child with autism. Journal of Applied Behavior Analysis, 39(4), 445–448. https://doi.org/10.1901/jaba.2006.158-05. Roane, H. S., Vollmer, T. R., Ringdahl, J. E., & Marcus, B. A. (1998). Evaluation of a brief stimulus preference assessment. Journal of Applied Behavior Analysis, 31(4), 605–620. Rodriguez, P., Bautista, M. A., Gonzalez, J., & Escalera, S. (2018). Beyond one-hot encoding: Lower dimensional target embedding. Image and Vision Computing, 75, 21–31. https:// doi.org/10.1016/j.imavis.2018.04.004. Rogers, I. (2002). The Google pagerank algorithm and how it works. Retrieved from: https://cs.wmich.edu/gupta/teaching/cs3310/lectureNotes_cs3310/Pagerank%20 Explained%20Correctly%20with%20Examples_www.cs.princeton.edu_∼chazelle_ courses_BIB_pagerank.pdf. Sidman, M. (1960). Tactics of scientific research: Evaluating experimental data in psychology. Basic Books. 
Sigafoos, J., Carnett, A., O'Reilly, M. F., & Lancioni, G. E. (2019). Discrete trial training: A structured learning approach for children with ASD. In S. G. Little, & A. Akin-Little (Eds.), Behavioral interventions in schools: Evidence-based positive strategies (pp. 227–243). American Psychological Association. https://doi.org/10.1037/0000126-013.



Data recording and analysis

245

Simons, R., Koordeman, R., de Looff, P., & Otten, R. (2021). Physiological measurements of stress preceding incidents of challenging behavior in people with severe to profound intellectual disabilities: Longitudinal study protocol of single-case studies. JMIR Research Protocols, 10(7), 224911. https://doi.org/10.2196/24911. Skinner, B. F. (1938). The behavior of organisms. Appleton-Century-Crofts. Skinner, B. F. (1981). Selection by consequences. Science, 213, 501–504. Sosine, J., & Cox, D. J. (2022). Identifying trends in the open-access behavior analytic literature via computational analyses (I): Simple descriptions of text. In The Analysis of Verbal Behavior. in press. SQAB (n.d.). Society for the quantitative analysis of behavior. https://www.sqab.org/index.html. Statista. (2022). Daily social media usage worldwide 2012–2022. Retrieved from: https://www. statista.com/statistics/433871/daily-social-media-usage-worldwide/#:∼:text=How%20 much%20time%20do%20people,minutes%20in%20the%20previous%20year. Strang, G. (2003). Introduction to linear algebra. Third Edition: Wellesley-Cambridge Press, ISBN:0961408898. Trevisan, D. F., Becerra, L., Benitez, P., Higbee, T. S., & Gois, J. P. (2019). A review of the use of computational technology in applied behavior analysis. Adaptive Behavior, 27(3), 183–196. https://doi.org/10.1177/1059712319839386. Twitter-scraper. (2020). Scrape the Twitter frontend API without authentication. https://pypi.org/ project/twitter-scraper/. Vahabzadeh, M., Mezghanni, M., Lin, J., Epstein, D. H., & Preston, K. L. (2010). PGIS: Electronic diary data integration with GPS data initial application in substance-abuse patients. In IEEE 23rd international symposium on computer-based medical systems (CBMS) (pp. 474–479). https://doi.org/10.1109/CBMS.2010.6042691. Van Dantzig, S., Bulut, M., Krans, M.,Van der Lans, A., & De Ruyter, B. (2018). Enhancing physical activity through context-aware coaching. 
In Proceedings of pervasive health conference (pervasive health 2018) ACM, New York. https://doi.org/10.1145/3240925.3240928. Van Dantzig, S., Geleijnse, G., & Halteren, A. T. (2013). Toward a persuasive mobile application to reduce sedentary behavior. Personal and Ubiquitous Computing, 17, 1237–1246. https://doi.org/10.1007/s00779-012-0588-0. Walsh, C. G., Ribeiro, J. D., & Franklin, J. C. (2017). Predicting suicide attempts over time through machine learning. Clinical Psychological Science, 5, 457–469. https://doi. org/10.1177/2167702617691560. Walsh, C. G., Ribeiro, J. D., & Franklin, J. C. (2018). Predicting suicide attempts in adolescents with longitudinal clinical data and machine learning. The Journal of Child Psychology and Psychiatry, 59, 1261–1270. https://doi.org/10.1111/jcpp.12916. Wang, J., Fan, Y., Palacios, J., Chai, Y., Guetta-Jeanrenaud, N., Obradovich, N., … Zheng, S. (2022). Global evidence of expressed sentiment alterations during the COVID-19 pandemic. Nature Human Behaviour, 6, 349–358. https://doi.org/10.1038/ s41562-022-01312-y. Weiss, M. J. (2001). Expanding ABA intervention in intensive programs for children with autism:The inclusion of natural environment training and fluency based instruction. The Behavior Analyst Today, 2(1), 182–186. Ye, Z., Li,Y., Fathi, A., Han,Y., Rozga, A., Abowd, G. D., & Rehg, J. M. (2012). Detecting eye contact using wearable eye-tracking glasses. In Proceedings of the 2012 ACM conference on ubiquitous computing (pp. 699–704). https://doi.org/10.1145/2370216.2370368. Yoshino, T., & Reed, P. (2008). Effect of tone-punishment on choice behaviour under a closed economy. European Journal of Behavior Analysis, 9(1), 43–52. https://doi.org/10.1 080/15021149.2008.11434294. Zhou, K. (2020). Scrape tweets using twitter package. Rstudio. Retrieved from: https://rpubs. com/Kyleen1991/594933. Zuvić-Butorac, M. (2006). Characteristics of categorical data analysis. Acta Medica Croatica, 60(1), 63–79. 
https://pubmed.ncbi.nlm.nih.gov/16526308/.


CHAPTER 10

Behavior analytic supervision conducted remotely
Lisa N. Britton (Britton Behavioral Consulting, Pinole, CA, United States)
Tyra P. Sellers (TP Sellers, LLC, Highlands Ranch, CO, United States)
https://doi.org/10.1016/B978-0-323-99594-8.00010-6
Copyright © 2023 Elsevier Inc. All rights reserved.

Whereas most behavior analysts would agree that in-person supervision is preferred, remote supervision is becoming a common alternative for trainees who are unable to obtain in-person supervision within their immediate work environment. This sentiment is supported by the Board Certified Behavior Analyst (BCBA) Handbook, which indicates that in-person, onsite observation is preferred; however, observations may be conducted using asynchronous (i.e., recorded video) or synchronous (i.e., live video conference) formats (Behavior Analyst Certification Board, 2021). Trainees cite several reasons for seeking remote supervision, including the lack of BCBAs within their work environment, living in a location in which there are few BCBAs (Britton & Cicoria, 2019), and working in environments in which the BCBAs' caseloads are too large to allow for fieldwork supervision of trainees. In addition, recent events such as the COVID-19 pandemic have highlighted the need to be flexible with the delivery of both services and the supervision of those services. In cases where barriers prevent all supervision activities from occurring in-person, a hybrid of in-person and remote supervision can be an effective solution to provide optimal support for trainees. In fact, even when supervision could occur exclusively in-person, targeted use of remote supervision may be leveraged to enhance the overall supervision process. In less common situations (e.g., significant geographic distance), all supervision activities may occur remotely. Simmons, Ford, Salvatore, and Moretti (2021) highlighted potential advantages of remote supervision, including flexibility in scheduling, reduced travel, access to supervision in rural areas, and decreased reactivity on the part of clients during observations.
These authors also highlighted disadvantages such as session flow disruptions while delivering feedback, missed opportunities to deliver necessary feedback, technical challenges, and difficulties with establishing rapport between supervisor and trainee. In this study, Simmons et al. (2021) sent a survey to trainees accruing supervised fieldwork experience hours during the COVID-19 pandemic with the goal of identifying preference for in-person, hybrid, and remote supervision. They also attempted to identify which mode of supervision seemed to be the most effective. The results of that survey revealed that the majority of respondents preferred in-person supervision, followed by hybrid supervision. In addition, they found that in-person supervision was reported as most effective (Simmons et al., 2021). It is important to note that the sudden shift to remote supervision during the pandemic could have affected the quality of supervision, thereby impacting the opinions expressed by respondents. Despite its challenges, it is clear that remote supervision will be a necessary component of service delivery within the field of behavior analysis. The purpose of this chapter is to identify and highlight best practices in the use of remote supervision, with a focus on strategies to address commonly experienced challenges.

Systems to promote best practices

There are challenges and requirements unique to providing remote supervision that may not be readily apparent to those who have experienced only in-person supervision. Therefore, prior to starting remote supervision, a supervisor must develop specific systems and practices that will allow them to be successful. Key areas to consider include the type of technology to use, how to foster the supervision relationship, a scope and sequence of the topics covered, the delivery of content, a method of ensuring competency, and a process for evaluating the effectiveness of supervision. We cover these topics with an eye toward how to develop these systems within the scope of remote supervision.

Technology

A necessary component of remote supervision is identifying the videoconferencing software that the supervisor will use to engage with the trainee (e.g., Zoom, GoToMeeting, WebEx). When selecting a platform, it is critical to identify systems that align with the requirements of the Health Insurance Portability and Accountability Act of 1996 (HIPAA) and the Family Educational Rights and Privacy Act of 1974 (FERPA). With respect to HIPAA, a supervisor needs to consider how to protect the privacy of clients' protected health information (PHI) and ensure that all information is stored




and shared securely (US Department of Health and Human Services, 2022). PHI consists of individually identifiable health information, regardless of how it is maintained and transmitted. If a trainee works with clients whose funding source is health insurance, HIPAA requirements will apply to discussions about those clients as well as to the maintenance and storage of documents related to those clients. Under FERPA, by contrast, a supervisor is protecting student records (US Department of Education, 2022). It is important to note that FERPA does not apply only to the students served by the trainee. If a supervisor is working with a trainee through a university-sponsored fieldwork experience, documents and videos obtained through that experience become part of the trainee's educational record and FERPA rules apply (Cavalari, Gillis, Kruser, & Romanczyk, 2014). It will be important to have a conversation at the outset of the supervision relationship to reach a full understanding of whether HIPAA and FERPA apply. Alternatively, a supervisor can assume that both apply in all situations and secure content accordingly. In either case, individuals certified through the BACB are required to protect the confidentiality of clients, supervisees, and trainees as highlighted in the Ethics Code for Behavior Analysts (standards 2.03, 3.07, 3.10, 4.05, 5.02, and 6.05; Behavior Analyst Certification Board, 2020). When a supervisor and a trainee are meeting through videoconferencing software, those meetings are subject to the Privacy Rule. The Privacy Rule gives an individual certain rights regarding their PHI and prohibits the disclosure of such information without their consent (Centers for Disease Control and Prevention, 2017). Care should be taken to avoid using full names or other PHI within this context (Cavalari et al., 2014). Cavalari et al.
(2014) make recommendations for auditing one's own practices for ongoing compliance through the use of a monitoring checklist that they provide. We recommend reading this article in its entirety for additional recommendations regarding compliance with these regulations, as compliance will be a key component of providing remote supervision. When evaluating the security features of videoconferencing software, a supervisor can verify the security standards for the system on the company's website (Britton & Cicoria, 2019). For example, on the Zoom website one can click on Solutions, and one of the options is Zoom for Healthcare (Zoom, 2021). It is also important to note that using an unencrypted Wi-Fi network will compromise the security features provided within the software (Cavalari et al., 2014). Care should be taken to ensure that neither the supervisor nor the trainee is logging into meetings on an unsecured or public network.


Additional technology that will be necessary includes the hardware to run the videoconferencing software (e.g., laptop, tablet), a web camera, bug-in-the-ear/Bluetooth technology, and a method of recording client observations if these activities are occurring asynchronously. Zoder-Martell, Markelz, Floress, Skriba, and Sayyah (2020) provided a review of three technologies to support observations during the telehealth process: web cameras, Swivl, and telepresence robots. In our experience, web cameras provide a sufficient level of detail to deliver effective feedback regarding interactions with clients. When conducting synchronous observations, it will be helpful for the trainee to use bug-in-the-ear or Bluetooth technology to reduce reactivity (Cooper, Heron, & Heward, 2020). When a trainee is recording their interactions with clients for later review, remind them of their HIPAA and FERPA requirements related to the collection and storage of these videos, and work with them to identify the equipment that should be used for such purposes. Fig. 1 includes a checklist that supervisors can provide to trainees covering the technology requirements and necessary protections, to ensure that the trainee has the resources needed for an effective supervision experience. In addition to ensuring that technology needs have been addressed, there are other important considerations to address proactively. The trainee should ensure that consent has been provided for the supervisor to observe clients if the supervisor is not employed by the same company. Consent to record clients will also need to be obtained if observations are taking place asynchronously. Care should be taken to ensure that consent is obtained for any client or student who comes into view during the observation/recording process.
Therefore, the trainee needs to maintain a list of clients for whom consent is obtained and only conduct observations/recordings when those clients are present. This caveat is especially important for observations that occur in a classroom setting, as there are typically more students present within that context and it can be more difficult to control who is recorded during the session. Both the supervisor and the trainee will need to be thoughtful about the location in which meetings take place. We already highlighted the concerns about using an open Wi-Fi network and how that can compromise security. Another factor is ensuring that meetings take place in a location in which the screen and audio are not accessible to others (e.g., a private office, sitting with one's back to the wall, using headphones). Supervisors should take the time to develop job aids for the technology used. This will give trainees the tools necessary to navigate the equipment and software effectively. Finally, the supervisor and trainee should have a conversation ahead of time to troubleshoot alternative plans for when technology does not work. This may include flexible scheduling to reschedule a meeting or using an alternative form of communication when technology-related issues arise. It is a good idea to have a premeeting that allows for introductions, reviewing the job aids, and testing all technology and backup plans if connection is lost. Fig. 2 includes a checklist for supervisors to stay organized during the supervision process. It is divided into three sections: (a) preparing for remote supervision, (b) presupervision meeting, and (c) after supervision meetings.

Technology Considerations Checklist for Trainees
Fill in the specific information in the "Specific Technology or Considerations" column relevant to the remote supervision context so that your trainee can ensure they have access to, or have time to get, the needed technology. The "Questions/Notes" section can be used by the trainee to indicate any questions they have for follow-up or factors they need to document (e.g., they have ordered something, they are looking into something). This should be stored in a shared location. Ask the trainee to return it at least 48 hours before the pre-meeting or first supervision meeting.
Trainee Name: ______
Columns: Specific Technology or Actions | Confirm | Questions/Notes
- Video conferencing platform
- Security features enabled
- Confirm HIPAA compliance
- Confirm FERPA compliance
- Secure Wi-Fi access
- Appropriate internet speed
- Laptop or tablet
- Web camera
- Ear buds
- Secure recording application
- Tripod
- Secure cloud-based storage
- Other (four additional blank rows)

Fig. 1  Technology considerations checklist for trainees.
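The consent-tracking practice described above (maintain a list of clients with documented consent and record only when everyone in view is on it) reduces to a simple set-membership check that runs before, not after, a recording starts. The sketch below is purely illustrative; the roster structure and client identifiers are hypothetical and not drawn from the chapter:

```python
# Illustrative sketch only: gate an observation/recording on documented consent.
# The roster and client IDs below are hypothetical placeholders.

CONSENTED = {"client_01", "client_02", "client_04"}  # clients with signed consent

def may_record(present_clients):
    """Return True only if every client currently in view has documented consent."""
    missing = set(present_clients) - CONSENTED
    # If anyone present lacks consent, do not record (e.g., reschedule the
    # observation or reposition the camera so only consented clients are in view).
    return not missing
```

In practice such a roster would live wherever the supervisor and trainee already store secure documents; the point is simply that the check precedes every recording.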


Supervision relationship

Once a supervisor and trainee have made a commitment to work together, the first step is for the supervisor to develop a contract for review with the trainee. Garza, McGee, Schenk, and Wiskirchen (2018) recommend reviewing each section of the contract during the first meeting, pausing to answer questions after each section, and initialing each section. This is also the time to highlight the expectations that both the supervisor and trainee have related to the structure of supervision, for example, the mode and frequency of communication, the amount of time to respond to emails, and who is responsible for developing the agenda.

Remote Supervision Checklist
Use this checklist throughout the remote supervisory relationship to ensure that steps are not missed.

PREPARING FOR REMOTE SUPERVISION
- Decide on a remote meeting platform and ensure it complies with any needed requirements related to HIPAA, FERPA, etc.
- Identify any other needed technology for self and supervisee/trainee (e.g., lapel mic, ear buds, tripod stand)
- Identify expectations for self and supervisee/trainee (e.g., back-up plan if connection is lost, timeliness regarding start time and length of grace period, file storing and sharing)
- Prepare any needed job aids for technology use or troubleshooting
- Create a contract including the necessary components related to remote supervision (e.g., clear indication that the modality of supervision is remote, affirmation that requirements for confidentiality and privacy will be met as appropriate)
- Confirm that supervisor and supervisee/trainee have an appropriate setting for secure viewing
- Plan a pre-meeting to test all technology and troubleshoot any issues (e.g., firewalls, needed log-ins, audio and video settings)
- Create an agenda
- Gather and organize documents (e.g., contract, evaluation forms, feedback tracking system, hours tracking system, job aids)
- Send the supervisee/trainee the Technology Considerations Checklist before the pre-meeting and request any other documents (e.g., syllabi from completed courses, required confidentiality or consent forms)

PRE-SUPERVISION MEETING
This meeting could be a 15-min stand-alone meeting or could occur 15 min before the start of the first supervision meeting.
- Brief introductions
- Review contract
- Test needed features of the meeting platform (e.g., mic, speakers, camera, muting or pausing mic/speakers/camera, chat feature, record option) and connectivity
- Test any additional technology in use (e.g., lapel or external mic, external webcam, ear buds, tripod or other stand)
- If the supervisee/trainee will be expected to move around during observations, test out optimal placements for the laptop/tablet; consider having the supervisee/trainee mark the locations for easy placement during sessions (e.g., sticky note or painter's tape)
- Assess the trainee's comfort level using bug-in-ear by having them talk out loud as if interacting with a client or caregiver while you simultaneously deliver instructions, praise, and corrective feedback; conduct any needed practice sessions to increase comfort level and ability
- Review and address any needs related to file sharing/storage
- Discuss and test a back-up plan for technology issues (e.g., exiting and re-entering, turning off the camera to support connectivity, switching to a phone call)

AFTER SUPERVISION MEETINGS
- Complete any documentation (e.g., evaluation and feedback forms, hours tracking)
- If appropriate, carry out a post-session debriefing or ensure a follow-up meeting is scheduled soon after (e.g., within 48 hours)
- Ensure that required forms are completed, signed, and shared in a timely manner
- Ensure that any recordings or images are properly stored for later review; schedule a date for deletion

Fig. 2  Remote supervision checklist.

We recommend sending a copy of the contract to the trainee at least three business days prior to the meeting to give them sufficient time to review the contract ahead of time. At the time of the meeting, the supervisor should share their screen to review the contract section by section. They can stop to answer questions and engage in some comprehension checks along the way. This is especially important because it may be difficult to assess cues for understanding that are more readily observable when meeting in person. Edits can be made during this process, resulting in a finalized document that is ready for signatures. The supervisor and trainee will need to determine a method of signing the contract that suits the remote supervision process. They may choose to print, sign, scan, and send the document or use an electronic signature format.

We recommend starting each supervision meeting with a check-in to see how the trainee is doing. This can be an informal way to establish rapport with the trainee by showing interest in them, an important step especially during challenging times such as the COVID-19 pandemic. Certain stressors run the risk of impacting service delivery, and it is helpful to identify these issues before they have an effect. It is also recommended to solicit feedback informally at the conclusion of the meeting (LeBlanc, Sellers, & Ala'i, 2020) by asking what is working well and what the trainee would like to see more of during the supervision process. This is a critical consideration during remote supervision because the supervisor will not have the advantage of the informal check-ins that occur frequently when two people are working within the same environment. In addition, the videoconferencing context may be a barrier to noticing indicators of potential issues such as affect, body language, and appearance (e.g., looking disheveled).

LeBlanc et al. (2020) wrote an entire book on developing an effective supervisory relationship. While these authors did not write the book from the perspective of remote supervision, many of their recommendations can be adapted to a remote context. A strong emphasis was placed on the importance of developing a collaborative supervision relationship (i.e., feedback should be bidirectional). This can be done by highlighting the importance of the supervisor receiving feedback from the trainee from the beginning. LeBlanc et al. (2020) identify the importance of asking the trainee how they would like to deliver this feedback (e.g., via email, vocally at the end of the meeting). The trainee may find it easier to provide this feedback in a remote context compared to an in-person supervision situation. Care should be taken to cultivate this aspect of the relationship by soliciting feedback in the method identified by the trainee. The supervisor can maintain visual reminders (e.g., a sticky note) to increase the probability of gaining this information from the trainee on a regular basis.

Culture is another area of focus when fostering the supervisory relationship. LeBlanc et al. (2020) listed several activities designed to increase the supervisor's cultural awareness, assess one's place of privilege, and explore one's perspective.
To gain cultural awareness, the authors provide a structured interview in which the supervisor starts by answering the questions themselves and progresses to interviewing a supervisee or colleague from a different culture. It will be important to have web cameras in use while conducting this interview. The purpose of exploring one's place of privilege is to identify ways for your trainee to feel safe and included in the same ways as other trainees who are more privileged. Examples of this include providing diverse scenarios and using nonbinary pronouns within trainings. By exploring one's perspective, the supervisor has an opportunity to realize that while some perspectives are easily apparent, we will not fully understand other perspectives without being told directly. LeBlanc et al. (2020) suggest that supervisors also ask trainees to complete a cultural satisfaction survey, and we recommend using an online survey process such as Google Forms or SurveyMonkey to gain this information. While these topics can be challenging to discuss, they are of critical importance for improving the cultural responsiveness of a supervisor.

In any relationship, professional or otherwise, mistakes will be made, and conflict will occur. Our ability to respond effectively when this happens is important. When a supervisor observes avoidance behavior in the trainee or themselves, we advocate taking a functional approach to determine why these behaviors are occurring (Sellers, LeBlanc, & Valentino, 2016). Supervisors will be much more effective at addressing concerns when they understand whether the underlying function is an issue with the supervisory relationship or a skill deficit. If the concern is related to something that the supervisor has done that was off-putting, acknowledge the event and indicate what changes will be made to prevent it from happening again. Notably, these challenging conversations may be easier to navigate remotely than in an in-person interaction.

In addition, Sellers, LeBlanc, et al. (2016) make recommendations for addressing organization, time management, and interpersonal skills as well as accepting performance feedback. They highlight maintaining a satisfying supervision relationship if possible, as it will be an opportunity to build these skills for the trainee as well as strengthen the supervisor's skills. If there appears to be a strain in the supervision relationship, we recommend that the supervisor address this concern during a synchronous meeting in a quiet, private location free of distractions and with web cameras in use. This will be the best way to gauge verbal and other cues regarding these concerns. Written communication can occur as a follow-up to the discussion but should not be the sole or first mode of communication. There is a risk that the written word can be misinterpreted, and it does not allow for easy elaboration and clarification.

Scope and sequence

When considering the areas of focus during the supervision process, it can be overwhelming to consider all the skills that a competent BCBA needs. Later in the chapter, we discuss methods of building these skills through the behavioral skills training (BST) process. Remote supervision limits BST; for example, it will be difficult to engage in role-play activities together with shared materials. Also, supervisors are unable to provide physical prompting during the instructional process and have to rely on prerecorded video models and remote observations to build skills.

Alternatively, it is helpful to use a job model that highlights the roles and responsibilities associated with a BCBA position at the trainee's place of employment (Garza et al., 2018). A job model can provide an efficient way to identify the key skills necessary to excel in a specific position within that organization. For example, if a trainee is working at a school serving students with emotional/behavioral disorders, it might not make sense to spend a significant amount of time training them to conduct discrete trial teaching (DTT) to competency. However, DTT would be an area of focus for a trainee who is providing early intervention autism services. After identifying key skills, the next step is to determine which of those skills will lend themselves to instruction through a remote supervision format. Certain skills are necessary regardless of the location or population served by the BCBA, notably measurement, delivery of positive reinforcement, and BST. Developing a scope and sequence related to these key skills ensures that the supervisor is building these skills for all trainees with whom they work. Britton and Cicoria (2019) provide recommendations regarding a scope and sequence that can be of value to new and experienced supervisors.

Delivery of content/ensuring competency

BST is a performance-based training approach that ensures a trainee can implement a strategy at a predetermined level of competence. Parsons, Rollyson, and Reid (2012) outlined a six-step process for delivering training: (a) explain the topic, including a rationale for why it is important, (b) provide a succinct written description of the topic, (c) model what the skill looks like, (d) offer an opportunity for rehearsal, (e) present performance feedback, and (f) continue the modeling, rehearsal, and performance feedback sequence until a predetermined level of competency is achieved, usually 100% demonstration of trained skills.

When conducting BST in a remote supervision format, use didactic training to introduce a topic to trainees through the applicable videoconferencing technology. A platform that allows the supervisor to share their screen will improve the interactive features of this training. Often, this can occur in a group supervision format. The didactic instruction should conclude with a competency check that includes multiple exemplars of scenarios in which the trainees type their answers into the chat feature sent only to the supervisor or respond to polls developed by the supervisor in the videoconferencing platform. For example, the supervisor can provide didactic training on the different types of group contingencies. Training would conclude with several scenarios in which the trainees identify which type of group contingency is present. The supervisor should set a predetermined level of competency to determine mastery, and trainees who do not meet criterion can continue to build these skills in individual supervision sessions.

We also recommend providing guided notes for the trainees to follow along during the didactic training. Guided notes help trainees focus their attention on the relevant material that the supervisor has identified as critically important (Austin, Gilbert Lee, Thibeault, Carr, & Bailey, 2002). The supervisor should provide the guided notes ahead of time so the trainees can print them prior to the meeting or have them available on their screen for note-taking purposes.

The next step is to demonstrate what the skill should look like. We find that using a prerecorded video model allows for a clear and consistent demonstration of the skill that incorporates all the components the supervisor wants to include. Video models are well suited for a remote supervision context because it can be challenging to show a live model with all the needed materials during a meeting. In addition, a supervisor can create multiple examples through prerecorded video models to ensure that they demonstrate all the relevant components. For example, if a supervisor is demonstrating how to conduct DTT, they can ensure that they provide examples of how to prompt a response, implement error corrections, and differentially reinforce independent responses. The effectiveness of video modeling in training staff to implement interventions with clients is well established (Catania, Almeida, Liu-Constant, & DiGennaro Reed, 2009; Collins, Higbee, & Salzberg, 2009).

The rehearsal component of BST can take place synchronously through a live demonstration of the skill or asynchronously through a recording of the skill.
Synchronous demonstrations of skills are difficult in a remote supervision context because it can be challenging to catch the right opportunity to practice a specific skill at the time identified for observation. For example, if the plan was to observe a DTT session with a client, the client may cancel that day's session, leaving the trainee without an opportunity to practice that skill. Alternatively, a trainee can invite a peer or friend to conduct a role-play of the skill when these unexpected events occur. If the skill in question is especially complex (e.g., implementing functional analysis conditions), we suggest rehearsing the skill in a role-play situation first before implementing the skill with a client. It is best practice to use a fidelity checklist or rubric to determine if the trainee implemented the skill to an appropriate level of fidelity (Britton & Cicoria, 2019).
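To make the scoring of such a checklist concrete, the sketch below computes a percentage-of-steps fidelity score. The step names and the 90% mastery criterion are hypothetical illustrations, not values drawn from the sources cited.

```python
# Score a fidelity checklist: each step is marked True (implemented correctly)
# or False; fidelity is the percentage of steps implemented correctly.
def fidelity_score(checklist: dict[str, bool]) -> float:
    return 100 * sum(checklist.values()) / len(checklist)

# Hypothetical checklist for one DTT rehearsal
dtt_rehearsal = {
    "secured client attention": True,
    "delivered clear instruction": True,
    "used correct prompt level": False,
    "reinforced immediately": True,
    "recorded trial data": True,
}

score = fidelity_score(dtt_rehearsal)
print(f"Fidelity: {score:.0f}%")  # 4 of 5 steps correct -> 80%
MASTERY = 90  # hypothetical criterion
print("Meets criterion" if score >= MASTERY else "Below criterion: re-model and rehearse")
```

Whatever tool is used, the point is the same: a fixed list of observable steps scored pass/fail yields an objective fidelity percentage that can be compared against a criterion across rehearsals.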


During the feedback process, the supervisor highlights what behavior the trainee performed well and what the supervisor would like the trainee to do differently next time (Parsons et al., 2012). The more specific the supervisor is with their feedback, the greater the chances that they will observe improved performance during the next observation. Providing another model of the skill is also helpful at this point.

Once a trainee has reached competency related to a specific skill, there should be a plan to promote maintenance and generalization of that skill (Garza et al., 2018). One strategy is to develop a schedule for the trainee to record specific skills at regular intervals for review. This ongoing evaluation of multiple exemplars will increase the likelihood of generalization and maintenance. Fig. 3 is a sample schedule that supervisors can use for the continual evaluation of trainees' specific skills. This system allows supervisors to ensure the generalization and maintenance of skills taught through the supervision process.

Trainee Skill Recording Schedule

Supervisee Name: ____________  Supervision Period: ____________

Instructions: Please use the schedule below to plan to make recordings of specific skills or tasks. Ensure that the recordings and any resulting products (e.g., completed procedural integrity checks, data) are uploaded to the shared folder 48 hours before your next scheduled 1:1 supervision meeting. Your supervisor will enter the Action Code(s) for you. The Action Codes are likely to change across supervision periods.

Action Codes (the supervisor will):
1. Review the video independently and provide you with written feedback
2. Review the video with you and discuss
3. Ask you to review the video and complete a procedural integrity check
4. Ask you to review the video and collect data and then calculate IOA using a master set of data

Task/Skill to Capture               | Duration | Frequency                            | Action
EXAMPLE: Discrete trial instruction | 5 min    | 3 samples across 2 clients each week | 2 & 3
EXAMPLE: Manding session            | 10 min   | 2 samples across 2 clients each week | 2 & 4

Fig. 3  Trainee skill recording schedule.
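Action code 4 asks the trainee to calculate IOA against a master data set. One common approach for trial-based data is trial-by-trial IOA: the number of trials on which the two records agree, divided by the total number of trials, times 100. The sketch below uses invented records purely for illustration.

```python
# Trial-by-trial interobserver agreement (IOA): compare the trainee's
# trial-level records against the supervisor's master set.
def trial_by_trial_ioa(trainee: list[str], master: list[str]) -> float:
    if len(trainee) != len(master):
        raise ValueError("records must cover the same number of trials")
    agreements = sum(t == m for t, m in zip(trainee, master))
    return 100 * agreements / len(master)

# Hypothetical 10-trial session: "+" = correct response, "-" = incorrect
master  = ["+", "+", "-", "+", "-", "+", "+", "+", "-", "+"]
trainee = ["+", "+", "-", "+", "+", "+", "+", "+", "-", "+"]

print(f"IOA: {trial_by_trial_ioa(trainee, master):.0f}%")  # 9/10 agreements -> 90%
```

Other IOA methods (e.g., total count or interval-by-interval) follow the same agreements-over-opportunities logic; the supervisor's master set simply serves as the comparison record.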




Evaluating supervision effectiveness

A key component of supervision is developing a process for evaluating the effectiveness of that supervision. The BST model discussed previously is one step in that process. Once that step is completed, a supervisor can evaluate how many observations were necessary for the trainee to achieve a level of competency related to a particular skill. These data determine whether the instructional strategies that the supervisor is using are having the desired level of impact.

LeBlanc et al. (2020) also recommend using client improvement as another indicator of effective supervision. In other words, the goal of fieldwork supervision is to build the skills necessary for a trainee to improve the lives of their clients. If clients are making an appropriate level of progress, the supervisor is meeting this mission. Supervisors can ask trainees to share the number of targets mastered for clients as a metric. They can review a graphical representation of this progress as a cumulative record monthly during individual supervision sessions by asking the trainee to share their screen with the graphical display. If progress plateaus or is highly variable for a particular skill or across skills, this can trigger a problem-solving discussion to identify adjustments to improve progress.

Social validity measures can also be used to assess the effectiveness of supervision. One area of social validity is to assess whether the changes in the client's behavior are socially significant from the perspective of parents and other stakeholders (Turner, Fisher, & Luiselli, 2016). If the supervisor does not have direct contact with parents, the social validity questionnaire can be sent via Google Forms or SurveyMonkey. Social validity can also be assessed to determine if the trainee is satisfied with the supervision process (LeBlanc et al., 2020).
This can occur using the same methods identified previously or by asking the trainee to provide feedback at the conclusion of a meeting while the supervisor takes notes. The method for receiving this information should align with the way the trainee requested to deliver feedback at the start of the supervision relationship. If a trainee provides corrective feedback during this process, the supervisor should initiate changes as quickly as possible and follow up with the trainee. The supervisor runs the risk of putting the delivery of feedback on extinction if a change is not made. Finally, we recommend that supervisors start small and implement one process for evaluating supervision at a time. As the supervisor implements one strategy, they can start the process of developing the next one. Over time, they will have a robust method for assessing the ongoing supervision process.
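As an illustration of the cumulative record described above, the sketch below tallies a running total of targets mastered from monthly counts; the monthly values are invented for the example.

```python
from itertools import accumulate

# Hypothetical monthly counts of targets a client mastered
monthly_mastered = {"Jan": 4, "Feb": 6, "Mar": 5, "Apr": 1, "May": 2}

# A cumulative record plots the running total; a flattening slope
# (small monthly gains) can trigger a problem-solving discussion.
cumulative = list(accumulate(monthly_mastered.values()))
for month, total in zip(monthly_mastered, cumulative):
    print(f"{month}: {total} targets mastered to date")
```

Graphed over months, the small gains in April and May would appear as a flattening slope on the cumulative record, prompting the supervisor and trainee to review the program.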


Future research

The practice of providing remote supervision is likely increasing due to practical reasons related to community health needs, geographic distance, and competing contingencies such as caseload numbers, drive time, and available billable hours. Our profession has responded by providing recommendations to support a thoughtful and structured approach to remote supervision and service delivery (Ninci et al., 2021; Simmons et al., 2021; Zoder-Martell et al., 2020), drawing from the broad literature base and experience. However, there is little empirical research focusing specifically on examining the overall effectiveness and acceptability of remote supervision or evaluating the utility of different remote technologies.

Many authors have emphasized a committed and positive supervisory relationship built through rapport (LeBlanc et al., 2020; Sellers, LeBlanc, et al., 2016; Sellers, Valentino, & LeBlanc, 2016). Researchers have demonstrated a positive impact of brief in-person rapport-building interactions on task completion and discretionary effort (Curry, Gravina, Sleiman, & Richard, 2019). There is preliminary evidence suggesting that trainees perceive that rapport is better established through in-person supervision activities. Therefore, it may be useful to explore the effects of structured rapport-building activities via teleconferencing platforms and to compare them with in-person rapport-building strategies. It may also be useful to explore whether specific types of rapport-building activities and technologies (e.g., laptops, tablets, telepresence robots) are more effective or preferred when conducting remote supervision to ensure the supervisor is selecting strategies most aligned with the remote modality.

Opportunities exist for research related to different types of technologies and the skills required to maximize the experience.
For example, researchers could compare different types of technology to support remote supervision and observation. The different technologies discussed by Zoder-Martell et al. (2020) could each be evaluated for effectiveness across the most common activities carried out during remote supervision and then compared to one another. Researchers could also explore trainee social validity related to different modalities and types of technology.

Finally, practicing supervisors might find it helpful to know if some technology training approaches are better than others. For example, the use of bug-in-ear technology can be helpful during supervision and training, as it ensures that the observing supervisor can deliver vocal instructions, praise, and feedback discreetly and immediately. However, this strategy may require some pretraining to be effective and to reduce distraction or frustration on the part of the trainee. Researchers could investigate the most effective and efficient method of teaching a trainee to receive vocal instruction or feedback while simultaneously interacting with a client and/or caregiver so that supervisors could pretrain trainees before using bug-in-ear technology within sessions. In addition, it could be instructive to evaluate the timing of feedback provided during remote supervision. For example, some researchers have demonstrated that providing in-person feedback prior to a teaching session was more effective at producing the desired performance change than feedback delivered at the end of a teaching session (Aljadeff-Abergel, Peterson, Wiskirchen, Hagen, & Cole, 2017).

Another area ripe for research is how to address the barriers present in remote supervision related to implementing critical components of BST, such as modeling and role-play of skills, particularly those requiring materials or another person. Trainee respondents in the study by Simmons et al. (2021) collectively indicated that in-person supervision was better for tasks like observing their supervisor with a client, role-play, and receiving support with problem behavior. Researchers could explore the use of expert video models, side-by-side video comparisons, and structured video review with procedural fidelity checklists to support modeling and practice activities during remote supervision. Researchers could also investigate virtual reality technology as a tool for more realistic role-play and practice opportunities that may increase generalization to the service setting.

Summary

The purpose of this chapter was to examine practice recommendations related to remote supervision, identify and address barriers to effective supervision strategies when engaging in remote supervision, and identify future areas of research to improve the delivery of supervision when delivered remotely. Whereas most BCBAs agree that in-person supervision is ideal, many factors contribute to the need for remote supervision for many within the field. These factors may include the location in which the trainee lives, limited access to BCBAs within the work context, and large caseloads for the BCBAs working within the company. In addition, even when trainees receive in-person supervision as their primary modality, this supervision can be augmented with remote supervision to maximize the supports provided.


We encourage supervisors to develop systems prior to starting the process of conducting remote supervision to ensure the quality of the supervision provided. Some key areas include technology planning, nurturing the supervision relationship, determining a scope and sequence of topics, delivering content, ensuring competency on the part of the trainee, and evaluating the effectiveness of supervision practices. We encourage researchers to continue evaluating specific applications of remote supervision and components that will enhance its effectiveness or minimize the impact of barriers. Conducting behavior analytic supervision remotely has not been researched extensively; therefore, greater inquiry will benefit practice on many levels.

References

Aljadeff-Abergel, E., Peterson, S. M., Wiskirchen, R. R., Hagen, K. K., & Cole, M. L. (2017). Evaluating the temporal location of feedback: Providing feedback following performance vs. prior to performance. Journal of Organizational Behavior Management, 37(2), 171–195.
Austin, J. L., Gilbert Lee, M., Thibeault, M. D., Carr, J. E., & Bailey, J. S. (2002). Effects of guided notes on university students' responding and recall of information. Journal of Behavioral Education, 11(4), 243–254. http://www.jstor.org/stable/41824287.
Behavior Analyst Certification Board. (2020). Ethics code for behavior analysts. https://www.bacb.com/wp-content/uploads/2020/11/Ethics-Code-for-Behavior-Analysts-210902.pdf.
Behavior Analyst Certification Board. (2021). Board Certified Behavior Analyst® handbook. https://www.bacb.com/wp-content/uploads/2021/09/BCBAHandbook_211228.pdf.
Britton, L. N., & Cicoria, M. J. (2019). Remote fieldwork supervision for BCBA® trainees. Academic Press.
Catania, C. N., Almeida, D., Liu-Constant, B., & DiGennaro Reed, F. D. (2009). Video modeling to train staff to implement discrete-trial instruction. Journal of Applied Behavior Analysis, 42(2), 387–392. https://doi.org/10.1901/jaba.2009.42-387.
Cavalari, R. N. S., Gillis, J. M., Kruser, N., & Romanczyk, R. G. (2014). Digital communication and records in service provision and supervision: Regulation and practice. Behavior Analysis in Practice, 8(2), 176–189. https://doi.org/10.1007/s40617-014-0030-3.
Centers for Disease Control and Prevention. (2017). CDC—Privacy legislation and regulations. OSI, OADS.
Collins, S., Higbee, T. S., & Salzberg, C. L. (2009). The effects of video modeling on staff implementation of a problem-solving intervention with adults with developmental disabilities. Journal of Applied Behavior Analysis, 42(4), 849–854. https://doi.org/10.1901/jaba.2009.42-849.
Cooper, J. O., Heron, T. E., & Heward, W. L. (2020). Applied behavior analysis (3rd ed.). Pearson.
Curry, S. M., Gravina, N. E., Sleiman, A. A., & Richard, E. (2019). The effects of engaging in rapport-building behaviors on productivity and discretionary effort. Journal of Organizational Behavior Management, 39(3–4), 1–14. https://doi.org/10.1080/01608061.2019.1667940.
Garza, K. L., McGee, H. M., Schenk, Y. A., & Wiskirchen, R. R. (2018). Some tools for carrying out a proposed process for supervising experience hours for aspiring Board Certified Behavior Analysts®. Behavior Analysis in Practice, 11(1), 62–70. https://doi.org/10.1007/s40617-017-0186-8.
LeBlanc, L. A., Sellers, T. P., & Ala'i, S. (2020). Building and sustaining meaningful and effective relationships as a supervisor and mentor. Sloan Publishing.




Ninci, J., Čolić, M., Hogan, A., Taylor, G., Bristol, R., & Burris, J. (2021). Maintaining effective supervision systems for trainees pursuing a Behavior Analyst Certification Board certification during the COVID-19 pandemic. Behavior Analysis in Practice, 14(4), 1047–1057. https://doi.org/10.1007/s40617-021-00565-9.
Parsons, M. B., Rollyson, J. H., & Reid, D. H. (2012). Evidence-based staff training: A guide for practitioners. Behavior Analysis in Practice, 5(2), 2–11. https://doi.org/10.1007/BF03391819.
Sellers, T. P., LeBlanc, L. A., & Valentino, A. L. (2016). Recommendations for detecting and addressing barriers to successful supervision. Behavior Analysis in Practice, 9(4), 309–319. https://doi.org/10.1007/s40617-016-0142-z.
Sellers, T. P., Valentino, A. L., & LeBlanc, L. A. (2016). Recommended practices for individual supervision of aspiring behavior analysts. Behavior Analysis in Practice, 9(4), 274–286. https://doi.org/10.1007/s40617-016-0110-7.
Simmons, C. A., Ford, K. R., Salvatore, G. L., & Moretti, A. E. (2021). Acceptability and feasibility of virtual behavior analysis supervision. Behavior Analysis in Practice, 14, 927–943. https://doi.org/10.1007/s40617-021-00622-3.
Turner, L. B., Fisher, A. J., & Luiselli, J. K. (2016). Towards a competency-based, ethical, and socially valid approach to the supervision of applied behavior analytic trainees. Behavior Analysis in Practice, 9(4), 287–298. https://doi.org/10.1007/s40617-016-0121-4.
US Department of Education. (2022). Family Educational Rights and Privacy Act (FERPA).
US Department of Health and Human Services. (2022). HIPAA for professionals. HHS.gov.
Zoder-Martell, K. A., Markelz, A. M., Floress, M. T., Skriba, H. A., & Sayyah, L. E. N. (2020). Technology to facilitate telehealth in applied behavior analysis. Behavior Analysis in Practice, 13(3), 596–603. https://doi.org/10.1007/s40617-020-00449-4.
Zoom. (2021). Secure video for telehealth & collaboration. Zoom.


CHAPTER 11

Teleconsultation to service settings
Evan H. Dart, Nicolette Bauermeister, Courtney Claar, Ashley Dreiss, Jasmine Gray, and Tiara Rowell
Department of Educational and Psychological Studies, College of Education, University of South Florida, Tampa, FL, United States

Consultation is an indirect service delivery model in which a consultant (e.g., a behavior analyst) works with a consultee (e.g., a caregiver) to plan and oversee the implementation of behavioral interventions to support a client. This consultative model has the potential to maximize the impact of a consultant's professional contributions by allowing their services to reach many clients through one or more consultees. In other words, by giving away effective behavior change principles to a consultee (e.g., a classroom teacher), a behavior analyst may indirectly drive positive change in many more clients than if they had worked directly with a single client in the same amount of time (Fischer & Bloomfield, 2019). For this reason, consultation has been embraced in organizational settings such as public schools and residential facilities where highly trained staff (e.g., behavior analysts) are often limited. Consultative services are in especially high demand in these settings, creating a need for more efficient service delivery.

One way in which service providers have attempted to meet the ever-increasing demand for consultative services is through the use of technology. In teleconsultation, consultants meet with consultees through various technological applications to support clients' needs. The American Psychological Association (APA) defines teleconsultation as the:

    provision of consultation services using telecommunication technologies, [with] telecommunication technologies including but not limited to telephone, mobile devices, interactive videoconferencing, email, chat, text, and Internet. (American Psychological Association, 2013, p. 3)

Although the terms telehealth and teleconsultation are sometimes used interchangeably, these terms refer to distinct concepts. Whereas teleconsultation refers specifically to the use of technologies to provide indirect consultative services, experts define telehealth technologies to be any "electronic information and telecommunications technology used to support and improve clinical health services, health administration, patient information, public health, and professional education and supervision" (Baker & Bufka, 2011, p. 405). Hence, the scope of activities encompassed under the telehealth umbrella extends beyond consultation and includes additional client-centered services, including direct service delivery. Thus, the focus of this chapter is limited to the application of telehealth technology to the delivery of consultative services.

Teleconsultation is a rapidly growing area of research and practice. One of the greatest benefits associated with a teleconsultation model is that it enables professionals to deliver services over large geographical distances in real time (Boisvert, Lang, Andrianopoulos, & Boscardin, 2010). Teleconsultation may be a particularly useful method when consultees and clients reside in rural areas or other areas where geographic barriers prevent or impede in-person, face-to-face service delivery. Still other populations may reside in more geographically accessible areas but lack the means of accessing transportation to meet with consultants. Families with significant health challenges may also encounter additional difficulties accessing consultative services in person and may particularly benefit from teleconsultative services. Furthermore, since the COVID-19 pandemic, quarantine restrictions have created an unprecedented demand for teleconsultation services. Caregivers and service providers continued to need assistance with family members and clients in face-to-face and virtual environments while operating under strict quarantine conditions. Now more than ever, the benefits of teleconsultation make it an important practice for behavioral health providers to study and understand.
What follows is a brief review of the teleconsultation literature, a guide to teleconsultation practice, and considerations for ethical practice.

Review of the literature

As mentioned previously, teleconsultation, and consultation in general, is an indirect form of service delivery whereby change in the client's behavior is driven through the consultant's interaction with a third party, the consultee (e.g., Erchul & Martens, 2002). This triadic model of service delivery was pioneered by Gerald Caplan (Erchul, 2009), a psychologist providing counseling services to youth after World War II. Recognizing that direct service delivery with such a large number of clients was not possible, Caplan began



Teleconsultation to service settings


consulting with personnel at the residential facilities in which his clients lived, providing them with support and strategies to address their needs. Although Caplan primarily operated from a psychoanalytic approach, the indirect model of psychological service delivery he introduced gained extraordinary popularity, spreading to many other fields including education and behavior analysis. Within a consultative arrangement, behavior analysts are tasked with working through another individual to drive behavioral change in a client. This individual will vary depending on the consultative context but will often be the client’s caregiver (Swanson et al., 2020), classroom teacher (Carter & Van Norman, 2010) or other health service professionals. Other fields such as school psychology have identified this indirect service delivery model as paradoxical, claiming that to serve students effectively they must concentrate their efforts and expertise on adults (Gutkin & Conoley, 1990). Others have noted a reluctance by practitioners to use their knowledge of effective behavior change principles to change consultee behavior in order to better serve their clients (Erchul, Grissom, & Getty, 2014). In contrast, behavior analysts are familiar with indirect service delivery roles and often find themselves working through other professionals (e.g., registered behavior technicians) to produce outcomes in clients; however, unlike the behavior analyst-technician relationship, in which a clear hierarchy and supervision structure is in place, consultative arrangements are assumed to be nonhierarchical, collaborative, and supportive of the consultee’s autonomy (Erchul et al., 2014). Despite these differences, effective consultation begins with the consultant’s decision to adopt an evidence-based model that is used to structure the consultation process. 
Without a clear roadmap of the consultation process, consultants risk spending time in meetings with the consultee that are not goal-directed and purposeful, potentially prolonging the consultation process unnecessarily or reducing the effectiveness of services delivered to the client. Although there are a number of different consultative process models in the literature (e.g., consultee-centered consultation), only one has been repeatedly identified as effective. Problem-solving consultation (Frank & Kratochwill, 2014), formerly known as behavioral consultation (Bergan & Kratochwill, 1990), is a four-step model that has been used for nearly 50 years to deliver effective intervention services to clients through consultees. Systematic reviews and meta-analyses of problem-solving consultation consistently identify it as an evidence-based model (Medway & Updyke, 1985; Sheridan, Welch, & Orme, 1996). Most recently, Hurwitz, Kratochwill, and Serlin (2015) found that "consultants [using problem-solving consultation] consistently produced positive effect sizes on average
across their cases" (p. 174). Notably, this meta-analysis included a large sample of consultation triads (i.e., 124 consultants working through 302 consultees to address concerns of 452 individual students) and examined effects across a range of outcome variables. Although it is beyond the scope of this chapter to provide a thorough description of the problem-solving consultation model, a brief summary will suffice. First, consultants engage the consultee in a problem identification interview in which the consultee's primary concerns about the client are identified and operationally defined. Typically, consultants will also solicit indirect functional data from the consultee (e.g., likely antecedents and typical consequences) during this meeting, along with an estimate of current performance and initial goal setting. Consultees are often asked to collect more formal data between the problem identification interview and the next meeting, the problem analysis interview. During this meeting, consultants review the results of any assessments they have conducted since the initial meeting, such as additional functional assessments and direct observations, as well as the consultee's data. Together, the consultee and consultant develop an intervention plan based on these data, and the consultee commits to implementing the plan with the support of the consultant. Finally, a problem evaluation interview occurs following a period of plan implementation, wherein the consultant and consultee review the effectiveness of the plan and discuss any modifications, if necessary. It is possible that the dyad may revert to a previous step in the problem-solving model if the plan was ineffective or the consultee's primary concern has changed.
On the other hand, if the plan was successful, consultation is typically terminated with the expectation that the consultee will use the strategies learned during the problem-solving process to address similar concerns in the future, independent of the consultant's support. Practical recommendations related to teleconsultation using the problem-solving model are provided later in the chapter. Readers are encouraged to reference other, more comprehensive sources for additional information about the problem-solving model in general (Erchul & Ward, 2016; Frank & Kratochwill, 2014). What follows is a brief review of the literature supporting the problem-solving model within the context of teleconsultation.
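The four-step sequence just described can be sketched as a minimal state machine. This is our own illustrative encoding of the model's flow, not an implementation from the consultation literature; the step names follow the interviews described above, and the revert rule is a simplification of the dyad's options.

```python
# Hypothetical sketch of the four-step problem-solving consultation
# sequence, including the option of reverting after an ineffective plan.
STEPS = [
    "problem identification interview",
    "problem analysis interview",
    "plan implementation",
    "problem evaluation interview",
]

def next_step(current, plan_effective=None):
    """Return the step that follows `current`. After the evaluation
    interview, consultation either terminates (plan worked) or reverts
    to problem identification (simplified revert rule)."""
    if current == "problem evaluation interview":
        return "terminate" if plan_effective else STEPS[0]
    return STEPS[STEPS.index(current) + 1]

print(next_step("problem analysis interview"))  # plan implementation
```

In practice the dyad may revert to any earlier step, not only the first; the sketch simply makes the cyclical, goal-directed structure of the model explicit.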

Teleconsultation using the problem-solving model

One of the first experimental examinations of teleconsultation using the problem-solving model was conducted by Bice-Urbach and Kratochwill (2016).

In this study, the authors used a concurrent multiple-baseline design across elementary school teacher consultees to examine the effects of idiosyncratic function-based behavior interventions developed through the problem-solving process to address a variety of student behaviors. Using a partial interval recording procedure to assess students' disruptive behavior, the authors conducted systematic direct observation through telehealth technology to document the effects of the interventions implemented by the six different consultees. A reduction in student disruptive behavior was observed across all six cases and teachers generally rated the process as effective. Further, the teacher consultees reported that teleconsultation was both an acceptable and feasible method through which they could address students' behavior concerns within their classroom; however, perhaps most important was the finding that teachers implemented the behavior interventions developed through teleconsultation with a high degree of fidelity (M = 94.3% of components implemented), suggesting that the transition to a telehealth format does not reduce a consultant's ability to promote consultees' adherence to treatment protocols. Similarly, Fischer et al. (2017) used a problem-solving teleconsultation model to promote implementation of differential reinforcement procedures among three elementary school teachers across schools in Utah and Mississippi. Consultants in each state engaged in the problem-solving process with teacher participants in the corresponding state to identify their primary concern with the student client and develop an idiosyncratic behavior intervention plan to address the concern. All three teachers implemented the plans with fidelity, with one requiring performance feedback. As a result, positive client outcomes were documented across all three cases and the teacher consultees rated the teleconsultation process as acceptable and feasible.
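Fidelity figures like those reported in these studies (e.g., M = 94.3% of components implemented) are simple proportions of plan components delivered as intended. The sketch below illustrates the computation with a hypothetical component checklist; the function name and data are ours, not the authors'.

```python
def implementation_fidelity(checklist):
    """Percentage of intervention plan components implemented as intended.

    `checklist` is a list of booleans, one per plan component, marked
    True when the component was observed to be delivered correctly.
    """
    if not checklist:
        raise ValueError("checklist cannot be empty")
    return 100 * sum(checklist) / len(checklist)

# Hypothetical 8-component behavior plan with 7 components observed in place.
session = [True, True, True, False, True, True, True, True]
print(implementation_fidelity(session))  # 87.5
```

Averaging such session-level percentages across observations yields the mean fidelity statistics reported in the literature.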
The long distance between consultants and consultees in this study highlights the promise of teleconsultation to deliver effective services to clients who may not be physically proximal to the consultant. Following the publication of these two studies, teleconsultation gained popularity, particularly in school settings, and was described as the “new frontier” in educational and psychological consultation (Fischer, Erchul, & Schultz, 2018; Fischer, Schultz, Collier-Meek, Zoder-Martell, & Erchul, 2018). Teleconsultation also began to see more use as a service delivery framework in other settings. For example, Bloomfield, Fischer, Clark, and Dove (2019) used teleconsultation to treat avoidant/restrictive food intake disorder in an 8-year-old client. By establishing a telehealth connection between the consultant’s office and the client’s home, the authors consulted
with the client's caregivers to develop and implement a changing criterion fixed ratio schedule of reinforcement to promote consumption of nonpreferred foods. Caregiver implementation fidelity averaged 94% throughout the duration of the study, and client consumption of nonpreferred food increased in step with the fixed ratio schedule, maintaining at a 4-month follow-up. The caregivers rated the teleconsultation process as highly acceptable and anecdotally reported "easier family meals" (p. 39) after implementation of the intervention (Bloomfield et al., 2019). Tomlinson, Gore, and McGill (2018) conducted a systematic review of studies using telehealth technology to teach individuals how to implement applied behavior analytic procedures. Across 20 studies, at least 27 consultants worked with 113 consultees through telehealth technology to teach them how to implement a variety of assessment (e.g., functional analysis) and intervention (e.g., differential reinforcement) techniques. Unfortunately, most of the studies were not reviewed favorably in terms of client or consultee outcomes; however, this may be due to poor specification of a consultative model to structure delivery of services. For example, whereas Fischer et al. (2017) implemented a problem-solving consultation model, other studies did not describe a structured model (e.g., three 15-min training sessions; Hay-Hansson & Eldevik, 2013; 3-h group training; Alnemary, Wallace, Symon, & Barry, 2015). These findings highlight the need for consultants to be intentional about the format and scope of their consultative services to promote the highest level of consultee and client outcomes. Notably, when social validity of the teleconsultation process was assessed, consultees reported very high levels of acceptability, with the exception of one study, Alnemary et al. (2015), wherein participants cited technical difficulties as a barrier. As the teleconsultation literature continues to grow, more interesting applications are being explored.
For example, Fischer, Bloomfield, Clark, McClelland, and Erchul (2019) used telepresence robots rather than traditional smartphones and tablet-laptop computers to provide teleconsultants with mobility around the space in which they were consulting. Despite a higher cost relative to the more traditional options, telepresence robots may improve a teleconsultant’s adaptability and ability to respond to changes in the consultative environment without the need for consultee assistance (e.g., requesting a consultee to rotate a device to provide a better view). Others have collected data to support the use of telehealth technology to conduct systematic direct observations (Fischer, Dart, Lehman, Polakoff, & Wright, 2019; King, Bloomfield, Fischer, Dart, & Radley, 2021), providing consultants with evidence to support the equivalency of this practice compared
to traditional in-vivo systematic direct observation. Finally, researchers are asking which variables are most influential in modifying the acceptability of teleconsultation for consultants. Schultz et al. (2018) found that school psychologists viewed teleconsultation more favorably when commute times were more than 30 min, when they were more familiar with the consultee, and when the client concern to be addressed was not severe. Most recently, King, Bloomfield, Wu, and Fischer (2021) conducted a systematic review of the school-based teleconsultation literature, identifying 13 studies that had been published on the topic. They characterized this body of literature along a number of dimensions such as participant characteristics, intervention, and evidence standards, and in general concluded that, although small, the evidence base supports teleconsultation as an effective and acceptable strategy. The authors also highlighted the variability in consultant-consultee physical distance within the teleconsultative relationship (i.e., range = 7–305 miles), citing literature supporting a distance of 22 miles as the minimum at which teleconsultation offers more benefits for its cost compared to face-to-face service delivery (Ruskin et al., 2004). Physical distance is just one consideration that might impact a service provider's decision to use teleconsultation; however, there are a number of other practical considerations that warrant discussion prior to adoption.
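The cost-benefit threshold cited above can be expressed as a trivial decision helper. The constant and function names are our own illustration; as the text notes, distance is only one factor among many in the decision to adopt teleconsultation.

```python
# Minimum consultant-consultee distance at which teleconsultation
# reportedly offers more benefits for its cost than face-to-face
# service delivery (Ruskin et al., 2004).
COST_BENEFIT_THRESHOLD_MILES = 22

def teleconsultation_favored(distance_miles):
    """Return True when the distance meets or exceeds the cited
    cost-benefit threshold (one consideration among several)."""
    return distance_miles >= COST_BENEFIT_THRESHOLD_MILES

print(teleconsultation_favored(7))    # False
print(teleconsultation_favored(305))  # True
```

The example distances (7 and 305 miles) are the endpoints of the range reported by King, Bloomfield, Wu, and Fischer (2021).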

Rapport building in teleconsultation

Rapport is "the spontaneous, conscious, feeling of harmonious responsiveness that promotes the development of a constructive therapeutic alliance" (Goldstein & Glueck, 2016, p. 205). Further, therapeutic alliance can be defined as the relationship that is formed between a provider and a consultee or client with mutual agreement to collaborate toward beneficial outcomes for the individual receiving treatment (Goldstein & Glueck, 2016). Building rapport is imperative, particularly in teleconsultation, because it creates a foundation of trust between the service provider and consultee, improves the likelihood of individuals returning for follow-up or future sessions, and increases session comfort, allowing consultees to take a more active role in their own care (Dang, Westbrook, Njue, & Giordano, 2017). Building this foundation of trust allows the service provider-consultee relationship to be fostered and creates space for open, engaging conversations; increases consultee compliance; supports health literacy; and results in better outcomes and overall greater levels of consultee satisfaction in the therapeutic service delivery process (Dawson-Rose et al., 2016;
Nouri, Khoong, Lyles, & Karliner, 2020). Notably, many first-time consultees require follow-up appointments and sessions, and building positive rapport improves the likelihood of individuals returning for those appointments (Dang et al., 2017). Developing a warm and positive relationship between consultant and consultee from a distance can be difficult. To increase trust and build a positive relationship with the consultee, service providers can use visuals during consultation sessions to connect with consultees, exaggerate facial cues when speaking to ensure clear nonverbal communication, rely more on verbal communication than on subtle nonverbal cues during conversations, and make "eye contact" through the camera rather than the screen while in the teleconsultation session (Matheson, Bohon, & Lock, 2020). Although the process of building rapport to foster therapeutic alliance has often been viewed as challenging, professionals delivering services through teleconsultation may need to rely on strong consultee rapport to drive effective services to their clients. Therefore, it is recommended that consultants be intentional about assessing their rapport with consultees. This can be done formally, using assessments such as the Therapeutic Alliance Scale for Caregivers and Parents (Accurso, Hawley, & Garland, 2013), which can be modified for noncaregiver consultees. Consultants may choose to use such a measure during the consultation process or as a summative assessment to inform future rapport-building efforts. Additionally, consultants should discuss matters of culture with consultees and suggest practices and intervention strategies that align with the cultural values of consultees and the clients whom they serve.
A proactive discussion with consultees before beginning the teleconsultation process would be beneficial for identifying any strengths and weaknesses that can be used to enhance the effectiveness of the teleconsultation relationship. Beaulieu and Jimenez‐Gomez (2022) proposed a useful framework for culturally responsive practice and encouraged behavior analysts to engage in self-assessment of cultural factors that may impact service delivery.

Confidentiality and privacy in teleconsultation

The terms "confidentiality" and "privacy" are often used interchangeably in professional practice but are distinct. Confidentiality is the right to maintain private information divulged in the course of a professional relationship, whereas privacy refers to the legal right to control access to any personal
information (Folkman, 2000). As such, consultants are responsible for both the confidentiality and privacy of the information shared by the consultees they serve through any means of service delivery. For the purposes of this chapter, we have reviewed the Ethics Code for Behavior Analysts of the Behavior Analyst Certification Board (BACB, 2020; hereafter referred to as "BACB Code") to incorporate the most up-to-date professional standards. Behavior analysts have a duty to conduct professional and ethical practice in alignment with the BACB Code. With the COVID-19 pandemic, unique ethical challenges emerged, especially in regard to teleconsultation, telehealth, and other forms of virtual service delivery. This change led professional organizations such as the National Association of Social Workers, the National Association of School Psychologists, and the American Psychological Association to revise their respective codes of ethics and provide specific practice guidelines. The most recent BACB Code, which replaced the Professional and Ethical Compliance Code for Behavior Analysts (Behavior Analyst Certification Board, 2014), was also updated to reflect these changes and includes the following statement: The BACB recognizes that behavior analysts may have different professional roles. As such, behavior analysts are required to comply with all applicable laws, licensure requirements, codes of conduct/ethics, reporting requirements (e.g., mandated reporting, reporting to funding sources or licensure board, self-reporting to the BACB, reporting instances of misrepresentation by others), and professional practice requirements related to their various roles.

As such, all practitioners who are providing teleconsultation services as a Board Certified Behavior Analyst (BCBA) or Board Certified Assistant Behavior Analyst (BCaBA), or those who have applied for such certifications, should adhere to the BACB Code. Moreover, any individual providing ABA services should adhere to the BACB Code as well as any other applicable professional codes; for example, a school psychologist providing behavioral services should adhere to both the BACB Code and the National Association of School Psychologists standards. Note that the BACB Code includes four core principles, which provide a framework for its ethics standards. Both confidentiality and privacy are mentioned in regard to the second core principle, "Treat Others with Compassion, Dignity, and Respect." The Code also explains that the four core principles apply to all service delivery settings and modes of communication (e.g., in person; in writing; via phone, email, text message, or video conferencing; BACB, 2020).

A behavior analyst must maintain confidentiality and privacy beginning the moment a client discloses any private information. As a consultant, you must be cautious, especially when using video conferencing software or electronic communication, and ensure that any data shared are encrypted and the connection is private in order to maintain confidentiality (AACAP, 2008). Specific organizations or service settings may have their own guidelines or codes regarding confidentiality. The scope of confidentiality includes all service delivery, documentation and data, and verbal, written, or electronic communication. For example, when providing teleconsultation services to a consultee in a school setting, personally identifiable information and educational records for the target student cannot be discussed with your consultee unless parental consent has been obtained (FERPA, 1974). With parental consent, it would be acceptable to exchange student identifying information, assessment scores, or specific individual data. Without parental consent, the specific student should remain anonymous and no identifying information should be shared through any method of delivery. In teleconsultation within school settings, you may work with a teacher to address a classroom management issue or work with school administrators to address a school-wide behavior concern. Your focus is on supporting the teacher, or consultee, in providing the intervention, which may or may not relate to a specific student. As privacy is a legal matter, you must always adhere to the laws of your professional community and practicing state. This includes specific governance regarding the protection of personal information, namely encrypted email communication and security requirements for technology used in virtual service delivery. In order to be in compliance with the Health Insurance Portability and Accountability Act of 1996 (U.S.
Congress, 1996), as required when providing behavior-analytic services, you must use specific videoconferencing and messaging software. Readers are referred to Dart, Whipple, Pasqua, and Furlow (2016) for another account of the legal and ethical issues present in telehealth practice.

Recommendations to protect consultee privacy and confidentiality

For electronic communication, the BACB Code states that a behavior analyst should prevent the accidental or inadvertent disclosure of confidential information (BACB, 2020; refer to "2.03 Protecting Confidential Information" in the BACB Code); this standard applies to all communication in teleconsultation. Any emails containing specific information or data should be encrypted
or password-protected to prevent manipulation or illegal access to your consultees' personal information (Rios, Kazemi, & Peterson, 2018). Skype, Google Hangouts, and FaceTime are not HIPAA compliant and should not be used in teleconsultation. Software options such as Zoom and Microsoft Teams are often considered HIPAA-compliant due to their willingness to establish business associate agreements (BAAs), made between HIPAA-covered entities (e.g., service providers) and their business partners, that specify each party's responsibilities for how protected health information (PHI) covered under HIPAA will be handled. However, keep in mind that the federal government does not review or approve entities as HIPAA-compliant. This is merely a term that organizations adopt to indicate that they have taken the necessary steps to comply with the federal law. Ultimately, it is the responsibility of the behavior analyst to ensure that any business associate interfacing with PHI does in fact comply. For example, two popular cloud storage platforms, Google Drive and Box, allow users to obtain a BAA if requested (Rios et al., 2018). As mentioned previously, this agreement helps prevent the unwanted sharing of any confidential information that is transmitted between the consultant and consultee through methods such as email, file sharing, and videoconferencing. Beyond technological confidentiality and privacy concerns, behavior analysts operating in a teleconsultation role may want to ensure that their consultees have a quiet and private space in which to meet. It is good practice to begin meetings by asking if other people are in the room with the consultee before sharing information. It is also helpful to establish a safe word you can both remember that, if said, will indicate that someone has entered the room and no additional information should be shared.
Consultants and consultees attempting to join meetings from public places (e.g., a coffee shop) should be asked to move to a private setting to ensure that no breach of confidentiality occurs. The use of headphones does not address this issue, as both parties will need to share their own information, which could be heard by those nearby.

Recommendations for teleconsultation service delivery

Before offering teleconsultation services, the consultant must ensure their own suitability as well as the suitability of the consultee for such a service. The consultant and consultee must both be knowledgeable about the software and confident in their ability to participate in the virtual service. Likewise, a consultant must determine whether teleconsultation is an appropriate
method of service delivery for a particular consultee. For example, a consultee without internet access or a video camera may not be able to benefit from teleconsultation. Also, as mentioned previously, you want to ensure that the environment is distraction-free and private (American Psychological Association, 2013). Given the breadth of information available regarding equipment and software, preparing for consultation sessions, and adhering to ethical codes of professional practice in teleconsultation, we have provided practical recommendations for problem-solving teleconsultation service delivery. These recommendations are not an exhaustive list, and we encourage you to consult the ethical codes for your respective professional community as well as any applicable laws governing your practice as a professional. It is important to note that in any situation, you must act in the best interest of your client and, to the best of your ability, adhere to both laws and ethical guidelines, and seek supervision in the event of an ethical dilemma or legal concern.

Equipment and software

As noted earlier, there are specific requirements for videoconferencing and storage software used in the provision of teleconsultation to protect the privacy and confidentiality of protected health information. Other considerations are that the consultant and consultee have access to a reliable internet connection, a device equipped with a camera and microphone that is able to access the internet, and preferred videoconferencing software. Fischer, Erchul, and Schultz (2018) and Fischer, Schultz, et al. (2018) conducted a critical review of the videoconferencing software used in school-based teleconsultation and identified five acceptable options: Adobe Connect, Cisco WebEx, FaceTime, Polycom, and Skype. Notably, Zoom, a very popular videoconferencing option, was not identified in the review but is likely used widely for this purpose. The review selected Adobe Connect and Polycom, two less popular options, as the most comprehensive in terms of device compatibility, integration of cloud storage, and other features such as chat and screen sharing. It is important for readers to keep in mind how rapidly technological options are added, altered, and lost. The review by Fischer et al., despite being less than 10 years old, may not accurately reflect the landscape of current videoconferencing options. Readers are encouraged to use the considerations presented here and in other sources to identify the most appropriate options currently available to them.

The hardware in most laptops and some desktop computers includes a built-in video camera as a standard feature; however, if this is not the case, cost-effective standalone cameras that can be clipped to the computer screen are widely available. Tablet computers (e.g., iPad) are also usually equipped with the requisite hardware, as are most smartphones currently available on the market. Headphones often come with an embedded microphone as well. It will be helpful to ask your consultee about the availability of these resources before considering teleconsultation or any telehealth service. As for software, we recommend Box or Google Drive (with a BAA obtained). As a tip, the entire Microsoft suite (e.g., Outlook, Teams, OneDrive) is considered to be HIPAA-compliant, so if your organization has access to these platforms, we recommend using them. While Google Drive is free, a BAA must be requested and obtained for information to be secure and protected from unauthorized access. Because all communication with your consultee will be electronic, it is imperative that you invest in quality software for email, videoconferencing, and data storage whenever possible, but note that there are free HIPAA-compliant options available such as Zoom and Microsoft Teams. Be aware that the free versions carry limitations, such as a cap on the number of participants or the length of a call. Be sure to check for these limitations prior to scheduling any consultation meeting. Technology can be unpredictable, and problems may arise. Aside from ensuring that you have a strong and secure internet connection, we recommend testing all equipment and familiarizing yourself with your software before beginning teleconsultation services.
It may also be helpful to spend some time in your first meeting with a consultee to make sure they are familiar with the features of the software and expectations regarding electronic communication, namely (a) screen sharing functions, (b) sharing files, (c) giving control of your screen to the consultee, (d) controlling audio and video feeds, (e) using a chat box, (f) using a waiting room function, and (g) using virtual backgrounds. Such preparation reduces the likelihood that either party will experience technological issues and will promote more effective and efficient meetings. In the event a technological issue does occur, familiarity with the software functions reduces time spent fixing the issue and allows you to assist your consultee as needed. If you must use any virtual assessment techniques, be sure to practice beforehand to reduce the likelihood of technology issues. With many videoconferencing platforms including screen share and screen control functions, most assessments can be modified to be delivered virtually. For example,
if you would like the consultee to complete a measure of intervention acceptability (e.g., the Usage Rating Profile-Intervention Revised [URP-IR]; Chafouleas, Briesch, Neugebauer, & Riley-Tillman, 2011), you may either email the file, share it in the chat box, or share your screen and give the consultee control of your screen to select their answers. You may also choose to share a measure of consultation acceptability using the methods mentioned above.
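Summarizing consultee responses to a rating-scale measure is straightforward; the sketch below computes a mean item rating. It assumes a 6-point agreement scale and ignores subscale structure and reverse-scored items, so treat it as an illustration rather than the published scoring procedure for any specific measure.

```python
def mean_rating(responses, scale_min=1, scale_max=6):
    """Mean item rating for a Likert-type acceptability measure.

    Illustrative only: published measures may include subscales and
    reverse-scored items that require their own scoring rules.
    """
    if not responses:
        raise ValueError("no responses provided")
    if any(not scale_min <= r <= scale_max for r in responses):
        raise ValueError("rating outside the scale range")
    return sum(responses) / len(responses)

print(mean_rating([5, 6, 4, 5]))  # 5.0
```

A summary score like this can be computed immediately after the consultee submits the measure, making it easy to review results together during the same session.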

Scheduling and planning sessions

Within a teleconsultation framework, we must rely on the consultee to remember to log into the videoconferencing software on the correct day at the correct time. However, situations may arise and sometimes sessions may be missed by accident. We recommend problem-solving with your consultee ways to help remember scheduled sessions. For example, you can send a text reminder in advance of the session at a time agreed upon by both parties, an email reminder that can be scheduled to be sent automatically, or some other means of reminding the consultee and consultant about the scheduled session. When scheduling sessions, try to identify a time that is free from distractions and competing responsibilities. Allow a cushion of time before and after the session to ensure that if the meeting runs late, you or your consultee will not have to rush to your next engagement. Preparing an agenda may help to keep sessions on time and avoid any conflicts. Sending the agenda before the meeting allows the consultee to review it and add items for discussion. Sticking to the four-step problem-solving model will help to keep sessions organized and purposeful. We recommend making video records of each session so that you are able to review the meeting as well as monitor the integrity of the consultation process as it progresses with your consultee.
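An automated email reminder like the one suggested above could be composed with Python's standard library; actually sending it would require access to an SMTP server, so the sketch below only builds the message. The addresses and wording are hypothetical, and the body deliberately omits client-identifying details in keeping with the confidentiality guidance earlier in the chapter.

```python
from datetime import datetime
from email.message import EmailMessage

def build_reminder(consultee_address, session_time):
    """Compose (but do not send) a session reminder email. Keep
    client-identifying details out of the body to protect confidentiality."""
    msg = EmailMessage()
    msg["To"] = consultee_address
    msg["Subject"] = "Reminder: teleconsultation session"
    msg.set_content(
        "This is a reminder of our consultation session on "
        f"{session_time:%A, %B %d at %I:%M %p}. The agenda is in our "
        "shared folder. Please reply if you need to reschedule."
    )
    return msg

reminder = build_reminder("consultee@example.org",
                          datetime(2023, 3, 14, 15, 30))
print(reminder["Subject"])  # Reminder: teleconsultation session
```

A message built this way can be handed to `smtplib` (ideally over an encrypted connection) or to whatever HIPAA-compliant mail system your organization uses.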

Communicating with consultees
We emphasize again that using technology in service delivery requires being mindful when sending personally identifiable information and data over the internet. It is important to communicate to your consultee which technology you plan to use for communication (e.g., Google Voice, email, Microsoft Teams), whether there are specific times when you may be reached, and your preferred contact method. Establishing a Google Voice number or a similar service for professional practice is a good idea. This ensures that personal and professional contacts are separated and reduces the possibility of inadvertently disclosing confidential information.




In addition, consider the recommendations presented earlier in this chapter about encrypting emails and/or password protecting any files related to your service delivery. It may be helpful to establish a shared folder in data storage software where the consultant and consultee can deposit important documents as well as collected data. For example, you might share a Box folder, invite the consultee to collaborate, and store needed resources, notes from previous meetings, and copies of any worksheets or checklists completed by the consultant or consultee. This method may be an alternative to sending emails; however, to ensure the privacy and confidentiality of data, you may choose to password protect documents within the shared folder.

Establishing roles and responsibilities
When beginning a new consultative relationship, take some time during the first meeting to establish roles and responsibilities for both the consultant and the consultee, allowing the consultee to verbalize their expectations for teleconsultation. A few examples of a consultant's roles and responsibilities are sending email reminders about meetings, preparing an agenda in advance of a meeting, conducting observations or assessments of teacher or student behaviors, and acting as a coach to the consultee to promote acquisition of a skill or solution to a problem. For a consultee, roles and responsibilities may include open and honest communication, cooperation in trying proposed strategies or interventions, collecting data between consultation meetings, and remaining open to the consultation process. We recommend recording responses in a shared document so that they can be referenced as needed. Establishing clear and well-defined roles and responsibilities at the onset of the consultative relationship diminishes the possibility of confusion over expectations and of disagreements or misinformation regarding the goals of teleconsultation. At the end of each session, you should review what was discussed with the consultee and restate any action steps for both consultant and consultee; for example, the consultant will conduct an observation and the consultee is responsible for collecting self-monitoring data. Finally, with the field of teleconsultation expanding in recent years, the recommendations above are not meant to be exhaustive. They will evolve and change as new concerns arise and as organizations update their ethical codes and standards. To help put these recommendations into practice, we discuss an example of using the problem-solving model in teleconsultation.


Conclusion
This chapter defined teleconsultation, reviewed the evidence supporting problem-solving teleconsultation, provided practical recommendations, and outlined considerations for ethical practice. Consultation is an indirect form of service delivery in which change in the client's behavior is driven through the consultant's interaction with a third party, the consultee (Erchul & Martens, 2002). Consultation has been extensively researched, and systematic reviews and meta-analyses have consistently identified problem-solving consultation as an evidence-based practice (Hurwitz et al., 2015; Medway & Updyke, 1985; Sheridan et al., 1996). Teleconsultation shares the same definition as consultation apart from the service delivery format being telehealth. Through teleconsultation, services are provided via mobile devices, video conferencing, and email. Teleconsultation allows consultants to reach many consultees they otherwise could not serve due to barriers such as health challenges, geographic location, and lengthy transportation to and from settings. However, because teleconsultation requires access to technology, this modality may be out of reach for families from low socioeconomic backgrounds and for schools and centers without the necessary resources.

References
Accurso, E. C., Hawley, K. M., & Garland, A. F. (2013). Psychometric properties of the therapeutic alliance scale for caregivers and parents. Psychological Assessment, 25, 244–252. https://doi.org/10.1037/a0030551.
Alnemary, F. M., Wallace, M., Symon, J. B., & Barry, L. M. (2015). Using international videoconferencing to provide staff training on functional behavioral assessment. Behavioral Interventions, 30, 73–86. https://doi.org/10.1002/bin.1403.
American Academy of Child and Adolescent Psychiatry (AACAP). (2008). Practice parameter for telepsychiatry with children and adolescents. Journal of American Academy of Child and Adolescent Psychiatry, 47, 1468–1483. https://doi.org/10.1097/CHI.0b013e31818b4e13.
American Psychological Association. (2013). Guidelines for the practice of telepsychology. https://www.apa.org/practice/guidelines/telepsychology.
Baker, D. C., & Bufka, L. F. (2011). Preparing for the telehealth world: Navigating legal, regulatory, reimbursement, and ethical issues in an electronic age. Professional Psychology: Research and Practice, 42, 405–411. https://doi.org/10.1037/a0025037.
Beaulieu, L., & Jimenez-Gomez, C. (2022). Cultural responsiveness in applied behavior analysis: Self-assessment. Journal of Applied Behavior Analysis, 55, 337–356.
Behavior Analyst Certification Board. (2014). Professional and ethical compliance code for behavior analysts. Littleton, CO: Author.
Behavior Analyst Certification Board. (2020). Ethics code for behavior analysts. https://www.bacb.com/wp-content/uploads/2022/01/Ethics-Code-for-Behavior-Analysts-220316-2.pdf.




Bergan, J. R., & Kratochwill, T. R. (1990). Behavioral consultation and therapy. New York, NY: Plenum Press.
Bice-Urbach, B. J., & Kratochwill, T. R. (2016). Teleconsultation: The use of technology to improve evidence-based practices in rural communities. Journal of School Psychology, 56, 27–43. https://doi.org/10.1016/j.jsp.2016.02.001.
Bloomfield, B. S., Fischer, A. J., Clark, R. R., & Dove, M. B. (2019). Treatment of food selectivity in a child with avoidant/restrictive food intake disorder through parent teleconsultation. Behavior Analysis in Practice, 12, 33–43. https://doi.org/10.1007/s40617-018-0251-y.
Boisvert, M., Lang, R., Andrianopoulos, M., & Boscardin, M. L. (2010). Telepractice in the assessment and treatment of individuals with autism spectrum disorders: A systematic review. Developmental Neurorehabilitation, 13, 423–432. https://doi.org/10.3109/17518423.2010.499889.
Carter, D. R., & Van Norman, R. K. (2010). Class-wide positive behavior support in preschool: Improving teacher implementation through consultation. Early Childhood Education Journal, 38, 279–288. https://doi.org/10.1007/s10643-010-0409-x.
Chafouleas, S. M., Briesch, A. M., Neugebauer, S. R., & Riley-Tillman, T. C. (2011). Usage rating profile—Intervention (Revised). Storrs, CT: University of Connecticut.
Dang, B. N., Westbrook, R. A., Njue, S. M., & Giordano, T. P. (2017). Building trust and rapport early in the new doctor-patient relationship: A longitudinal qualitative study. BMC Medical Education, 17, 1–10. https://doi.org/10.1186/s12909-017-0868-5.
Dart, E. H., Whipple, H. M., Pasqua, J. L., & Furlow, C. M. (2016). Legal, regulatory, and ethical issues in telehealth technology. In J. K. Luiselli, & A. J. Fischer (Eds.), Computer-assisted and web-based innovations in psychology, special education, and health (pp. 339–364). New York, NY: Elsevier/Academic Press.
Dawson-Rose, C., Cuca, Y. P., Webel, A. R., Báez, S. S. S., Holzemer, W. L., Rivero-Méndez, M., … Lindgren, T. (2016). Building trust and relationships between patients and providers: An essential complement to health literacy in HIV care. Journal of the Association of Nurses in AIDS Care, 27, 574–584. https://doi.org/10.1016/j.jana.2016.03.001.
Erchul, W. P. (2009). Gerald Caplan: A tribute to the originator of mental health consultation. Journal of Educational and Psychological Consultation, 19, 95–105. https://doi.org/10.1080/10474410902888418.
Erchul, W. P., Grissom, P. F., & Getty, K. C. (2014). Studying interpersonal influence within school consultation: Social power base and relational communication perspectives. In W. P. Erchul, & S. M. Sheridan (Eds.), Handbook of research in school consultation (pp. 305–334). Routledge.
Erchul, W. P., & Martens, B. K. (2002). School consultation: Conceptual and empirical bases and practices (2nd ed.). New York: Kluwer Academic/Plenum.
Erchul, W. P., & Ward, C. S. (2016). Problem-solving consultation. In S. R. Jimerson, M. K. Burns, & A. M. VanDerHeyden (Eds.), Handbook of response to intervention (pp. 73–86). New York, NY: Springer.
FERPA. (1974). Family Educational Rights and Privacy Act of 1974, 20 U.S.C. § 1232g. U.S. Department of Education.
Fischer, A. J., & Bloomfield, B. S. (2019). Using technology to maximize engagement and outcomes in family–school partnerships. In S. A. Garbacz (Ed.), Establishing family–school partnerships in school psychology. New York, NY: Routledge. https://doi.org/10.4324/9781138400382-9.
Fischer, A. J., Bloomfield, B. S., Clark, R. R., McClelland, A. L., & Erchul, W. P. (2019). Increasing student compliance with teacher instructions using telepresence robot problem-solving teleconsultation. International Journal of School and Educational Psychology, 7, 158–172. https://doi.org/10.1080/21683603.2018.1470948.


Fischer, A. J., Dart, E. H., Lehman, E., Polakoff, B., & Wright, S. J. (2019). A comparison of in-vivo and videoconference momentary time sampling observations of on-task behavior. Assessment for Effective Intervention, 45, 3–13. https://doi.org/10.1177/1534508418777846.
Fischer, A. J., Dart, E. H., Radley, K. C., Richardson, D., Clark, R., & Wimberly, J. (2017). An evaluation of the effectiveness and acceptability of teleconsultation. Journal of Educational and Psychological Consultation, 27(4), 437–458. https://doi.org/10.1080/10474412.2016.1235978.
Fischer, A. J., Erchul, W. P., & Schultz, B. K. (2018). Teleconsultation as the new frontier of educational and psychological consultation: Introduction to the special issue. Journal of Educational and Psychological Consultation, 28, 249–254. https://doi.org/10.1080/10474412.2018.1425880.
Fischer, A. J., Schultz, B. K., Collier-Meek, M. A., Zoder-Martell, K. A., & Erchul, W. P. (2018). A critical review of videoconferencing software to support school consultation. International Journal of School and Educational Psychology, 6, 12–22. https://doi.org/10.1080/21683603.2016.1240129.
Folkman, S. (2000). Privacy and confidentiality. In B. D. Sales, & S. Folkman (Eds.), Ethics in research with human participants (pp. 49–57). American Psychological Association.
Frank, J. L., & Kratochwill, T. R. (2014). School-based problem-solving consultation: Plotting a new course for evidence-based research and practice in consultation. In W. P. Erchul, & S. M. Sheridan (Eds.), Handbook of research in school consultation (2nd ed., pp. 19–39). New York, NY: Routledge.
Goldstein, F., & Glueck, D. (2016). Developing rapport and therapeutic alliance during telemental health sessions with children and adolescents. Journal of Child and Adolescent Psychopharmacology, 26, 204–211. https://doi.org/10.1089/cap.2015.0022.
Gutkin, T. B., & Conoley, J. C. (1990). Reconceptualizing school psychology from a service delivery perspective: Implications for practice, training, and research. Journal of School Psychology, 28, 203–223. https://doi.org/10.1016/0022-4405(90)90012-V.
Hay-Hansson, A. W., & Eldevik, S. (2013). Training discrete trials teaching skills using videoconference. Research in Autism Spectrum Disorders, 7, 1300–1309. https://doi.org/10.1016/j.rasd.2013.07.022.
Hurwitz, J. T., Kratochwill, T. R., & Serlin, R. C. (2015). Size and consistency of problem-solving consultation outcomes: An empirical analysis. Journal of School Psychology, 53, 161–178. https://doi.org/10.1016/j.jsp.2015.01.001.
King, H. C., Bloomfield, B., Fischer, A. J., Dart, E., & Radley, K. (2021). A comparison of digital observations of students from video cameras and aerial drones. Journal of Educational and Psychological Consultation, 31, 360–381. https://doi.org/10.1080/10474412.2020.1744446.
King, H. C., Bloomfield, B. S., Wu, S., & Fischer, A. J. (2021). A systematic review of school teleconsultation: Implications for research and practice. School Psychology Review, 51, 237–256. https://doi.org/10.1080/2372966X.2021.1894478.
Matheson, B. E., Bohon, C., & Lock, J. (2020). Family-based treatment via videoconference: Clinical recommendations for treatment providers during COVID-19 and beyond. International Journal of Eating Disorders, 53, 1142–1154. https://doi.org/10.1002/eat.23326.
Medway, F. J., & Updyke, J. F. (1985). Meta-analysis of consultation outcome studies. American Journal of Community Psychology, 13(5), 489–505.
Nouri, S., Khoong, E. C., Lyles, C. R., & Karliner, L. (2020). Addressing equity in telemedicine for chronic disease management during the Covid-19 pandemic. NEJM Catalyst Innovations in Care Delivery, 1. https://doi.org/10.1056/CAT.20.0123.
Rios, D., Kazemi, E., & Peterson, S. M. (2018). Best practices and considerations for effective service provision via remote technology. Behavior Analysis: Research and Practice, 18, 277–287. https://doi.org/10.1037/bar0000072.
Ruskin, P. E., Silver-Aylaian, M., Kling, M. A., Reed, S. A., Bradham, D. D., Hebel, J. R., … Hauser, P. (2004). Treatment outcomes in depression: Comparison of remote treatment through telepsychiatry to in-person treatment. American Journal of Psychiatry, 161, 1471–1476. https://doi.org/10.1176/appi.ajp.161.8.1471.
Schultz, B. K., Zoder-Martell, K. A., Fischer, A., Collier-Meek, M. A., Erchul, W. P., & Schoemann, A. M. (2018). When is teleconsultation acceptable to school psychologists? Journal of Educational and Psychological Consultation, 28, 279–296. https://doi.org/10.1080/10474412.2017.1385397.
Sheridan, S., Welch, M., & Orme, S. (1996). Is consultation effective? A review of outcome research. Remedial and Special Education, 17, 341–354. https://doi.org/10.1177/074193259601700605.
Swanson, M., MacKay, M., Yu, S., Kagiliery, A., Bloom, K., & Schwebel, D. C. (2020). Supporting caregiver use of child restraints in rural communities via interactive virtual presence. Health Education & Behavior, 47, 264–271. https://doi.org/10.1177/1090198119889101.
Tomlinson, S. R., Gore, N., & McGill, P. (2018). Training individuals to implement applied behavior analytic procedures via telehealth: A systematic review of the literature. Journal of Behavioral Education, 27, 172–222. https://doi.org/10.1007/s10864-018-9292-0.
U.S. Congress. (1996). Health insurance portability and accountability act of 1996. Washington, DC: U.S. Government Printing Office.


CHAPTER 12

Telehealth-delivered family support☆
Kelly M. Schieltz (a), Matthew J. O’Brien (a), and Loukia Tsami (b)

(a) The University of Iowa Stead Family Children’s Hospital, Carver College of Medicine, Stead Family Department of Pediatrics, Iowa City, IA, United States
(b) University of Houston-Clear Lake, Center for Autism and Developmental Disabilities, Houston, TX, United States

Introduction
Since the late 1800s, various forms of technology have been used to provide healthcare support from a distance. Telephones were used in 1879 between physicians and families to determine the need for in-person medical house calls (“Practice by Telephone,” 1879); radios were used in the 1920s between Alaska Native villages and medical providers in Anchorage to determine whether days-long travel for in-person medical care was necessary (Native Voices, n.d.); and the National Aeronautics and Space Administration (NASA) remotely monitored astronauts’ physiological responses during Project Mercury space flights in the 1960s to determine the effects of varying durations of space flight on astronaut functioning (Carpentier et al., 2018). As these examples illustrate, remote healthcare has evolved alongside technology. Past predictions that may have seemed like science fiction at the time are seemingly coming true. For example, the inventor and publisher Hugo Gernsback predicted in the 1920s that a physician would one day have the ability to feel a patient at a distance (Gernsback, 1925). More specifically, Gernsback foretold that a physician would observe the patient through a television, manipulate the controls of a device in the patient’s room, and obtain measures such as sound and heat to assist with diagnosis. Although the use of technology to provide healthcare support does not currently work in the manner Gernsback predicted, many aspects of his prediction are observed

☆ The authors express their appreciation to Margaret Uwayo, Ph.D., for providing feedback on the international and cultural considerations subsection during manuscript preparation.

Applied Behavior Analysis Advanced Guidebook https://doi.org/10.1016/B978-0-323-99594-8.00012-X

Copyright © 2023 Elsevier Inc. All rights reserved.


with additional technological advances including the computer, the Internet, and cellular networks. These advances have allowed technology-supported healthcare to expand from acute to chronic conditions and service delivery to migrate from satellite clinics to homes (Dorsey & Topol, 2016). With advances in technology leading to expanded remote healthcare services, telehealth as a service modality has been applied across most healthcare specialties. Prior to the COVID-19 pandemic, telehealth was a significant and rapidly growing component of healthcare in the United States. For one group of insured enrollees, telehealth visits grew from 205 annually in 2005 to over 200,000 in 2017 (Barnett, Ray, Souza, & Mehrotra, 2018). Additionally, 66% of consumers indicated willingness to use telehealth for their healthcare needs (American Well, 2019), and all state Medicaid plans allowed at least some services to be delivered via telehealth (Chu, Peters, De Lew, & Sommers, 2021). When the COVID-19 pandemic resulted in stay-at-home orders, the use of telehealth to provide and maintain healthcare service delivery grew exponentially (Chu et al., 2021) following relaxed regulatory requirements by the Office for Civil Rights at the US Department of Health and Human Services (2020). As the COVID-19 pandemic shifts course and the public health emergency is lifted, some regulatory agencies and organizations have made permanent changes in support of telehealth (e.g., Centers for Medicare & Medicaid Services, 2022). Like most healthcare specialties, behavior analysis used telehealth prior to the COVID-19 pandemic (e.g., Barretto, Wacker, Harding, Lee, & Berg, 2006). However, its use was limited, leaving many behavior analysts uncertain about how to provide behavior analytic services via telehealth at the outset of the pandemic (Schieltz & Wacker, 2020). Since the start of the pandemic, telehealth has been rapidly and widely adopted as a service delivery option for individuals and families with medical necessity (Colombo, Wallace, & Taylor, 2020). In this chapter, we provide a summary of the telehealth literature in behavior analysis, followed by a discussion of critical practice considerations based on the existing evidence supporting the use of telehealth within a clinical context.

Research evidence supporting the use of telehealth in behavior analysis
The first published demonstration of the use of telehealth in behavior analysis came from Barretto and colleagues at the University of Iowa in 2006, who




showed that functional analyses of challenging behavior displayed by two young children with developmental disabilities could be successfully completed via videoconferencing. This demonstration was made possible by two factors: (a) a 2800-mile fiber-optic telecommunications system built across Iowa in 1987 that allowed high-speed data transmission across educational, healthcare, and government agencies, and (b) a multiyear grant from the National Library of Medicine awarded to The University of Iowa in 1996 to evaluate the impact of telehealth on the three pillars of healthcare: enhanced care, improved health and access, and reduced costs (Berwick, Nolan, & Whittington, 2008). During the four years of the Barretto et al. project (1997–2000), more than 75 telehealth evaluations were conducted, the majority comprising follow-up consultations after in-person clinical evaluations. As telehealth evaluations increased, in-person follow-up consultations steadily decreased over this same period. Subsequently, in 2005, over 200 telehealth descriptive assessments and initial screenings were conducted by the clinical team at The University of Iowa, resulting in cancellations of in-person clinic visits in almost 50% of cases because challenging behavior concerns were effectively addressed via the telehealth service option. Thus, at this initial juncture, the use of telehealth in behavior analysis for challenging behavior concerns appeared to be both effective and efficient. Since the initial study by Barretto et al. (2006), research on the use of telehealth as a service delivery modality for challenging behavior has steadily increased. Prior to the COVID-19 pandemic, most of the behavior analytic research focused on demonstrating the efficacy of telehealth to address challenging behavior displayed by children and adolescents with developmental disabilities (see Schieltz & Wacker, 2020, for a brief summary).
Specifically, the majority of children were six years of age or younger with a diagnosis of autism spectrum disorder (ASD). Multiple topographies of challenging behavior were frequently targeted for each child, often including some combination of aggression, destruction, self-injury, and tantrums. Assessment and treatment primarily combined functional analyses (FA; Iwata, Dorsey, Slifer, Bauman, & Richman, 1994) and functional communication training (FCT; Carr & Durand, 1985) targeting socially maintained challenging behavior. Although the home was the most common remote site in prepandemic studies, a clinic local to the child and their family was also commonly used for service. Parents (mostly mothers) were the primary implementers of assessment and treatment, with behavior analysts providing coaching in real time. With a few exceptions (see


Schieltz et al., 2018), treatment resulted in positive outcomes such as reductions in challenging behavior and concomitant increases in functional communication and compliance. Since the COVID-19 pandemic began, research on the use of telehealth in behavior analysis has skyrocketed. In fact, an informal Google Scholar search of the terms “telehealth” and “behavior analysis” across 10 behavior analytic[a] and pediatric[b] journals yielded almost twice as many publications between 2020 and March 2022 as in all years prior to 2020 (approximately 186 versus approximately 94 publications). In these last two years, research has maintained its primary focus on remote coaching of parent-implemented behavior assessment and treatment procedures to address the challenging behavior needs of their children (e.g., Andersen, Hansen, Hathaway, & Elson, 2021; Davis et al., 2022; Edelstein, Becraft, Gould, & Sullivan, 2022; Gerow et al., 2021; Lindgren et al., 2020; Monlux, Pollard, Bujanda Rodriguez, & Hall, 2022). Additionally, research has expanded to studies on coaching and training caregivers to conduct behavior analytic procedures to increase their child’s adaptive behavior skills (e.g., Drew et al., 2022; Ferguson, Dounavi, & Craig, 2022; Gerow, Radhakrishnan, Akers, McGinnis, & Swensson, 2021) and on providing direct therapeutic services to improve learning and increase basic early academic skills (e.g., Cihon et al., 2022; Ferguson et al., 2020; Mootz, Lemelman, Giordano, Winter, & Beaumont, 2022; Pellegrino & DiGennaro Reed, 2020; Tang, Falkmer, Chen, Bölte, & Girdler, 2021). In response to COVID-19, a large proportion of the research has focused on providing guidelines for transitioning services to telehealth (e.g., Araiba & Čolić, 2022; Bergmann et al., 2021; Gingles, 2022; Nohelty, Hirschfeld, & Miyake, 2021; Rodriguez, 2020; Yi & Dixon, 2021), documenting outcomes following the transition of services to telehealth (e.g., Awasthi et al., 2021; Britwum, Catrone, Smith, & Koch, 2020; Coon, Bush, & Rapp, 2022; Crockett, Becraft, Phillips, Wakeman, & Cataldo, 2020; Pollard, LeBlanc, Griffin, & Baker, 2021), and comparing in-person and telehealth services (e.g., Baumes, Čolić, & Araiba, 2020; Estabillo et al., 2022; Peterson et al., 2021). Although COVID-19 provided an impetus for determining how best to use telehealth for the provision of direct therapeutic services

[a] Journal of Applied Behavior Analysis, Journal of Behavioral Education, Behavior Analysis in Practice, Behavior Analysis: Research and Practice, Journal of Physical and Developmental Disabilities, Behavior Modification, Behavioral Interventions, Education & Treatment of Children.
[b] Pediatrics, Journal of Autism and Developmental Disorders.




that commonly target skill acquisition, this literature is in its infancy. Thus, time will tell whether a portion of these services will continue via telehealth or whether some of the challenges of direct service provision (e.g., the difficulty of maintaining one-on-one therapy without additional adult support) will preclude greater adoption of telehealth. Given that the strongest evidence in the telehealth literature continues to lie in providing coaching and training to caregivers to address the challenging behavior needs of their children, this will be the focus of the remaining sections of this chapter, informed by our own experiences and the Iowa telehealth model.

Development of the Iowa telehealth model for challenging behavior
The development of the Iowa telehealth model involved multiple decisions related to the type of service, equipment needs, required behavior analyst competencies and skills, and factors related to the child and caregiver(s) who would receive services. Wacker et al. (2016) provided a decision-making model for choosing between in-person and telehealth services. In this model, considerations included maintaining safety during visits, accessibility of necessary equipment, reimbursement for services, and determining the necessity of in-person support. Other considerations include the procedures likely to be implemented, the timing and dosage of services, and other factors necessary for the provision of equitable services (e.g., access to interpreters). The Iowa team had a long history of behavior analysts coaching caregivers in vivo to implement FA and FCT procedures with their young child in the home (Wacker et al., 1998, 2017). Thus, when transitioning services to telehealth, the initial goal was to replicate the in-vivo, in-home model to demonstrate telehealth as a viable service delivery modality. Young children with socially maintained challenging behavior were targeted because it was determined that safety was more likely to be maintained. A lending library of equipment (e.g., laptops, webcams) was developed, and services were provided at no cost to families because the initial projects received federal research funding. In-person support was provided in the initial feasibility study (Wacker et al., 2013a, 2013b) and was determined on an individual basis in subsequent studies (Lindgren et al., 2020). Across the majority of the Iowa telehealth studies, services were provided in 1-h weekly appointments lasting from a few months up to one to two years. All participating families were English speaking; thus, interpreters were not used.


Following enrollment in the Iowa telehealth projects, individual needs and modifications were determined for each family. However, procedures generally followed those outlined by Wacker et al. (2016) and O’Brien, Schieltz, Berg, McComas, and Wacker (2021), with several weeks devoted to determining equipment needs, testing and troubleshooting technology connections and issues, preparing the environment, and previewing expectations for subsequent appointments with the caregivers. During appointments in which FA and FCT procedures occurred, behavior analysts directly coached the caregivers on how to conduct the procedures with their child. Coaching followed the general structure described by Larsen, Schieltz, Barrett, and O’Brien (2022), with the goals being to (a) provide ongoing instructions to the caregivers on implementing the FA procedures and (b) gradually fade instructions on implementing the FCT procedures to demonstrate caregiver independence with the treatment procedures. Since the initial development of the Iowa telehealth model, variations have been developed to evaluate its feasibility with a briefer service model and with other populations. First, Suess, Wacker, Schwartz, Lustig, and Detrick (2016) modified the Iowa telehealth model to fit within the confines of a 90-min outpatient clinical model. Modifications included only one 1-h preparation meeting, one 1-h FA appointment, and three 15-min FCT appointments. Behavioral functions were identified for four of the five children, and reductions in challenging behavior were observed following the implementation of FCT. These results demonstrated that a briefer telehealth service model can be effective. However, because it was conducted as a research study, questions remained about the feasibility of this approach in a fee-for-service model. Second, Tsami, Lerman, and Toper-Korkmaz (2019) implemented the Iowa telehealth model internationally, demonstrating successful telehealth services across many more countries (see Schieltz, O’Brien, Tsami, Call, & Lerman, 2022). Within this population of international families, interpreters were used for some families, and the results showed that delivering the service through an interpreter did not degrade treatment effects. Taken together, these results continue to show the effectiveness of telehealth as a service delivery model for supporting caregivers in addressing their children’s challenging behaviors. To increase the likelihood that a telehealth service for challenging behavior will be successful and ethical, multiple decisions and considerations are warranted. In the following section, these considerations are outlined and discussed in relation to when a component is clinically indicated based on the research evidence, when it is contraindicated, and when it falls somewhere in between.



Telehealth-delivered family support

291

Critical practice considerations for developing and providing services via telehealth

Telehealth offers the behavior analyst, the client, and the client’s family many benefits that may not exist with in-vivo services. For example, a clinic-to-home telehealth model requires far less time and costs much less than in-vivo clinic services because it does not require the same travel and clinic space as an in-vivo model (Lindgren et al., 2016). However, there are also many limitations to using telehealth. For example, telehealth does not afford the behavior analyst the same environmental control that is present in a clinical setting, which may make it more difficult to evaluate the environmental events that contribute to challenging behavior and to control challenging behavior when needed during treatment. Thus, behavior analysts who are in a position to decide between telehealth and in-vivo services must engage in a cost–benefit analysis to determine which approach is most likely to result in the most ethical and effective care for their clients. Moreover, the benefits and limitations are likely to vary based on each prospective client’s situation. For example, consider a client who has excellent internet connectivity versus another client whose internet is spotty, resulting in poor audio and video quality. Conditions may also fluctuate over the course of service; for example, a client may develop new challenging behaviors that are too severe to serve remotely. There are many critical practice issues that behavior analysts should weigh prior to engaging in telehealth, and additional issues that should be addressed during the provision of telehealth service. Tables 1 and 2 offer lists of the preservice and clinical service considerations and possible mitigation strategies when faced with limitations to providing telehealth service.

Table 1  Critical preservice practice considerations and proposed mitigation strategies for developing behavior analytic services via telehealth. Each question is answered “Yes” or “No”; the mitigation strategies listed apply when the answer is “Yes.”

Client Suitability
- Is the behavior analyst working with an older or larger child that may pose safety risks?* If yes, consider additional adult support at the originating site.
- Are the client’s target behaviors high risk to health and safety, or difficult to block?* If yes, consider additional adult support at the originating site.
- Are the client’s target behaviors likely to be automatically maintained? If yes, consider in-vivo services to confirm function.
- Is the client unlikely to stay in the viewing area of the camera during sessions? If yes, consider adjusting the setting, such as using a smaller room or creating barriers to areas outside of viewing, or consider clinic-to-clinic telehealth or in-vivo services.

Caregiver Characteristics and Preferences
- Has the client’s caregiver explicitly expressed preference for in-vivo services or discomfort with telehealth services?* If yes, consider in-vivo services.
- Does the client’s caregiver have a disability, such as a hearing impairment or poor receptive language skills? If yes, consider in-vivo services.
- Does the client’s caregiver have little or no working knowledge of the technology to be used for telehealth? If yes, consider preservice technology training and a technology troubleshooting manual.

Technology
- Does the family have unstable or poor connectivity?* If yes, consider clinic-to-clinic telehealth or in-vivo services.
- Does the family lack the necessary hardware (e.g., tablet, smartphone, laptop, webcam and microphone) for telehealth?* If yes, consider a lending library, clinic-to-clinic telehealth, or in-vivo services.

Behavior Analyst Competence and Setting
- Does the behavior analyst have little or no working knowledge of the technology to be used for telehealth?* If yes, consider technology training or in-vivo services.
- Is the behavior analyst inexperienced in coaching others to conduct the assessments and treatments to be used via telehealth? If yes, consider in-vivo services.
- Does the behavior analyst lack a suitable space to conduct telehealth sessions (i.e., possible distractions or inability to maintain client privacy)? If yes, consider conducting sessions from a private clinic setting.

Service Delivery Models
- For a client who may need additional adult support or a more suitable space to conduct procedures, is there a regional clinic that may be available to partner with for clinic-to-clinic telehealth? If yes, consider a clinic-to-clinic telehealth model.
- Is the plan to offer appointments that last longer than 60 min? If yes, consider in-vivo services.

Legal and Professional Boundaries
- Does the behavior analyst’s state prohibit the provision of behavior analysis via telehealth?* If yes, conduct in-vivo services.
- Does the client live in a state in which the behavior analyst is not licensed?* If yes, refer to a provider within the client’s state of residence.
- Is telehealth excluded from the client’s insurance plan?* If yes, conduct in-vivo services.
- Does the behavior analyst lack a Business Associate Agreement (BAA) that ensures HIPAA-compliant security? If yes, conduct in-vivo services.

International and Cultural Considerations
- Does the family have unstable or poor connectivity?* If yes, conduct speed tests; consider other locations (i.e., schools, community centers) with more sufficient internet access; consider allocating or raising funds to create an equipment lending library or to upgrade internet services; or refer to a local provider.
- Does the family have limited access to electricity?* If yes, conduct services around predictable outages and determine back-up electricity options.
- Does the behavior analyst lack awareness and a foundational understanding of the family’s culture, beliefs, and lived experiences? If yes, conduct research on the family’s culture and engage in a discussion with the family to learn about their culture, beliefs, and experiences.
- Does the family have a preference for a culturally matched behavior analyst (i.e., gender, language)?* If yes, accommodate the preference, as available, and discuss options with the family when the preference cannot be accommodated.
- Is the family in need of or requesting an interpreter?* If yes, provide access to an interpreter; consider recruiting nonprofessional interpreters for non-fee-for-service arrangements, and provide access to certified interpreters for fee-for-service arrangements.

Note: Considerations with an * are those that may disqualify telehealth when the answer is “yes.”

Table 2  Critical clinical service practice considerations and proposed mitigation strategies for providing behavior analytic services via telehealth. Each question is answered “Yes” or “No”; the mitigation strategies listed apply when the answer is “Yes.”

Client Suitability
- Has the client demonstrated unsafe behaviors toward self or caregiver that cannot be adequately prevented or blocked? If yes, consider additional adult support at the originating site, protective gear for the caregiver, modifying assessment or treatment procedures, or in-vivo services.
- Has the client been difficult to maintain on camera? If yes, consider adjusting the setting, such as using a smaller room or creating barriers to areas outside of viewing, or consider clinic-to-clinic telehealth or in-vivo services.
- Is the client’s behavior reactive to the behavior analyst’s voice/video? If yes, conduct a series of free play sessions until the child is no longer reactive, or consider turning off the camera and using bug-in-the-ear technology.

Caregiver Characteristics and Preferences
- Has the client’s caregiver engaged in emotional responding that makes it difficult to coach? If yes, consider in-session time-outs (i.e., require the parent to step away from the session temporarily) or in-vivo services.
- Has the client’s caregiver expressed displeasure or discomfort with telehealth?* If yes, conduct in-vivo services.
- Has the client’s caregiver failed to follow your coaching directives? If yes, create procedural task analyses, conduct presession training, and utilize in-session time-outs.

Other Challenges During Telehealth
- Has the connectivity been unstable or audio-visual quality disrupted regularly?* If yes, consider an upgrade to internet services, minimizing other concurrent household internet usage, or in-vivo services.
- Is the plan to implement novel procedures (i.e., assessment or treatment approaches the behavior analyst has not conducted previously)? If yes, consider in-vivo services until the caregivers and behavior analyst are comfortable.
- Has there been disruption to sessions by other family members or pets? If yes, consider scheduling appointments when other family members are not present, and consider placing pets in a nonadjacent room.

International and Cultural Considerations
- Has the family recently experienced intense weather conditions, political instability, etc.?* If yes, resume services when conditions allow.
- Does the behavior analyst lack a working knowledge of words/phrases in the family’s native language? If yes, research and learn how to speak a few common words/phrases in the family’s native language.
- Does the behavior analyst lack a working knowledge of the family’s local current events? If yes, research and discuss current events in the family’s local community.
- Does the family have upcoming holiday or religious observances? If yes, pause services during the family’s holiday and religious observances.
- Is the family located in a time zone that is significantly different from the behavior analyst’s? If yes, consider conducting services outside typical business hours (e.g., evenings, weekends).

Note: Considerations with an * are those that may disqualify telehealth when the answer is “yes.”

Preservice considerations

Client suitability: Behavior analysts should evaluate the suitability of a potential client prior to initiating telehealth services

Numerous studies have demonstrated the effectiveness of FA and FCT procedures delivered via telehealth for young children with ASD; however, with only a few exceptions (i.e., Machalicek et al., 2016; Shawler, Clayborne, Nasca, & O’Connor, 2021; Tsami et al., 2019), studies using telehealth to conduct behavioral assessment and treatment have overwhelmingly focused on children under 12 years of age (Andersen et al., 2021; Davis et al., 2022; Edelstein et al., 2022; Gerow, Radhakrishnan, Davis, et al., 2021; Monlux et al., 2022; Schieltz & Wacker, 2020). Age is important because young children are naturally smaller, and consequently it may be easier for a caregiver to physically intervene (e.g., neutral blocking) to maintain safety in the case of self-injury or physical aggression. However, even young children may display frequent and/or intense challenging behavior that can pose safety risks when a caregiver is not able to safely intervene. Thus, without the option to provide physical support in person, behavior analysts should consider the age and size of the potential client, as well as the topographies of the behaviors targeted for assessment and treatment. Younger children with particularly frequent and/or intense self-injury or aggression, as well as older children, adolescents, and adults who are likely to be stronger and therefore more difficult to physically manage, may not be suitable clients for telehealth.

When a client’s target behaviors are particularly intense, frequent, or a safety risk, there are mitigation strategies that may still allow for telehealth services. When possible, including additional adult assistance at the location where the client is receiving services (called the originating site) may increase safety by providing additional support for physical intervention. Additionally, protective gear (e.g., Kevlar sleeves, chest protector, shin guards) for the caregiver and/or the client may make it safer to conduct assessments that evoke challenging behavior and treatments that sometimes lead to more intense episodes of challenging behavior. When safety mechanisms are not available, in-vivo services should be considered.

In addition to focusing largely on young children, studies using telehealth to assess and treat challenging behavior have primarily emphasized individuals with neurodevelopmental disabilities, and particularly ASD (Schieltz & Wacker, 2020). Behavior analytic studies on individuals without an intellectual or developmental disability (i.e., individuals considered “neurotypical”) are lacking.
There are no apparent reasons to believe that individuals without an intellectual or developmental disability would not be good candidates for telehealth, but behavior analysts should be aware that little research exists on whether specific diagnoses, or the absence of a diagnosis, make a client more amenable to telehealth. Many challenging behaviors have been successfully treated using telehealth, including self-injury, aggression, tantrums, and destruction (e.g., Benson et al., 2018; Frieder, Peterson, Woodward, Craine, & Garner, 2009; Hoffmann, Bogoev, & Sellers, 2019; Machalicek et al., 2009; Monlux et al., 2022). With few exceptions (e.g., Schieltz et al., 2018), these behaviors were maintained by social reinforcement. Prior to initiating telehealth services with a particular client, behavior analysts should consider the potential that the client’s target behavior is maintained by automatic reinforcement because it is possible that such a client would not be suitable for telehealth for
at least two reasons. First, the standard approach to determining whether a behavior is maintained by automatic reinforcement is to conduct an alone or ignore test condition whereby the client is left without access to items or attention. However, conducting this type of condition in the client’s home without the presence of someone to intervene as needed may be impractical and dangerous. Second, automatically maintained behaviors are generally much more difficult to treat (LeBlanc, Patel, & Carr, 2000; Rooker, Bonner, Dillon, & Zarcone, 2018), and it is unclear if the types of treatment approaches most often used (e.g., response-blocking) can be effectively implemented using telehealth. When it is unclear whether a target behavior is automatically maintained, one option for behavior analysts is to conduct the initial assessment or screening in-vivo and then proceed with treatment via telehealth after confirming a social function.

Caregiver characteristics and preferences: Behavior analysts should consider caregiver characteristics and preferences prior to initiating telehealth services

Studies evaluating the delivery of behavior analysis via telehealth have found high caregiver acceptability (Ferguson, Craig, & Dounavi, 2019); however, when given the option to transition from in-person to telehealth services, many caregivers decline to move to telehealth (Aranki, Wright, Pompa-Craven, & Lotfizadeh, 2022). Although more research is needed to determine why caregivers may prefer one service modality over another, it is clear that some may not be as comfortable with telehealth as with in-person services, and behavior analysts must not initiate telehealth services if a caregiver expresses discomfort with it. Moreover, caregiver preference for service delivery modality may change over time, and behavior analysts would be wise to frequently check in with caregivers regarding any concerns about or changes in preference for telehealth service.
In addition to preference for service modality, there are other caregiver characteristics that may make telehealth services more challenging than in-person services. For example, caregivers who have a hearing impairment, demonstrate poor receptive language skills, or learn better with visual stimuli may be better served by in-person services. Telehealth services rely heavily on vocal instructions, resulting in limited possibilities to visually model techniques (e.g., neutral blocking for head banging) and procedures (Larsen et al., 2022). Thus, for caregivers who are deaf or hard of hearing, as well as caregivers who learn better through visual strategies, the ability to follow behavior analyst coaching directives may be compromised, leading to poor
procedural integrity and/or creating safety risks. For these reasons, in-vivo services would be preferred. Most caregivers are likely to have at least some experience with the computer, smartphone, and tablet technology required for telehealth services, but even those who are considered “technology savvy” may find the hardware and software used for telehealth complicated. Caregivers with previous telehealth experience may be more comfortable and may have learned effective troubleshooting strategies. However, caregivers who require frequent technology support or who are easily frustrated with technology troubleshooting may necessitate preservice training with the hardware and/or software to be used and may benefit from a troubleshooting manual with tips and strategies to address common technology challenges.

Technology: Behavior analysts should consider whether their clients have sufficient connectivity and hardware required for telehealth service

For some families, technology barriers continue to make obtaining telehealth services a challenge. Although the majority of people in the US have access to the equipment necessary for telehealth (e.g., laptop, smartphone, tablet; Pew Research Center, 2021a), there are still some families that do not, and slightly fewer than one fourth of all Americans still do not have high-speed broadband services (Pew Research Center, 2021b). Strategies to overcome these barriers, such as creating a lending library for the necessary equipment (e.g., Wacker et al., 2016), have been used, but there may still be cases where the quality of the equipment or connectivity is not sufficient to provide telehealth services. Even when a behavior analyst has the training and comfort to provide behavior analytic services via telehealth, there may be technological barriers that would preclude offering this type of service. Having access to telecommunications software, hardware, and good internet connectivity is not sufficient.
Technical issues, such as blurry or freezing video and audio, frequent software updates, and hardware failures, can be frustrating for the behavior analyst and the client, and the behavior analyst should have enough basic knowledge and technological experience to troubleshoot issues that arise (see Lee et al., 2015; Lerman et al., 2020). For behavior analysts practicing outside of a school setting, the Health Insurance Portability and Accountability Act (HIPAA) governs activities where personal health information (PHI) must be protected, including when using telehealth. HIPAA-compliant telehealth requires secure equipment, but also secure
teleconferencing software that is backed by a Business Associate Agreement or BAA, which ensures all PHI will be held private (Bassan, 2021). Without a BAA, a behavior analyst should only provide in-vivo services.

Behavior analyst competence and practice setting: Behavior analysts should consider whether their own training, comfort, and setting are sufficient prior to initiating telehealth services

To date, no research has evaluated the characteristics that make a behavior analyst competent to deliver telehealth services. However, like caregivers, there are reasons why some behavior analysts may be better suited for telehealth than others. A behavior analyst’s training and experience providing in-vivo services are vital to success when transitioning to telehealth services. Behavior analysts should establish proficiency not only in conducting the procedures they plan to implement via telehealth, but also in coaching others to implement those procedures. Behavior analysts who are used to conducting behavior assessment and treatment procedures themselves may find it challenging to serve as a “coach”. Coaching caregivers requires strong verbal skills and patience because the behavior analyst cannot physically intervene or provide visual models as easily as can be done in-vivo. Thus, behavior analysts who are uncomfortable either with the intended behavioral procedures or with their ability to coach caregivers effectively should first establish comfort with the procedures and coaching techniques in-vivo before transitioning to telehealth service. With an increasing number of behavior analysts providing telehealth training, research has begun to focus on coaching via telehealth (e.g., Larsen et al., 2022) and on establishing effective training for behavior analysts interested in conducting telehealth service (Neely, Tsami, Graber, & Lerman, 2022).
Although this research is still in its infancy, behavior analysts who plan to offer telehealth services may benefit from early findings and may identify areas where they need to develop or enhance their skills before attempting telehealth. One of the benefits of a telehealth delivery model is that it does not require the behavior analyst to provide the physical space to conduct services. Behavior analysts who lack clinic space, or who practice in a physical space with hours that are not conducive to a particular client (e.g., 9 am–5 pm business hours), may still be able to provide telehealth services. However, behavior analysts must still consider whether their site of practice (called the distant site) is suitable for telehealth. In addition to the technology requirements mentioned previously, behavior analysts practicing from home must ensure that their space has adequate lighting and acoustics and that
others living within the home are neither a distraction nor a threat to client privacy or confidentiality. When the practice setting is not distraction free or cannot guarantee client privacy and/or confidentiality, the behavior analyst is ethically obligated to refrain from providing telehealth services. Additionally, while there is little research on the “optimal” environmental conditions for telehealth, behavior analysts should be aware of details like lighting, noise, and the professional appearance of the practice setting, and how these details may impact the perception (positively or negatively) of the client and client’s family toward the behavior analyst (Duane et al., 2022).

Service delivery models: Behavior analysts should consider the telehealth model that best supports the client and the client’s family

Not all telehealth services utilize the same model. There are many different telehealth models that may be tailored to the specific needs of the client and client’s family. One important distinction across models is where the client receives services (called the originating site). Several models that differ in the originating site have been shown to be feasible and effective for behavioral treatments. In the earliest demonstration of telehealth for behavior analytic services, Barretto et al. (2006) were successful in providing telehealth to participants located in schools and social service agencies. Wacker et al. (2013a, 2013b) demonstrated success with a clinic-to-clinic model where regional health clinics served as the originating site, and Lindgren et al. (2016) demonstrated success with a clinic-to-home model where the participant’s home served as the originating site. In the former study, participants had the benefit of a safe clinic space with support staff who could assist with procedures as needed. In the latter study, participants had no travel requirements for services and were able to receive treatment in the comfort of their own home.
Each model offers specific benefits and limitations that may be more or less suited to target clients. One aspect of telehealth that has garnered less attention in the research literature is the timing and dosage of service. Many of the studies using telehealth to provide behavioral assessment and treatment have utilized a standard timing, usually once per week, and a standard dosage of 60-min sessions. Although this approach has proven successful, in-vivo clinic models offering behavior analytic services range from intensive all-day programs (e.g., Call, Parks, & Reavis, 2013) to brief single-visit services (e.g., Wacker, Schieltz, & Romani, 2015). There are obvious challenges to providing telehealth services in larger doses (e.g., all-day service) in terms of the technology (e.g., maintaining connectivity and keeping devices charged) and client
stamina to remain in front of a camera for long periods, but, as previously described, Suess et al. (2016) provided a successful demonstration of a truncated clinic-to-clinic telehealth model. In their study, telehealth services for five young children with disruptive behavior entailed a combination of a single 60-min assessment and three 15-min treatment visits. Demonstrations of telehealth services provided at other timings and dosages are lacking. While telehealth is often considered a more flexible approach to service provision, behavior analysts who intend to provide services using alternative timing and dosage should consider both the feasibility and acceptability of those options.

Legal and professional boundaries: Behavior analysts should consider the feasibility of telehealth services for each client based on the rules and regulations outlined by state and federal laws, professional regulation boards, and payer policies prior to initiating telehealth services

Whether providing in-vivo or telehealth services, the practice of behavior analysis is governed by many rules, regulations, and standards. Given the relative novelty of telehealth practice and the recent influence of the COVID-19 pandemic, the rules, regulations, and standards of telehealth practice have evolved substantially over the past decade and are likely to continue to evolve with the support of more research.
Professional service and credentialing organizations such as the Association for Behavior Analysis International (ABAI), American Psychological Association (APA), Association of Professional Behavior Analysts (APBA), and Behavior Analyst Certification Board (BACB) have generally supported the use of telehealth for the provision of behavior analytic services, and standards of practice have been developed by these and other organizations (e.g., APBA, BACB, Behavioral Health Center of Excellence [BHCOE], Council of Autism Service Providers [CASP]) to ensure quality services are delivered (e.g., Council of Autism Service Providers, 2021). Behavior analysts considering telehealth services should be familiar with the standards set by professional organizations and with the unique ethical challenges associated with telehealth services (e.g., Pollard, Karimi, & Ficcaglia, 2017; Romani & Schieltz, 2017). Prior to the COVID-19 pandemic, laws governing telehealth practice varied greatly across states, with some states allowing great flexibility with regard to the originating site (e.g., home, school, clinic), others limiting the originating site to the clinic setting (i.e., clinic-to-clinic telehealth), and many states fully restricting telehealth services (Chu et al., 2021). As a result of the pandemic, state and federal health service provider rules and
regulations were relaxed, and all states allowed for the temporary provision of behavior analytic services via telehealth. Additionally, many states, as well as the Centers for Medicare & Medicaid Services (CMS), relaxed interstate practice rules for telehealth, which allowed behavior analysts to serve clients across state lines regardless of the provider’s state of licensure (Telehealth.HHS.gov, 2022). Owing to the large demand for telehealth services, particularly in large rural states with many underserved geographic areas, telehealth continues to be allowable in most US states. However, behavior analysts must stay informed of current rules and regulations for their respective states, as there is no guarantee that telehealth will remain a viable service provision option (US Department of Health and Human Services, 2022). Notably, some of the allowances introduced at the outset of the pandemic, such as interstate practice, have already been rescinded in some states. In addition to lawful practice under state and federal guidelines, behavior analysts may be limited in their ability to practice telehealth by public and private insurance payers. At the outset of the COVID-19 pandemic, many payers temporarily changed their policies to allow reimbursement for behavior analytic services via telehealth to mitigate concerns about in-person services (ABA Billing Codes Commission, 2020). While this allowance is likely to continue for many payers, in states without mandates requiring coverage for telehealth service, behavior analysts should carefully examine their clients’ insurance policies to make sure telehealth is a covered service. Much like the rules that govern professional practice within each state, there may be different payer policies for codes often billed by behavior analysts and those billed by licensed psychologists. Again, it is incumbent upon the provider to examine payer policies that may outline those differences.

Clinical service considerations

Even after a behavior analyst has initiated telehealth-based services with a client and the client’s family, some of the factors that led to the decision to engage in telehealth may change and require reconsideration. All of the characteristics that made a client and the client’s family good candidates for telehealth require continued monitoring long after the initiation of telehealth services. Moreover, sometimes revelations about a client, the client’s caregiver, or the procedures a behavior analyst plans to implement may necessitate a change from telehealth to in-vivo services. What follows are some of the areas that should be monitored and concerns that may require a change in service modality (e.g., to in-vivo services).
Client suitability: Behavior analysts should monitor for client behaviors that may diminish their appropriateness for telehealth services

It is not uncommon for a child’s challenging behavior to increase in frequency or intensity during behavioral assessments and treatments. It is also not uncommon for new topographies of challenging behavior to emerge or for maintaining functions to shift unexpectedly. For example, during the early stages of behavioral treatment, particularly when using extinction procedures, extinction bursts and response variation may occur. Additionally, behavioral function may shift from social contingencies to automatic contingencies (see Schieltz et al., 2018 for an example). When these changes in frequency, intensity, or topography place the client or the client’s caregiver at increased risk of injury, behavior analysts should consider whether telehealth remains a safe therapy option. Strategies to mitigate risk, such as adding adult support, using protective equipment, or modifying treatment procedures (e.g., eliminating extinction), may allow for the continuation of telehealth services. However, a behavior analyst should also consider changing to an in-vivo service model, even if only temporarily, to ensure safety. Even when safeguards are in place to maintain physical safety, other child behaviors may emerge that could make telehealth a poor option for service. Some children are exceptionally active and may not stay within the camera’s view. When this occurs often, the behavior analyst is not able to ensure the safety of the client or evaluate the client’s response to the assessment or treatment procedures being implemented. Modifications to the client’s therapy space, such as using a smaller room or creating natural barriers (e.g., placement of a couch) to areas outside of the viewing area, may remedy this situation.
Behavior analysts may also find that a child is reactive to their voice or video, resulting in a change in responding that interferes with the assessment or treatment (e.g., distraction from a caregiver’s directives). Simple solutions include running a series of free play sessions until the child gets used to the behavior analyst’s audio and video, turning off the camera, and/or using bug-in-the-ear technology, which allows the caregiver to hear the behavior analyst through a headphone via Bluetooth technology (see Lerman et al., 2020 for other potential solutions). Should modifications to the environment or technology fail to reduce interfering behaviors, in-vivo services may be warranted.



Telehealth-delivered family support

305

Caregiver characteristics and preferences: Behavior analysts should monitor caregiver preference for telehealth and a caregiver’s response to procedures and coaching
In a telehealth model, caregivers serve as their child’s therapist and are expected to implement assessment and treatment procedures with only the benefit of behavior analyst coaching. This arrangement can be difficult for some caregivers, resulting in poor assessment or treatment fidelity (see Schieltz et al., 2018 for an example). Additionally, dealing with a child’s challenging behavior may be frustrating or upsetting for a caregiver, who may respond emotionally by yelling or crying. Unlike with in-vivo services, when a caregiver does not follow coaching directives or becomes frustrated and engages in emotional responding, the behavior analyst is not able to step in for the caregiver and maintain assessment or treatment integrity. Consistently poor compliance with a behavior analyst’s directives or frequent emotional responding may preclude some clients from participating in a telehealth service model. A behavior analyst may be able to overcome these challenges in several ways. For the caregiver who frequently displays emotional responding, a behavior analyst should introduce “time-out” procedures whereby the parent or behavior analyst can signal that a break from the session is needed to cool down. During the time-out, the behavior analyst may simply give the caregiver a break from therapy or may engage the caregiver in a supportive role with encouragement and guidance. Time-outs may also be helpful for the caregiver who fails to follow the behavior analyst’s coaching. Rather than serving as a calming period, this time-out would be used to clarify the reason(s) that the caregiver is not following directives and to address any questions or concerns the caregiver may have.
In addition to time-out procedures, a parent who is not following coaching advice may benefit from presession coaching on procedures and/or a task analysis that breaks down the assessment or treatment procedures into smaller and oftentimes more manageable steps. When time-out procedures, presession coaching, and task analyses are not effective, in-vivo services should be considered. As noted previously, attending to a caregiver’s preference for service modality is imperative. Although some caregivers may find telehealth appealing initially, experiencing telehealth services and some of their challenges may lead some caregivers to decide that in-vivo services are more suitable. Thus, behavior analysts should continually monitor the caregiver’s preference for and comfort with telehealth services and move to in-vivo services when telehealth has fallen out of favor.


Other challenges during telehealth: Behavior analysts should monitor for other challenges that may make telehealth services unsuitable
In addition to client and caregiver variables, there are many other challenges a behavior analyst may encounter that would make telehealth services untenable. Anyone who has used telecommunications software has undoubtedly encountered video or audio freezing due to poor or unstable connectivity. In some cases, multiple appointments with clear audio and video may be followed by appointments where poor connectivity precludes conducting any sessions. There are many reasons for poor, unstable, and variable connectivity, including slow network speeds, inclement weather, and excessive household internet usage. Minimizing concurrent internet usage during appointments (i.e., making sure others in the household are not online at the same time) and/or upgrading to faster network speeds may prevent ongoing internet disruptions; however, for some clients and their families, internet service options may be limited by geography, and the ability to minimize concurrent internet usage may be limited by other family members’ needs. In such cases, behavior analysts should consider moving to an in-vivo model or, when possible, a clinic-to-clinic model. Another common challenge with clinic-to-home telehealth is disruption caused by siblings, other caregivers, and family pets in the home. Especially during assessment procedures and early in treatment, it is important to minimize intrusions and extraneous noises that may influence the client’s behavior and/or make it difficult for the caregiver to receive coaching from the behavior analyst. Scheduling appointments at times when other family members are not present at home and placing pets outside or in nonadjacent rooms are mitigation strategies behavior analysts can use for these situations.
As noted previously, most research studies using telehealth for behavioral assessment and treatment have focused on FA and FCT procedures (Schieltz & Wacker, 2020). These are powerful assessment and treatment tools, but they are hardly the only tools used by behavior analysts. Behavior analysts who choose to use other assessments and treatments should be aware of the possible limitations of using approaches novel to telehealth. For example, treatment protocols using time-out from reinforcement (particularly exclusionary time-out) have not been demonstrated via telehealth and may be challenging to implement with a telehealth model. Although assessments and treatments that have not been evaluated in the research literature may still be safe and effective when conducted via telehealth, behavior analysts should consider implementing those procedures in-vivo until the caregiver is comfortable before returning to a telehealth model.




Using a hybrid service model
Choosing between telehealth and in-vivo services is not an either-or decision. Although there may be reasons why one option must be excluded, such as a family who cannot travel for in-vivo services or a family without internet service, in situations where both options are possible, a behavior analyst may consider a hybrid model that incorporates both in-vivo and telehealth services. A hybrid approach may be appealing for multiple reasons. When a behavior analyst is initially hesitant to utilize telehealth due to unfamiliarity with a client and the client’s family, beginning services in-vivo is an opportunity to “screen” the client and family for suitability with telehealth. Beginning with in-vivo services may also be helpful during the initial stages of therapy, when modeling and the opportunity to provide caregivers with physical support are especially important. Even when there is a successful demonstration of treatment in the clinic setting, following that demonstration with telehealth to the home setting provides the opportunity to practice and evaluate generalization in a more natural setting. One option to optimize a hybrid model is to conduct all assessment and treatment procedures via telehealth except during the initial stages of treatment, when novel procedures are being introduced, and when telehealth is no longer considered safe.

International and cultural considerations
Telehealth has reduced geographic barriers, resulting in the provision of behavior analytic services to individuals in communities around the globe (Schieltz et al., 2022; Tsami et al., 2019, 2022). Similar to the behavior analytic telehealth research conducted in the US, global demonstrations of telehealth have focused on an FA + FCT model, with behavior analysts located in the US and the families they work with located in places such as Central America, Europe, the Middle East, and Asia (Tsami et al., 2019, 2022). In these studies, culturally matched interpreters born and raised in the client’s country were used when the behavior analysts spoke a different language from the family. The results showed no differences in reduction of challenging behavior, caregiver procedural fidelity, or caregiver ratings of acceptability between cases with and without interpreters. These results are important because many regions of the world (e.g., the European Union) are increasingly utilizing cross-border healthcare services (Glass, Schlachta, Hawel, Elnahas, & Alkhamesi, 2022). Although the US has been slow to allow cross-border healthcare, successful demonstrations of international telehealth may make international service provision likely. Thus, behavior analysts interested in providing telehealth outside of the US borders should consider some of the differences in providing international services. Additionally, while one in every five US residents speaks a language other than English (American Community Survey, 2020), it is expected that one in five individuals will be “foreign-born” by 2060, with the diversity of children changing more rapidly than the diversity of adults (Colby & Ortman, 2015). Furthermore, because the majority of behavior analytic providers are Caucasian and female (Behavior Analyst Certification Board, 2022) and the likelihood of providing services in diverse settings is increasing, it is imperative that behavior analysts are adequately prepared to serve clients in culturally meaningful ways (Beaulieu, Addington, & Almeida, 2018; Fong, Catagnus, Brodhead, Quigley, & Field, 2016; Miller, Cruz, & Ala'i-Rosales, 2019), which aligns with the BACB ethical code (code 1.07, Behavior Analyst Certification Board, 2020). To engage in this practice, behavior analysts must practice with cultural humility and responsiveness (Vargas Londono, Lim, Barnett, Hampton, & Falcomata, 2022). Cultural humility includes continuous self-reflection to develop a clinician-client partnership that reduces power imbalances and demonstrates respect for another’s cultural background (Tervalon & Murray-Garcia, 1998), whereas cultural responsiveness uses “the cultural characteristics, experiences, and perspectives of ethnically diverse students as conduits for teaching them more effectively” (Gay, 2002, p. 106). Practicing with cultural humility and responsiveness requires that the role of culture be considered continuously during the provision of telehealth services to ensure socially meaningful practice (Bernal, Bellido, & Bonilla, 1995; Jimenez-Gomez & Beaulieu, 2022).
For example, Jimenez-Gomez and Beaulieu (2022) recommended that behavior analysts engage in continuous self-assessment, include assessments of social validity, become aware of clients’ histories and lived environments, plan and program for generalization, collaborate with caregivers during the provision of services, demonstrate cultural humility by creating conditions under which the caregiver has opportunities to establish power and exhibit respect for the family’s culture, ask open-ended questions, and engage with community members. Of these recommendations, creating conditions that respect the family’s culture is the most common adaptation utilized when behavior analysts provide services to clients in other countries (Sivaraman & Fahmie, 2020). Specifically, Sivaraman and Fahmie (2020) found that the most common cultural adaptations have included development and translation of training materials in the clients’ language, rapport building with the caregivers, and culturally matching the caregiver and therapist with regard to gender, ethnicity, place of birth, and language. In the following sections, we discuss these strategies and other considerations for practice when working with culturally diverse clients and families via telehealth.

Preservice considerations: Behavior analysts should become aware of the client and family’s access, cultural history, lived experiences, and preferences
When considering telehealth service provision across cultural and national boundaries, the behavior analyst must consider several variables that may impact access, as well as variables that may influence whether telehealth service provision is feasible. First, as of 2020, 40% of the world population does not use the internet (The World Bank, n.d.), which is a major barrier to providing telehealth services globally. Moreover, many of those who do have internet access are unlikely to have broadband access (i.e., download speeds of at least 25 Mbps and upload speeds of at least 3 Mbps; Federal Communications Commission, 2016), which provides the best opportunity for telehealth service. Therefore, the behavior analyst may need to evaluate the family’s internet speeds (e.g., https://www.speedtest.net/) to determine the feasibility of providing services via telehealth. Second, even if a family has access to the internet, in certain regions of the world, insufficient infrastructure and limited resources may result in frequent power outages. For example, in some areas, power outages may be a regular occurrence during specific times of the day or may occur unexpectedly for several consecutive days. To address this issue, some families may own or rent generators or have access to a rechargeable modem.
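The broadband criterion above lends itself to a simple pre-service screening check. The sketch below is a hypothetical helper, not part of any telehealth toolkit: the class and function names are illustrative, and only the numeric thresholds come from the FCC (2016) definition cited above.

```python
# Hypothetical pre-service screen: compare a family's measured connection
# (e.g., from a speedtest.net run) against the FCC (2016) broadband
# definition of at least 25 Mbps download and 3 Mbps upload.
from dataclasses import dataclass

FCC_MIN_DOWNLOAD_MBPS = 25.0
FCC_MIN_UPLOAD_MBPS = 3.0

@dataclass
class SpeedTestResult:
    download_mbps: float
    upload_mbps: float

def meets_broadband_definition(result: SpeedTestResult) -> bool:
    """Return True if measured speeds meet both FCC broadband thresholds."""
    return (result.download_mbps >= FCC_MIN_DOWNLOAD_MBPS
            and result.upload_mbps >= FCC_MIN_UPLOAD_MBPS)

# A connection with adequate download but slow upload fails the screen.
print(meets_broadband_definition(SpeedTestResult(30.0, 2.5)))  # False
print(meets_broadband_definition(SpeedTestResult(30.0, 5.0)))  # True
```

Note that both thresholds must be met; upload speed is easy to overlook but matters for the caregiver's outgoing video feed.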
When immediate alternatives for accessing the internet are not feasible, the behavior analyst may need to refrain from or postpone the provision of telehealth services to determine whether alternatives for accessing the internet can be obtained, such as securing funds to loan equipment or identifying alternative locations with internet access, such as a school or community center. If alternative options cannot be identified, the behavior analyst should attempt to connect the client’s family to other local providers. In addition to ensuring adequate and reliable internet and power capabilities, the behavior analyst must consider additional factors related to the family’s culture, lived experiences, and preferences. Specifically, the behavior analyst should develop an awareness of the family’s environment, history, beliefs, and some of the conditions under which the family is functioning, all of which may guide the development of goals during the assessment and treatment processes. Additionally, the behavior analyst should assess family preferences for practitioner matching along one or more cultural dimensions (e.g., gender, language, attire), as well as for the use of an interpreter when the languages spoken by the family and behavior analyst differ. To illustrate the importance of culturally matched practitioners, in some cultures a conversation between a male practitioner and a mother would be considered unacceptable and inconsistent with the values of their culture (Alnemary, Wallace, Symmon, & Barry, 2015). If these preferences can be accommodated (e.g., a female behavior analyst with similar competence is available to take a new client), the behavior analyst should consider providing access to a culturally matched behavior analyst. However, it may not always be feasible to provide a culturally matched behavior analyst, either because those available do not match the preferred cultural dimension or because an available behavior analyst who does match does not have the competence to provide the needed services. In such cases, discussions with the family may be required to determine how best to proceed. Relative to the use of interpreters, Tsami et al. (2019) evaluated the efficacy of noncertified interpreters for families who did not speak the same language as the behavior analyst. The interpreters were university students or community members who were born and raised in the same country as the family they served. They had little to no knowledge of how to conduct the FA and FCT procedures. Following a 1-h training prior to each procedural phase (i.e., FA, FCT), the interpreters were instructed to interpret the behavior analyst’s instructions word-for-word to the family. Interpreters were in one of three locations: the same location as the behavior analyst (the university clinic), the same location as the family (in a private learning center), or a separate location (a three-way teleconference).
Results showed that the effects on child challenging behavior, caregiver procedural fidelity, and caregiver acceptability ratings did not differ significantly from the results obtained for families who did not use an interpreter. It should be noted that although the results when using noncertified interpreters were positive, behavior analysts engaged in fee-for-service practice are bound by ethical and legal requirements to use only certified interpreters (Basu, Costa, & Jain, 2017; Behavior Analyst Certification Board, 2020; Dowdy, Obidimalor, Tincani, & Travers, 2021).

Clinical service considerations: Behavior analysts should stay abreast of relevant factors and practice cultural responsiveness
When clinical services are provided via telehealth across cultural and national boundaries, the behavior analyst should stay abreast of a number of factors that may impact the continuation of telehealth service delivery. For example, many countries are regularly vulnerable to inclement weather conditions (e.g., monsoon seasons, extreme heat and cold, volcanic activity), which may disrupt or delay service provision temporarily, for extended periods, or altogether. Similarly, political instability and government monitoring may impact the consistency of telehealth services. Maintaining an effective telehealth service across cultures also requires the behavior analyst to continually engage in cultural responsiveness (Fong et al., 2016). Behavior analysts should strive to demonstrate cultural awareness and humility, especially when providing cross-cultural telehealth (Beaulieu et al., 2018; Miller et al., 2019). This may be represented by learning a few words or phrases in the family’s native language (e.g., “hello,” “how are you today?”), researching current events and activities that are local to the family (e.g., political changes, weather conditions, cultural news, art or sporting events), and adjusting telehealth schedules based on family observances of holidays, religious practices, and time zones.
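As one small, concrete example of the time-zone adjustment mentioned above, a clinic-side appointment time can be converted to the family's local time with Python's standard zoneinfo module. The specific clinic and family locations below are hypothetical.

```python
# Hypothetical example: a 9:00 a.m. appointment at a clinic in the US
# Central time zone, converted to the local time of a family in Riyadh.
from datetime import datetime
from zoneinfo import ZoneInfo

clinic_time = datetime(2023, 3, 6, 9, 0, tzinfo=ZoneInfo("America/Chicago"))
family_time = clinic_time.astimezone(ZoneInfo("Asia/Riyadh"))

print(family_time.strftime("%Y-%m-%d %H:%M"))  # 2023-03-06 18:00
```

Using named time zones (rather than fixed UTC offsets) lets the conversion account for daylight saving transitions on the clinic's side automatically.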

Conclusion
Many behavior analytic approaches for treating challenging behavior have been identified as “evidence-based” (Hume et al., 2021; National Autism Center, 2015), which suggests that there is a preponderance of research to support their use to reduce challenging behavior. However, there is no guarantee that a treatment deemed “evidence-based” will be successful, and behavior analysts considering treatment options must consider external validity during treatment selection, that is, the degree to which the outcomes of research studies are generalizable and applicable to the target population and behavior. In general, when a behavior analyst employs a treatment in a context highly similar to the supporting research (e.g., similar population, setting, target behavior), there is greater confidence in a positive outcome than when the context deviates from the supporting research. Applying this reasoning to research and practice with telehealth reveals that a relatively narrow set of behavior analytic procedures, populations, and behaviors (i.e., treatment of disruptive behavior using FA + FCT with young children with ASD) has composed the majority of the telehealth research literature within behavior analysis to date (Schieltz & Wacker, 2020). Accordingly, behavior analysts planning to utilize telehealth for treating challenging behavior should be aware of the treatment contexts supported by the research literature to guide their treatment selection and increase their confidence in treatment success. Moreover, when considering the use of telehealth for a particular case, behavior analysts should assess the degree to which their case deviates from what has been documented in the research literature and consider whether the degree of deviation still allows for practically and ethically sound use of telehealth.

References
ABA Billing Codes Commission. (2020). Important COVID-19 update: Use of telehealth. July 29. https://www.ababillingcodes.com/resources/important-covid-19-update-use-of-telehealth-2/.
Alnemary, F. M., Wallace, M., Symmon, J. B. G., & Barry, L. M. (2015). Using international videoconferencing to provide staff training on functional behavior assessment. Behavioral Interventions, 30(1), 73–86. https://doi.org/10.1002/bin.1403.
American Community Survey. (2020). Latest ACS 5-year estimates data profiles/social characteristics. https://www.census.gov/acs/www/about/why-we-ask-each-question/language/.
American Well. (2019). Telehealth index: 2019 consumer survey. https://business.amwell.com/resources/telehealth-index-2019-consumer-survey/.
Andersen, A. S., Hansen, B. A., Hathaway, K. L., & Elson, L. A. (2021). A demonstration of caregiver-implemented functional analysis of inappropriate mealtime behavior via telehealth. Behavior Analysis in Practice, 14(4), 1067–1072. https://doi.org/10.1007/s40617-021-00615-2.
Araiba, S., & Čolić, M. (2022). Preliminary practice recommendations for telehealth direct applied behavior analysis services with children with autism. Journal of Behavioral Education. https://doi.org/10.1007/s10864-022-09473-6.
Aranki, J., Wright, P., Pompa-Craven, P., & Lotfizadeh, A. D. (2022). Acceptance of telehealth therapy to replace in-person therapy for autism treatment during COVID-19 pandemic: An assessment of patient variables. Telemedicine and e-Health. https://doi.org/10.1089/tmj.2021.0397.
Awasthi, S., Aravamudhan, S., Jagdish, A., Joshi, B., Mukherjee, P., Kalkivaya, R., et al. (2021). Transitioning ABA services from in clinic to telehealth: Case study of an Indian organization’s response to COVID-19 lockdown. Behavior Analysis in Practice, 14(4), 893–912. https://doi.org/10.1007/s40617-021-00600-9.
Barnett, M. L., Ray, K. N., Souza, J., & Mehrotra, A. (2018). Trends in telemedicine use in a large commercially insured population, 2005-2017. Journal of the American Medical Association, 320(20), 2147–2149. https://doi.org/10.1001/jama.2018.12354.
Barretto, A., Wacker, D. P., Harding, J., Lee, J., & Berg, W. K. (2006). Using telemedicine to conduct behavioral assessments. Journal of Applied Behavior Analysis, 39(3), 333–340. https://doi.org/10.1901/jaba.2006.173-04.
Bassan, S. (2021). Data privacy considerations for telehealth consumers amid COVID-19. Journal of Law and the Biosciences, 7(1), 1–12. https://doi.org/10.1093/jlb/lsaa075.
Basu, G., Costa, V. P., & Jain, P. (2017). Clinicians' obligations to use qualified medical interpreters when caring for patients with limited English proficiency. AMA Journal of Ethics, 19(3), 245–252. https://doi.org/10.1001/journalofethics.2017.19.3.ecas2-1703.
Baumes, A., Čolić, M., & Araiba, S. (2020). Comparison of telehealth-related ethics and guidelines and a checklist for ethical decision making in the midst of the COVID-19 pandemic. Behavior Analysis in Practice, 13(4), 736–747. https://doi.org/10.1007/s40617-020-00475-2.
Beaulieu, L., Addington, J., & Almeida, D. (2018). Behavior analysts' training and practices regarding cultural diversity: The case for culturally competent care. Behavior Analysis in Practice, 12(3), 557–575. https://doi.org/10.1007/s40617-018-00313-6.




Behavior Analyst Certification Board. (2020). Ethics code for behavior analysts. https://bacb.com/wp-content/ethics-code-for-behavior-analysts/.
Behavior Analyst Certification Board. (2022). BACB certificant data. January. https://www.bacb.com/bacb-certificant-data/.
Benson, S. S., Dimian, A. F., Elmquist, M., Simacek, J., McComas, J. J., & Symons, F. J. (2018). Coaching parents to assess and treat self-injurious behavior via telehealth. Journal of Intellectual Disability Research, 62(12), 1114–1123. https://doi.org/10.1111/jir.12456.
Bergmann, S., Toussaint, K. A., Niland, H., Sansing, E. M., Armshaw, G., & Baltazar, M. (2021). Adapting direct services for telehealth: A practical tutorial. Behavior Analysis in Practice, 14(4), 1010–1046. https://doi.org/10.1007/s40617-020-00529-5.
Bernal, G., Bellido, C., & Bonilla, J. (1995). Ecological validity and cultural sensitivity for outcome research: Issues for the cultural adaptation and development of psychosocial treatments with Hispanics. Journal of Abnormal Child Psychology, 23(1), 67–82. https://doi.org/10.1007/BF01447045.
Berwick, D. M., Nolan, T. W., & Whittington, J. (2008). The triple aim: Care, health, and cost. Health Affairs, 27(3), 759–769. https://doi.org/10.1377/hlthaff.27.3.759.
Britwum, K., Catrone, R., Smith, G. D., & Koch, D. S. (2020). A university-based social services parent-training model: A telehealth adaptation during the COVID-19 pandemic. Behavior Analysis in Practice, 13(3), 532–542. https://doi.org/10.1007/s40617-020-00450-x.
Call, N. A., Parks, N. A., & Reavis, A. R. (2013). Treating severe problem behavior within intensive day-treatment programs. In D. Reed, F. DiGennaro Reed, & J. Luiselli (Eds.), Handbook of crisis intervention and developmental disabilities. Issues in clinical child psychology. New York, NY: Springer. https://doi.org/10.1007/978-1-4614-6531-7_21.
Carpentier, W. R., Charles, J. B., Shelhamer, M., Hackler, A. S., Johnson, T. L., Domingo, C. M. M., et al. (2018). Biomedical findings from NASA’s project mercury: A case series. npj Microgravity, 4(6), 1–6. https://doi.org/10.1038/s41526-018-0040-5.
Carr, E. G., & Durand, V. M. (1985). Reducing behavior problems through functional communication training. Journal of Applied Behavior Analysis, 18(2), 111–126. https://doi.org/10.1901/jaba.1985.18-111.
Centers for Medicare & Medicaid Services. (2022). CY2022 telehealth update medicare physician fee schedule. https://www.cms.gov/files/document/mm12549-cy2022-telehealthupdate-medicare-physician-fee-schedule.pdf.
Chu, R. C., Peters, C., De Lew, N., & Sommers, B. D. (2021). State medicaid telehealth policies before and during COVID-19 public health emergency (issue brief no. HP-2021-17). Office of the Assistant Secretary for Planning and Evaluation, US Department of Health and Human Services. https://www.aspe.hhs.gov/sites/default/files/2021-07/medicaid-telehealth-brief.pdf.
Cihon, J. H., Ferguson, J. L., Lee, M., Leaf, J. B., Leaf, R., & McEachin, J. (2022). Evaluating the cool versus not cool procedure via telehealth. Behavior Analysis in Practice, 15(1), 260–268. https://doi.org/10.1007/s40617-021-00553-z.
Colby, S. L., & Ortman, J. M. (2015). Projections of the size and composition of the US population: 2014 to 2060. Current Population Reports, P25–1143. Washington, DC: US Census Bureau. https://www.census.gov/content/dam/Census/library/publications/2015/demo/p25-1143.pdf.
Colombo, R. A., Wallace, M., & Taylor, R. (2020). An essential service decision model for ABA providers during crisis. Behavior Analysis in Practice, 13(2), 306–311. https://doi.org/10.1007/s40617-020-00432-z.
Coon, J. C., Bush, H., & Rapp, J. T. (2022). Eight months of telehealth for a state-funded project in foster care and related services: Progress made and lessons learned. Behavior Analysis in Practice. https://doi.org/10.1007/s40617-022-00682-z.
Council of Autism Service Providers. (2021). Practice parameters for telehealth-implementation of applied behavior analysis (2nd ed.). Wakefield, MA: Author. https://casproviders1.wpengine.com/wp-content/uploads/2021/12/Final-Copy-Practice-Parameters-TelehealthABA-AMA-References-12.2.2199.pdf.


Crockett, J. L., Becraft, J. L., Phillips, S. T., Wakeman, M., & Cataldo, M. F. (2020). Rapid conversion from clinic to telehealth behavioral services during the COVID-19 pandemic. Behavior Analysis in Practice, 13(4), 725–735. https://doi.org/10.1007/s40617-020-00499-8.
Davis, T. N., Gerow, S., Wicker, M., Cosottile, D., Exline, E., Swensson, R., et al. (2022). Utilizing telehealth to coach parents to implement trial-based functional analysis and treatment. Journal of Behavioral Education. https://doi.org/10.1007/s10864-022-09468-3.
Dorsey, E. R., & Topol, E. J. (2016). State of telehealth. The New England Journal of Medicine, 375(2), 154–161. https://doi.org/10.1056/NEJMra1601705.
Dowdy, A., Obidimalor, K. C., Tincani, M., & Travers, J. C. (2021). Delivering culturally sound and high-quality behavior analytic services when working with an interpreter. Behavior Analysis: Research and Practice, 21(1), 51–64. https://doi.org/10.1037/bar0000206.
Drew, C. M., Machalicek, W., Crowe, B., Glugatch, L., Wei, Q., & Erturk, B. (2022). Parent-implemented behavior interventions via telehealth for older children and adolescents. Journal of Behavioral Education. https://doi.org/10.1007/s10864-021-09464-z.
Duane, J. N., Blanch-Hartigan, D., Sanders, J. J., Caponigro, E., Robicheaux, E., Bernard, B., et al. (2022). Environmental considerations for effective telehealth encounters: A narrative review and implications for best practice. Telemedicine Journal and E-Health, 28(3), 309–316. https://doi.org/10.1089/tmj.2021.0074.
Edelstein, M. L., Becraft, J. L., Gould, K., & Sullivan, A. (2022). Evaluation of a delay and denial tolerance program to increase appropriate waiting trained via telehealth. Behavioral Interventions, 37(2), 383–396. https://doi.org/10.1002/bin.1855.
Estabillo, J. A., Moody, C. T., Poulhazan, S. J., Adery, L. H., Denluck, E. M., & Laugeson, E. A. (2022). Efficacy of PEERS® for adolescents via telehealth delivery. Journal of Autism and Developmental Disorders. https://doi.org/10.1007/s10803-022-05580-5.
Federal Communications Commission. (2016). 2016 broadband progress report. January. https://docs.fcc.gov/public/attachments/FCC-16-6A1.pdf.
Ferguson, J., Craig, E. A., & Dounavi, K. (2019). Telehealth as a model for providing behaviour analytic interventions to individuals with autism spectrum disorder: A systematic review. Journal of Autism and Developmental Disorders, 49(2), 582–616. https://doi.org/10.1007/s10803-018-3724-5.
Ferguson, J., Dounavi, K., & Craig, E. A. (2022). The impact of a telehealth platform on ABA-based parent training targeting social communication in children with autism spectrum disorder. Journal of Developmental and Physical Disabilities. https://doi.org/10.1007/s10882-022-09839-8.
Ferguson, J. L., Majeski, M. J., McEachin, J., Leaf, R., Cihon, J. H., & Leaf, J. B. (2020). Evaluating discrete trial teaching with instructive feedback delivered in a dyad arrangement via telehealth. Journal of Applied Behavior Analysis, 53(4), 1876–1888. https://doi.org/10.1002/jaba.773.
Fong, E. H., Catagnus, R. M., Brodhead, M. T., Quigley, S., & Field, S. (2016). Developing the cultural awareness skills of behavior analysts. Behavior Analysis in Practice, 9(1), 84–94. https://doi.org/10.1007/s40617-016-0111-6.
Frieder, J. E., Peterson, S. M., Woodward, J., Craine, J., & Garner, M. (2009). Teleconsultation in school settings: Linking classroom teachers and behavior analysts through web-based technology. Behavior Analysis in Practice, 2(2), 32–39. https://doi.org/10.1007/BF03391746.
Gay, G. (2002). Preparing for culturally responsive teaching. Journal of Teacher Education, 53(2), 106–116. https://doi.org/10.1177/0022487102053002003.
Gernsback, H. (1925). The radio teledactyl. Science and Invention, 12(10), 978, 1036.
Gerow, S., Radhakrishnan, S., Akers, J. S., McGinnis, K., & Swensson, R. (2021). Telehealth parent coaching to improve daily living skills for children with ASD. Journal of Applied Behavior Analysis, 54(2), 566–581. https://doi.org/10.1002/jaba.813.



Telehealth-delivered family support


Gerow, S., Radhakrishnan, S., Davis, T. N., Zambrano, J., Avery, S., Cosottile, D. W., et al. (2021). Parent-implemented brief functional analysis and treatment with coaching via telehealth. Journal of Applied Behavior Analysis, 54(1), 54–69. https://doi.org/10.1002/jaba.801.
Gingles, D. (2022). Center the margin: Equity-based assessment and response strategies to reach underserved communities using a telehealth service delivery model. Behavior Analysis in Practice. https://doi.org/10.1007/s40617-022-00685-w.
Glass, L. T., Schlachta, C. M., Hawel, J. D., Elnahas, A. I., & Alkhamesi, N. A. (2022). Cross-border healthcare: A review and applicability to North America during COVID-19. Health Policy OPEN, 3, 100064. https://doi.org/10.1016/j.hpopen.2021.100064.
Hoffmann, A. N., Bogoev, B. K., & Sellers, T. P. (2019). Using telehealth and expert coaching to support early childhood special education parent-implemented assessment and intervention procedures. Rural Special Education Quarterly, 38(2), 95–106. https://doi.org/10.1177/8756870519844162.
Hume, K., Steinbrenner, J. R., Odom, S. L., Morin, K. L., Nowell, S. W., Tomaszewski, B., et al. (2021). Evidence-based practices for children, youth, and young adults with autism: Third generation review. Journal of Autism and Developmental Disorders, 51(11), 4013–4032. https://doi.org/10.1007/s10803-020-04844-2.
Iwata, B. A., Dorsey, M. F., Slifer, K. J., Bauman, K. E., & Richman, G. S. (1994). Toward a functional analysis of self-injury. Journal of Applied Behavior Analysis, 27(2), 197–209. https://doi.org/10.1901/jaba.1994.27-197.
Jimenez-Gomez, C., & Beaulieu, L. (2022). Cultural responsiveness in applied behavior analysis: Research and practice. Journal of Applied Behavior Analysis, 55(3), 650–673. https://doi.org/10.1002/jaba.920.
Larsen, A., Schieltz, K. M., Barrett, A., & O’Brien, M. J. (2022). A retrospective analysis of therapists’ coaching behavior when directing parents to conduct behavioral assessments and treatments via telehealth. Behavior Modification. https://doi.org/10.1177/01454455221106127.
LeBlanc, L. A., Patel, M. R., & Carr, J. E. (2000). Recent advances in the assessment of aberrant behavior maintained by automatic reinforcement in individuals with developmental disabilities. Journal of Behavior Therapy and Experimental Psychiatry, 31(2), 137–154. https://doi.org/10.1016/s0005-7916(00)00017-3.
Lee, J. F., Schieltz, K. M., Suess, A. N., Wacker, D. P., Romani, P. W., Lindgren, S. D., et al. (2015). Guidelines for developing telehealth services and troubleshooting problems with telehealth technology when coaching parents to conduct functional analyses and functional communication training in their homes. Behavior Analysis in Practice, 8(2), 190–200. https://doi.org/10.1007/s40617-014-0031-2.
Lerman, D. C., O’Brien, M. J., Neely, L., Call, N. A., Tsami, L., Schieltz, K. M., et al. (2020). Remote coaching of caregivers via telehealth: Challenges and potential solutions. Journal of Behavioral Education, 29(2), 195–221. https://doi.org/10.1007/s10864-020-09378-2.
Lindgren, S., Wacker, D., Schieltz, K., Suess, A., Pelzel, K., Kopelman, T., et al. (2020). A randomized controlled trial of functional communication training via telehealth for young children with autism spectrum disorder. Journal of Autism and Developmental Disorders, 50(12), 4449–4462. https://doi.org/10.1007/s10803-020-04451-1.
Lindgren, S., Wacker, D., Suess, A., Schieltz, K., Pelzel, K., Kopelman, T., et al. (2016). Telehealth expands access and reduces costs for treating challenging behavior in young children with autism spectrum disorders using applied behavior analysis. Pediatrics, 137(S2), S167–S175. https://doi.org/10.1542/peds.2015-28510.
Machalicek, W., Lequia, J., Pinkelman, S., Knowles, C., Raulston, T., Davis, T., et al. (2016). Behavioral telehealth consultation with families of children with autism spectrum disorder. Behavioral Interventions, 31(3), 223–250. https://doi.org/10.1002/bin.1450.


Applied behavior analysis advanced guidebook

Machalicek, W., O’Reilly, M., Chan, J. M., Lang, R., Rispoli, M., Davis, T., et al. (2009). Using videoconferencing to conduct functional analysis of challenging behavior and develop classroom behavioral support plans for students with autism. Education and Training in Developmental Disabilities, 44(2), 207–217. http://www.jstor.org/stable/24233495.
Miller, K. L., Cruz, A. R., & Ala'i-Rosales, S. (2019). “Inherent tensions and possibilities: Behavior analysis and cultural responsiveness”: Correction. Behavior and Social Issues, 28(1), 203. https://doi.org/10.1007/s42822-019-00013-y.
Monlux, K. D., Pollard, J. S., Bujanda Rodriguez, A. Y., & Hall, S. S. (2022). Conducting in-home functional analyses of aggression and self-injury exhibited by boys with fragile X syndrome. Journal of Developmental & Behavioral Pediatrics, 43(4), e237–e245. https://doi.org/10.1097/DBP.0000000000001019.
Mootz, C. A., Lemelman, A., Giordano, J., Winter, J., & Beaumont, R. (2022). Brief report: Feasibility of delivering the secret agent society group social skills program via telehealth during COVID-19: A pilot exploration. Journal of Autism and Developmental Disorders. https://doi.org/10.1007/s10803-022-05591-2.
National Autism Center. (2015). Findings and conclusions: National standards project, phase 2. Randolph, MA: Author.
Native Voices. (n.d.). 1922: Radio connects remote Alaska villages to medical device. https://www.nlm.nih.gov/nativevoices/timeline/429.html.
Neely, L., Tsami, L., Graber, J., & Lerman, D. C. (2022). Towards the development of a curriculum to train behavior analysts to provide services via telehealth. Journal of Applied Behavior Analysis, 55(2), 395–411. https://doi.org/10.1002/jaba.904.
Nohelty, K., Hirschfeld, L., & Miyake, C. J. (2021). A measure for supporting implementation of telehealth direct therapy with treatment integrity. Behavior Analysis in Practice, 14(2), 422–433. https://doi.org/10.1007/s40617-020-00543-7.
O’Brien, M. J., Schieltz, K. M., Berg, W. K., McComas, J. J., & Wacker, D. P. (2021). Delivering interventions via telehealth: Functional communication training with a child with autism as a case example. Research and Practice for Persons with Severe Disabilities, 46(1), 53–60. https://doi.org/10.1177/1540796920980452.
Pellegrino, A. J., & DiGennaro Reed, F. D. (2020). Using telehealth to teach valued skills to adults with intellectual and developmental disabilities. Journal of Applied Behavior Analysis, 53(3), 1276–1289. https://doi.org/10.1002/jaba.734.
Peterson, K. M., Ibañez, V. F., Volkert, V. M., Zeleny, J. R., Engler, C. W., & Piazza, C. C. (2021). Using telehealth to provide outpatient follow-up to children with avoidant/restrictive food intake disorder. Journal of Applied Behavior Analysis, 54(1), 6–24. https://doi.org/10.1002/jaba.794.
Pew Research Center. (2021a). Mobile fact sheet. https://www.pewresearch.org/internet/fact-sheet/mobile/.
Pew Research Center. (2021b). Internet/broadband fact sheet. https://www.pewresearch.org/internet/fact-sheet/internet-broadband/.
Pollard, J. S., Karimi, K. A., & Ficcaglia, M. B. (2017). Ethical considerations in the design and implementation of a telehealth service delivery model. Behavior Analysis: Research and Practice, 17(4), 298–311. https://doi.org/10.1037/bar0000053.
Pollard, J. S., LeBlanc, L. A., Griffin, C. A., & Baker, J. M. (2021). The effects of transition to technician-delivered telehealth ABA treatment during the COVID-19 crisis: A preliminary analysis. Journal of Applied Behavior Analysis, 54(1), 87–102. https://doi.org/10.1002/jaba.803.
Practice by Telephone. (1879). The Lancet, 114(2935), 819. https://doi.org/10.1016/S0140-6736(02)47536-8.
Rodriguez, K. A. (2020). Maintaining treatment integrity in the face of crisis: A treatment selection model for transitioning direct ABA services to telehealth. Behavior Analysis in Practice, 13(2), 291–298. https://doi.org/10.1007/s40617-020-00429-8.




Romani, P. W., & Schieltz, K. M. (2017). Ethical considerations when delivering behavior analytic services for problem behavior via telehealth. Behavior Analysis: Research and Practice, 17(4), 312–324. https://doi.org/10.1037/bar0000074.
Rooker, G. W., Bonner, A. C., Dillon, C. M., & Zarcone, J. R. (2018). Behavioral treatment of automatically reinforced SIB: 1982–2015. Journal of Applied Behavior Analysis, 51, 974–997. https://doi.org/10.1002/jaba.492.
Schieltz, K. M., O’Brien, M. J., Tsami, L., Call, N. A., & Lerman, D. C. (2022). Behavioral assessment and treatment via telehealth for children with autism: From local to global clinical applications. International Journal of Environmental Research and Public Health, 19, 2190. https://doi.org/10.3390/ijerph19042190.
Schieltz, K. M., Romani, P. W., Wacker, D. P., Suess, A. N., Huang, P., Berg, W. K., et al. (2018). Single-case analysis to determine reasons for failure of behavioral treatment via telehealth. Remedial and Special Education, 39(2), 95–105. https://doi.org/10.1177/0741932517743791.
Schieltz, K. M., & Wacker, D. P. (2020). Functional assessment and function-based treatment delivered via telehealth: A brief summary. Journal of Applied Behavior Analysis, 53(3), 1242–1258. https://doi.org/10.1002/jaba.742.
Shawler, L. A., Clayborne, J. C., Nasca, B., & O’Connor, J. T. (2021). An intensive telehealth assessment and treatment model for an adult with developmental disabilities. Research in Developmental Disabilities, 111, 103876. https://doi.org/10.1016/j.ridd.2021.103876.
Sivaraman, M., & Fahmie, T. A. (2020). A systematic review of cultural adaptations in the global application of ABA-based telehealth services. Journal of Applied Behavior Analysis, 53(4), 1838–1855. https://doi.org/10.1002/jaba.763.
Suess, A., Wacker, D., Schwartz, J. E., Lustig, N., & Detrick, J. (2016). Preliminary evidence in the use of telehealth in an outpatient behavior clinic. Journal of Applied Behavior Analysis, 49(3), 686–692. https://doi.org/10.1002/jaba.305.
Tang, J. S., Falkmer, M., Chen, N., Bölte, S., & Girdler, S. (2021). Development and feasibility of MindChip™: A social emotional telehealth intervention for autistic adults. Journal of Autism and Developmental Disorders, 51(4), 1107–1130. https://doi.org/10.1007/s10803-020-04592-3.
Telehealth.HHS.gov. (2022). Telehealth licensing requirements and interstate compacts. Retrieved from https://telehealth.hhs.gov/providers/policy-changes-during-the-covid-19-public-health-emergency/telehealth-licensing-requirements-and-interstate-compacts/. (Accessed 11 July 2022).
Tervalon, M., & Murray-Garcia, J. (1998). Cultural humility versus cultural competence: A critical distinction in defining physician training outcomes in multicultural education. Journal of Health Care for the Poor and Underserved, 9(2), 117–125. https://doi.org/10.1353/hpu.2010.0233.
The World Bank. (n.d.). Individuals using the internet (% of population). https://data.worldbank.org/indicator/IT.NET.USER.ZS?locations=TG-1W&most_recent_value_desc=true.
Tsami, L., Lerman, D., & Toper-Korkmaz, O. (2019). Effectiveness and acceptability of parent training via telehealth among families around the world. Journal of Applied Behavior Analysis, 52(4), 1113–1129. https://doi.org/10.1002/jaba.645.
Tsami, L., Nguyen, J., Alphonso, N., Lerman, D., Matteucci, M., & Chen, N. (2022). Outcomes and acceptability of telehealth-based coaching for caregivers in Asian countries. Behavior Modification, 1454455221113560. https://doi.org/10.1177/01454455221113560.
US Department of Health and Human Services. (2020). Notification of enforcement discretion for telehealth remote communications during COVID-19 nationwide public health emergency. https://public3.pagefreezer.com/content/HHS.gov/31-12-2020T08:51/https://www.hhs.gov/about/news/2020/03/17/ocr-announces-notification-of-enforcement-discretion-for-telehealth-remote-communications-during-the-covid-19.html.


US Department of Health and Human Services. (2022, June). Telehealth policy changes after the COVID-19 public health emergency. https://telehealth.hhs.gov/providers/policy-changes-during-the-covid-19-public-health-emergency/policy-changes-after-the-covid-19-public-health-emergency/#:∼:text=The%20U.S.%20Department%20of%20Health,COVID%2D19%20public%20health%20emergency.
Vargas Londono, F., Lim, N., Barnett, M. R., Hampton, L. H., & Falcomata, T. S. (2022). Training culturally diverse caregivers to decrease their child’s challenging behaviors: A systematic review and meta-analysis. Journal of Autism and Developmental Disorders. https://doi.org/10.1007/s10803-022-05564-5.
Wacker, D. P., Berg, W. K., Harding, J. W., Derby, K. M., Asmus, J. M., & Healy, A. (1998). Evaluation and long-term treatment of aberrant behavior displayed by young children with disabilities. Journal of Developmental and Behavioral Pediatrics, 19(4), 260–266. https://doi.org/10.1097/00004703-199808000-00004.
Wacker, D. P., Lee, J. F., Padilla Dalmau, Y. C., Kopelman, T. G., Lindgren, S. D., Kuhle, J., et al. (2013a). Conducting functional communication training via telehealth to reduce the problem behavior of young children with autism. Journal of Developmental and Physical Disabilities, 25(1), 35–48. https://doi.org/10.1007/s10882-012-9314-0.
Wacker, D. P., Lee, J. F., Padilla Dalmau, Y. C., Kopelman, T. G., Lindgren, S. D., Kuhle, J., et al. (2013b). Conducting functional analyses of problem behavior via telehealth. Journal of Applied Behavior Analysis, 46(1), 31–46. https://doi.org/10.1002/jaba.29.
Wacker, D. P., Schieltz, K. M., Berg, W. K., Harding, J. W., Padilla Dalmau, Y. C., & Lee, J. F. (2017). The long-term effects of functional communication training conducted in young children’s home settings. Education and Treatment of Children, 40(1), 43–56. https://doi.org/10.1353/etc.2017.0003.
Wacker, D. P., Schieltz, K. M., & Romani, P. W. (2015). Brief experimental analyses of problem behavior in a pediatric outpatient clinic. In H. S. Roane, J. L. Ringdahl, & T. S. Falcomata (Eds.), Clinical and organizational applications of applied behavior analysis (pp. 151–177). Elsevier Inc. https://doi.org/10.1016/B978-0-12-420249-8.00007-1.
Wacker, D. P., Schieltz, K. M., Suess, A. N., Romani, P. W., Padilla Dalmau, Y. C., Kopelman, T. G., et al. (2016). Telehealth. In N. N. Singh (Ed.), Handbook of evidence-based practices in intellectual and developmental disabilities (pp. 585–613). Springer International Publishing.
Yi, Z., & Dixon, M. R. (2021). Developing and enhancing adherence to a telehealth ABA parent training curriculum for caregivers of children with autism. Behavior Analysis in Practice, 14(1), 58–74. https://doi.org/10.1007/s40617-020-00464-5.

SECTION 3

Professional development


CHAPTER 13

Diversity and multiculturalism

Brian Conners a,b
a Brian Conners, BCBA, LLC, Pompton Lakes, NJ, United States
b Seton Hall University, South Orange, NJ, United States

The discussions around diversity and multiculturalism in the field of applied behavior analysis (ABA) are important. With the ever-growing diversity of the US population, the cultural matching of service providers to the clients receiving behavior analytic services should be a vital area of focus. Researchers have estimated that by 2044, the United States will become so diverse that groups currently identified as minorities will constitute the majority of the population (Colby & Ortman, 2014). However, there is a cultural mismatch between this trajectory of the US population and the demographics of the behavior analytic professionals serving this increasingly diverse client base (Luczaj, Cacciaguerra-Decorato, & Conners, 2020). Recent data from the Behavior Analyst Certification Board (Behavior Analyst Certification Board [BACB], 2022a) show that the current demographics of behavior analysts are homogeneous in terms of gender and race, with most of the profession consisting of white/Caucasian females. This cultural mismatch can present challenges for behavior analytic professionals if efforts are not made to diversify the ABA field and to develop practitioners' skills in culturally responsive behavior analytic treatment. This matters because culture permeates the ABA service delivery model, from the initial interactions with a client through intake, assessment, and treatment planning. Therefore, behavior analytic professionals must develop and apply the culturally responsive skills necessary to produce favorable outcomes for clients from diverse backgrounds and to prevent clients and families from turning away from ABA entirely or terminating services early (Beaulieu, Addington, & Almeida, 2019; Betancourt, Green, Carrillo, & Owusu Ananeh-Firempong, 2016; Kodjo, 2009; Lo & Fung, 2003; Parette & Huer, 2002; Vandenberghe, 2008).
Applied Behavior Analysis Advanced Guidebook https://doi.org/10.1016/B978-0-323-99594-8.00013-1

Copyright © 2023 Elsevier Inc. All rights reserved.


Developing an understanding of the implications of culture within the context of ABA service delivery is therefore valuable to behavior analytic professionals. It also underscores the paramount need to diversify the field and to incorporate training on diversity, equity, and inclusion (DEI) into the graduate preparation of future generations of behavior analytic professionals, along with the continued professional growth and development of existing practitioners. Matters of diversity and multiculturalism are integral to the helping professions, including behavior analysis. Given the importance of DEI and its growing discussion in the field of ABA over recent years, this chapter provides a coherent review of current DEI efforts and their practice implications for the field and its professionals. Additionally, this chapter presents educational, training, and practice priorities for advancing diversity, cultural awareness, and sensitivity among professionals, supervisees, and students.

Current literature and clinical practice considerations

The following section discusses the intersection of DEI and clinical practice for behavior analytic professionals, focusing on the research literature that has addressed behavior analytic practice with culturally and linguistically diverse learners. DEI in other areas of behavior analytic service delivery is then examined, including assessment and caregiver training. Lastly, the ethical obligations of behavior analytic professionals are considered in the context of DEI.

Culturally and linguistically diverse learners

Dennison et al. (2019) identified several potential barriers to providing behavior analytic services to culturally and linguistically diverse families that are important for practitioners to keep in mind. The first barrier is the lack of research with culturally and linguistically diverse populations, which has implications for clinical practice because behavior analysts rely on evidence-based practices for treatment. The authors also note the risk of cultural mismatch when conducting research or providing clinical services, which can contribute to cultural misunderstandings that negatively affect treatment recommendations. Another barrier is that the lack of diversity among practitioners, combined with the shortage of multilingual practitioners, creates a risk of behavior analytic services being delivered in English even when that is not appropriate for the client. Finally, the attitudes and biases of a behavior analytic professional




regarding different cultures should be examined to overcome this barrier to culturally responsive treatment (Dennison et al., 2019). Other research on verbal behavior analytic treatment with learners from culturally and linguistically diverse backgrounds also highlighted important clinical implications (Brodhead, Duran, & Bloom, 2014). The authors emphasized that behavior analysts should recognize that clients may need to switch between different languages in different environments (e.g., home and school). If so, learners will need to learn to discriminate so that they contact reinforcement by using the language needed in that environmental context (Brodhead et al., 2014).

Assessment

The diverse backgrounds of clients must be kept in mind by behavior analysts throughout the assessment and treatment process. Over the years, several publications have provided guidance on the importance of culture in the assessment process. Tanaka-Matsumi, Seiden, and Lam (1996) discuss how conducting a culturally informed functional assessment may not only enhance the accuracy of the assessment but also lend greater credibility to the ABA therapy process, which the authors argue could decrease attrition due to clients leaving ABA therapy. Salend and Taylor (2002) note that culture is often still overlooked as a factor when conducting functional assessments, as is the impact of client diversity on behavior and the assessment process. Their guidelines for more culturally sensitive functional assessment procedures include involving a diverse multidisciplinary team in the assessment process, identifying and using culturally sensitive ways of selecting and defining target behaviors, and developing culturally responsive behavior intervention plans and evaluating their impact on an individual's cultural perspectives (Salend & Taylor, 2002). More recently, Addington (2019) developed a Culture Interview Checklist that behavior analysts can use when working with families to better understand a client's culture during the initial phases of assessment and treatment planning. Additionally, language must be considered when conducting a functional analysis. Rispoli et al. (2011) found that a learner exhibited higher rates of problem behavior when a functional analysis was conducted in English rather than in the learner's primary language of Spanish. This study demonstrates the importance of taking a learner's native language into consideration, as it can influence the assessment and treatment planning process.


Families and caregiver training

Another primary clinical area of focus for behavior analytic professionals within the ABA service delivery model is parent or caregiver training. Working with families in the home environment can provide cultural insights for behavior analytic professionals. Researchers have noted that cultural differences exist in how family-professional relationships are viewed and that culture affects and shapes our behavior overall, both of which are relevant when providing parent or caregiver training (Rodriguez & Williams, 2020; Wood & Eagly, 2002). Therefore, the ability of behavior analytic professionals to explore and understand the various cultural contingencies operating within families of diverse backgrounds is important. Notably, their ability to identify socially significant behaviors and design culturally responsive behavior analytic treatments, including parent or caregiver training programs, can not only improve quality of care but also establish a strong relationship between the family and the behavior analytic professional (Aguilar & Clay, 2020). This can also be achieved by ensuring that behavior analytic professionals identify and include cultural preferences when developing treatment protocols (Fong, Catagnus, Brodhead, Quigley, & Field, 2016). Furthermore, families with diverse cultural backgrounds will find caregiver interventions more socially valid when the behavior analytic professional makes the effort to incorporate the family's routines, values, culture, and language (Buzhardt, Rusinko, Heitzman-Powell, Trevino-Maack, & McGrath, 2016; Kuhn et al., 2019; Moes & Frea, 2002; Rodriguez, 2018; Rodriguez & Williams, 2020). Additionally, when working with families it is important to gather culturally relevant information that can inform the treatment process from the onset of clinical services, allowing cultural accommodations to be made as part of the treatment planning process.
For example, the Culturally Adapted Response Evaluation Survey (CARES) is a structured interview tool that can obtain culturally relevant information about clients and their families (Aguilar & Clay, 2020; Garcia, 2018). Specifically, CARES can be used to develop and evaluate the implementation of functional communication training with clients through a parent or caregiver training model (Aguilar & Clay, 2020; Garcia, 2018; Tiger, Hanley, & Bruzek, 2008). While more research is needed on the impact of culture on parent or caregiver training, these studies provide an initial foundation for understanding how behavior analytic professionals can integrate culturally responsive practices into their work with families.




Ethics and DEI

In the most recent Ethics Code for Behavior Analysts (BACB, 2020), revisions incorporated different aspects of DEI in both clinical practice and supervision. Standard 1.07: Cultural Responsiveness and Diversity addresses the importance of providing culturally responsive behavior analytic treatment and supervision by participating in professional development and engaging in self-reflective practices to understand personal biases (BACB, 2020). Standard 1.10: Awareness of Personal Biases and Challenges further discusses the ethical obligation of ABA professionals not only to identify their biases but also to evaluate whether those biases affect their professional work and to document the steps taken to address them (BACB, 2020). These standards are imperative for ABA practitioners because, as discussed earlier in this chapter, many aspects of ABA service delivery, from assessment to intervention, can be negatively affected by a practitioner's bias. Another code standard relevant to DEI and the clinical work of ABA practitioners is Standard 1.12: Giving and Receiving Gifts. This standard recognizes gifts given as expressions of gratitude; in some cultures, refusing a gift may be viewed as an insult (BACB, 2020). Therefore, permitting an ABA professional to accept a small gift of low monetary value when working with families is a positive step in helping practitioners maintain culturally appropriate relationships with their clients and families. Next, within the context of ABA supervision, Standard 4.07: Incorporating and Addressing Diversity makes supervisors responsible for including DEI topics in both training and the supervisory process (BACB, 2020). This is valuable for training future professionals to become culturally responsive practitioners.
However, it may be challenging for some supervisors to discuss and train on DEI topics when they themselves have not received such training. Research has shown that most behavior analysts report not having any formalized training on multiculturalism and diversity issues (Beaulieu et al., 2019; Conners, Johnson, Duarte, Murriky, & Marks, 2019). Finally, Standard 1.08: Non-Discrimination encompasses both the clinical and supervisory practices of behavior analytic professionals (BACB, 2020). This standard states that behavior analytic professionals cannot discriminate against the individuals they work with, whether a client, family member, colleague, or supervisee (BACB, 2020). Here, self-assessment may be valuable for behavior analytic professionals in evaluating whether they are inclusive and equitable in their professional practice.


Recommendations for service delivery and research

The following section covers areas where DEI can be incorporated into the field of behavior analysis, not only in clinical service delivery but also in the preparation of professionals through education, training, and supervision. More specifically, it explores how agencies and organizations providing ABA services can incorporate DEI efforts into their organizations. It then examines DEI components within continuing education (CE), graduate training and supervision, and leadership development and mentorship for behavior analytic professionals. Lastly, a proposed agenda for furthering research on DEI in the areas of service delivery, graduate training and preparation, and supervision is presented.

ABA agencies and organizations

Within ABA agencies, centers, clinics, and other organizations, there is an opportunity to examine and advance efforts around DEI. For example, ABA agencies can develop strategic plans with actionable steps to foster an environment that is inclusive, equitable, and just. Additionally, leadership within these organizations can examine the ways in which they recruit, hire, train, and promote diverse professionals in their company. A useful way for ABA organizations to take a deeper look at DEI would be to conduct a self-study or engage in an accreditation process. For instance, the Behavioral Health Center of Excellence (BHCOE) provides accreditation to ABA agencies, centers, clinics, and other organizations providing ABA treatment services. The BHCOE's 2021 full accreditation standards and 2022 training site accreditation standards both include DEI standards as part of the accreditation process (Behavioral Health Center of Excellence [BHCOE], 2022). These standards cover areas such as fair hiring practices, recruitment and retention strategies, having a diversity statement, providing cultural humility training, and more (BHCOE, 2022). These standards are just a first step for ABA agencies and organizations in addressing DEI topics. The process is not merely a box to check to obtain accreditation, but rather an initial step toward furthering organizational understanding of these topics and developing goals and priorities to actively create an inclusive environment and engage in best practices in culturally responsive behavior analytic treatment.




Continuing education for professionals

In its March 2022 newsletter, the BACB (2022b) announced that board certified behavior analysts will be required to earn two continuing education units on DEI starting January 1, 2026. This is a positive step in the right direction for professionals to develop skills in DEI work and to further the repertoires of professionals who may have DEI experience from their graduate training. Notably, the ethics and supervision requirements are each four continuing education units, while DEI is only two. It would be advisable to increase the DEI requirement to four units to signal the same level of importance that ethics and supervision hold in the field.

University training and preparation

Presently, few graduate training programs in behavior analysis incorporate topics of DEI and culturally responsive practices into their curricula (Conners et al., 2019; Najdowski, Gharapetian, & Jewett, 2021). In contrast to fields such as psychology, counseling, and education, behavior analysis does not require multiculturalism and diversity training as part of its accreditation standards for graduate programs (Ortiz, Joseph, & Deshais, 2022). Fong and Tanaka (2013) proposed training standards for the field, focusing on ethics and values, self-awareness, cross-cultural application, a diverse workforce, language diversity, professional education, and referrals. The proposal offered a framework for professionals in ABA at a time when no DEI training standards existed. The need to include DEI topics in graduate programs has also been stressed by various authors in behavior analysis and in related disciplines such as counseling and psychology (Carey & Marques, 2007; Diaz-Lazaro & Cohen, 2001; Fong et al., 2016; Westefeld & Rasmussen, 2013). Going forward, DEI must be included in courses related to behavior assessment and intervention, organizational behavior management, and ethics in behavior analysis (BACB, 2022b). This change is significant for ensuring that future generations of behavior analytic professionals have foundational knowledge of the impact of diversity on behavior analytic practice. This requirement notwithstanding, there is little guidance for university programs on which DEI topics should or must be covered in their courses. What would help university programs is guidance on minimum training standards related to DEI, as specified in the BACB Task List and presented in Table 1.

328

Applied behavior analysis advanced guidebook

Table 1  Proposed DEI standards for higher education programs in behavior analysis.

1. Students receive training in culturally responsive assessment practices.
2. Students gain experience in working with diverse populations.
3. The ABA curriculum includes DEI topics throughout the training program (e.g., coursework, practicum, supervision).
4. DEI training is provided to all faculty, staff, and students in the ABA graduate program at minimum once per semester.
5. The university ABA program can demonstrate efforts to recruit diverse student populations, with annual goals for increasing enrollment of a diverse student body.
6. The university ABA program can demonstrate efforts to recruit diverse faculty to join the ABA program.
7. The ABA program has a retention plan describing how the program is actively working toward increasing retention of both students and faculty from diverse backgrounds.
8. The ABA program's teaching materials (e.g., videos, photos, readings) represent diverse populations.
9. Marketing and promotional materials used by the ABA program are representative of diverse populations.
10. The ABA program can demonstrate that students and faculty engage in self-assessment around DEI topics.
11. The ABA program has an established feedback mechanism for faculty and students to share DEI issues with program administrators.
12. The ABA program makes demographic data publicly available for people interested in joining the program as either a student or faculty member.
13. Students engage in culturally responsive supervision practices.
14. The ABA program has a formal program evaluation process in place to evaluate DEI efforts.
15. The ABA program has an ongoing strategic plan to further its DEI initiatives in the field of behavior analysis.
16. The ABA program has an active DEI committee or task force that meets at minimum four times per year to carry out the DEI initiatives identified in the strategic plan.
17. Ongoing professional development on DEI topics is provided by the program for faculty and students.
18. There is a mentoring program for students, particularly those from diverse backgrounds, to support them in their educational and professional journeys.
19. A mentoring program is provided for junior faculty in the ABA program, particularly diverse faculty members.
20. The program has at minimum two full-time ABA faculty members who identify as being from diverse backgrounds.



Diversity and multiculturalism

329

Beyond these proposed training standards, there are actions that university programs can implement independently. Some universities have published guidelines that address DEI within their ABA programs (Hilton et al., 2021). Initiatives have included forming a departmental DEI task force, conducting a needs assessment with students, infusing culturally responsive course content, changing discussion forums at the beginning of semesters so faculty and students can share preferred names and pronouns, and updating courses to include more DEI readings and projects (Hilton et al., 2021). Another actionable area is a deep examination of the curriculum within ABA programs to identify ways to integrate DEI topics and training. Fong, Ficklin, and Lee (2017) first identified a growing need to develop culture- and diversity-related curricula and training opportunities in the field of ABA. Curriculum modifications that include multicultural information in course materials, along with multimodal instructional and assessment approaches that further support diverse student populations, are often cited as valuable to the growth of professionals (Maringe & Sing, 2014; Pincas, 2001). Ortiz et al. (2022) conducted a pilot investigation examining the effects of providing ABA faculty members with a supplemental diversity/culturally responsive service delivery (CRSD) curriculum and its impact on the presence of diversity/CRSD content in ABA course syllabi. The findings of this study suggested that developing and delivering a supplemental curriculum that provides tailored course objectives and resources could increase diversity/CRSD content in graduate ABA coursework. When examining how to integrate DEI content into the ABA curriculum of graduate programs in higher education, there are several areas programs should consider.
First, programs need to understand the difference between group-based (i.e., focused on one demographic group) and inclusion-based (i.e., incorporating multiple diverse populations) diversity training. Research has found that participants often perceive training curricula with an inclusion-based focus more positively, and are less resistant to diversity training concepts, than in programs focused solely on one specific demographic dimension such as race or gender (Bezrukova, Jehn, & Spell, 2012). Furthermore, when designing DEI curricula and coursework, ABA programs should decide whether training will be awareness-based, behavior-based, or mixed. Awareness-based diversity training focuses predominantly on students sharing their own experiences in working with various diverse populations and developing self-awareness of diversity issues, teaching about different cultural groups, and recognizing one's cultural assumptions, values, and biases (Baba & Herbert, 2005; Bezrukova et al., 2012; Probst, 2003; Roberson, Kulik, & Pepper, 2001; Robinson & Bradley, 1997). Behavior-based diversity training emphasizes building students' skill repertoires while monitoring and changing their actions toward working with individuals of diverse groups so they can effectively manage relationships with diverse client populations (Armour, Bain, & Rubio, 2004; Bezrukova et al., 2012). Studies reveal that behavior-based training is rarely used in isolation and is often combined with awareness-based training (Bezrukova et al., 2012). Therefore, ABA graduate programs may wish to develop diversity training curricula that reflect a mixed training approach. Lastly, it is recommended that ABA programs select the modalities or methods they will use for DEI training. Research identifies two main categories of training instruction (Bezrukova et al., 2012). Single-method training is either lecture-based only or includes video materials (Bezrukova et al., 2012; Chrobot-Mason, 2004; Kulik, Perry, & Bourhis, 2000; Lee, Anderson, & Hill, 2006). By comparison, multi-method DEI training incorporates lecture, role-play, experiential and didactic learning, discussion, videos, panel presentations, home visits, and simulated client sessions based on case analysis (Bezrukova et al., 2012; Juarez et al., 2006). Students appear to learn better with multi-method DEI training and judge it more favorably than single-method training (Bezrukova et al., 2012; Kolb & Kolb, 2005). Therefore, ABA programs should use multi-method instructional strategies for DEI content whenever possible. To assist ABA graduate programs in understanding their current efforts to include DEI content, a checklist is provided in Table 2.
While not exhaustive, the checklist can be used as a form of self-study to help ABA programs plan further program development.

Supervisory practices

Several published articles have discussed the importance of culturally responsive supervision practices in behavior analysis. Many of the suggestions have focused on self-assessment for supervisors and supervisees (Gatzunis, Edwards, Rodriguez-Diaz, Conners, & Weiss, 2022; LeBlanc, Sellers, & Ala'i, 2020; Leland & Stockwell, 2019). Recent examples include self-assessment tools and activities structured for supervisors to use in examining their supervisory practices as they relate to cultural responsiveness (LeBlanc et al., 2020). Another self-assessment instrument




Table 2  DEI checklist for higher education programs in behavior analysis. Each self-assessment item is answered Yes or No.

1. Does our ABA program have a current statement on diversity, equity, and inclusion on the program website, in marketing materials, in the student program handbook, in course syllabi, etc.?
2. Does our ABA program include a land use statement in our course syllabi?
3. Does our ABA program infuse readings on DEI topics into all courses?
4. Do our ABA program's marketing materials represent diverse populations?
5. Is there a DEI task force or committee in place for the ABA program composed of a diverse representation of students, faculty, and administrators?
6. Are there opportunities for students to gain experience in working with clients from diverse backgrounds?
7. Are there ways for students and faculty to provide feedback on DEI efforts within the ABA program?
8. Is there a way for students to report discrimination or bias that they may experience while in the ABA program?
9. Does the ABA program have diverse representation among full- and part-time faculty?
10. Are there active research agendas examining topics of diversity in the ABA program?
11. Is there a formal mentoring program for diverse students to become involved in research and leadership positions?
12. Does the ABA program offer a mentorship program for faculty members to promote the retention and promotion of diverse faculty?
13. Does the ABA program have a strategic plan focused on DEI topics?
14. Does the ABA program have a marketing plan for recruiting both diverse student populations and faculty members?
15. Does the ABA program offer financial support for students to attend and present at state and national conferences, helping to eliminate financial barriers for students from lower socioeconomic backgrounds?
16. Does the ABA program provide training opportunities translated into the native languages of culturally and linguistically diverse students?
17. Does the ABA program provide assistance with potential access barriers (e.g., laptops, web cameras) so diverse students can participate in the coursework and training program?
18. Are there opportunities in the ABA program for students to do international work in behavior analysis or engage in cultural immersion programs?
19. Does the ABA program provide financial support and resources for diverse faculty to develop research laboratories?
20. Does the ABA program provide professional development workshops and other opportunities on DEI topics for students, faculty, staff, and administrators?

for supervisors was developed by Leland and Stockwell (2019), which focuses on self-examination around creating a safe and supportive environment for transgender and gender-nonconforming (TGNC) individuals. This tool emphasizes ethics, the development of inclusive environments, and behavior analysts' support of TGNC clients, students, supervisees, and colleagues (Leland & Stockwell, 2019). Lastly, Gatzunis et al. (2022) proposed the Culturally Responsive Supervision Self-Assessment Tool, designed to help supervisors and supervisees understand how their cultural and racial backgrounds affect aspects of the supervision process. The tool also assists the supervisor and supervisee in understanding how the cultural and racial backgrounds of individuals receiving ABA services can influence clinical care. Finally, the tool allows supervisors to explore their own behavior at the start of and throughout the supervisory relationship to better understand how their behaviors around culture can affect the supervisor-supervisee relationship (Gatzunis et al., 2022). Self-assessment is an important step in becoming a culturally responsive practitioner and supervisor, but it is only the beginning. Within the context of culturally responsive supervision practices, further development might concentrate on a standardized supervision curriculum that integrates DEI throughout supervisory practice. The curriculum could provide opportunities for supervisors and supervisees to engage in skill development jointly as part of the supervisor-supervisee relationship. Another option would be building a database of supervisors and supervisees in which supervisees could be matched with a supervisor and site based upon different demographics and criteria. A final suggestion would be for the BACB to consider updating the Supervision Training Curriculum Outline to reflect DEI requirements as part of the training and development of supervisors (BACB, 2018).

Leadership development

It is important for the field of behavior analysis to support, guide, and mentor professionals from diverse backgrounds in becoming leaders in ABA organizations. Research reveals that leadership within behavior analysis organizations is dominated by White/Caucasian males (Cirincione-Ulezi, 2020). Current leadership within ABA organizations, whether locally within ABA companies or among national and international ABA professional organizations, must promote, create, and maintain prosocial and inclusive environments to train, support, and mentor diverse professionals in leadership roles. Cirincione-Ulezi (2020) discussed some of the possible barriers, particularly for Black women, to obtaining leadership positions within behavior analysis. Potential barriers include, but are not limited to, lack of diversity in the field of behavior analysis, stereotypes, and insufficient access to mentors (Cirincione-Ulezi, 2020). Therefore, the field of behavior analysis will want to work proactively on diversifying leadership by eliminating these barriers and encouraging diverse professionals to pursue leadership positions at the local (e.g., administration within an ABA company), state (e.g., executive boards of state ABA associations), national (e.g., BACB and Association of Professional Behavior Analysts), and international levels (e.g., Association for Behavior Analysis International). The field can only benefit from strengthening diverse voices at various leadership levels in ABA, thus improving and reinforcing an inclusive narrative that normalizes perceptions of the high competency and efficacy of diverse groups (Cirincione-Ulezi, 2020; Makino & Oliver, 2019).

Mentoring

Nosik and Grow (2015) found a strong emphasis on mentorship when interviewing prominent women in behavior analysis. Mentorship was valued during graduate training and throughout professional careers. Mentoring for graduate students entering the field and for existing professionals at different stages of their careers in behavior analysis raises the question: how effectively are we providing mentoring in the field, and what can we do better? Many unique opportunities can be created in the field for mentoring behavior analytic professionals, which can assist in the recruitment and retention of future generations of diverse practitioners. One avenue is the ABA professional conference circuit; other professional associations already offer mentoring opportunities to conference attendees. For example, the National Association of School Psychologists (NASP) has a formal mentoring program in which conference participants are matched with senior professionals whom they can meet at conferences and then continue mentorship contact with for a minimum of one month afterward (National Association of School Psychologists [NASP], 2021). ABA professional conferences could adopt a similar model in which senior behavior analysts from various backgrounds and locations around the world are paired with diverse graduate students or early career behavior analysts for mentoring relationships. Other mentoring initiatives that have promoted DEI within other disciplines include conscious efforts within peer-reviewed journals.
For example, the School Psychology Review leadership team developed the following initiatives to increase DEI efforts in school psychology: …(a) establishing individual and collective commitments to advocating for and advancing DEI as the foundation of our scholarship; (b) diversifying the journal leadership; (c) diversifying the editorial advisory board; (d) preparing future diverse journal leadership through mentored editorial fellowship programs, especially focused on early research career individuals; (e) mentoring future colleagues by establishing a student editorial board with members from diverse backgrounds; (f) focusing on special topics relevant to diverse and minoritized children, youth, families, and school communities; (g) making available professional development opportunities and resources; and (h) establishing a journal action plan focused on advancing DEI. (Jimerson et al., 2021, p. 1).

Similar efforts can be made among professional journals in behavior analysis, such as Behavior Analysis in Practice, Journal of Applied Behavior Analysis, Behavior Analysis: Research and Practice, and related journals. Lastly, mentorship should also occur within higher education programs in behavior analysis, with both students and faculty. To recruit and retain diverse faculty, behavior analysis departments should have a formal mentoring program for junior faculty to assist them in getting acclimated to the culture and climate of the department and university, coach them on teaching, encourage service within the university and profession, and collaborate on setting a research agenda and furthering research initiatives that lead to scholarly production and dissemination of behavior analysis. Furthermore, having more diverse faculty in behavior analysis programs will increase the likelihood that more diverse student populations are attracted to those university ABA programs. In addition to focusing on the recruitment, retention, and mentorship of diverse faculty, a focus on mentoring both undergraduate and graduate students who want to pursue careers in behavior analysis would be beneficial. At the undergraduate level, a mentoring program could be designed for students in related disciplines, such as education and psychology, who may wish to pursue graduate study in behavior analysis. For instance, a virtual mentoring program developed for undergraduate psychology students was successful in helping mentees overcome potential barriers to entry into graduate programs (Silverstein, Miller, Rivet, & Nuhu, 2022). A mentoring program can also be developed to support diverse graduate students so they are not only trained as future behavior analytic professionals but also engaged in research and the profession by presenting at conferences and publishing in peer-reviewed journals.

Research

In recent years there has been more focus on research and publication on topics of diversity and multiculturalism in behavior analysis. While research in this area has continued to grow, there are always opportunities for further improvement. As previously mentioned, studies have found that other than age and gender, most demographics are not reported in behavior analysis (Jones, St. Peter, & Ruckle, 2020). Demographic reporting should be a requirement of the peer-review process. Fontenot, Uwayo, Avendano, and Ross (2019) examined the inclusion of economically disadvantaged children in behavior analytic research. Results demonstrated that about 5% of the studies reviewed from their sample fell within the time periods 1968–1977 and 2002–2017. These findings were suggested to reflect national educational policy trends during those times, related to the Education for All Handicapped Children Act, the Elementary and Secondary Education Act, and No Child Left Behind, which influenced the number of publications on economically disadvantaged children during these timeframes. Furthermore, the researchers found that in most behavior analytic publications the socioeconomic status of participants was not included in the demographic description of study participants. Therefore, it would be advantageous for researchers to include socioeconomic status when describing participants in their studies. Of note, many of the reviewed articles indicated that behavioral interventions improved the academic and social outcomes of economically disadvantaged children. Additionally, Fontenot et al. (2019) highlighted areas for investigation by behavior analytic professionals in practice and research. One recommendation is to examine the effects of behavior analytic interventions for economically disadvantaged children within community centers and rural areas. There are also opportunities for further research on clinical interventions for English Language Learners (ELL), particularly those with disabilities. Lastly, more studies need to be conducted with economically disadvantaged populations to better understand the need for behavior analysis with this population; the limited number of such studies could reflect a larger unmet need for behavior analytic services among this population (Fontenot et al., 2019; National Council on Disability, 2018). Another good research practice is reporting the demographics of the authors so journals can maintain metrics on the diversity of professionals publishing in the field. This information can help inform mentoring of scholars from diverse backgrounds to publish in behavior analytic journals. The information can either be published in the article, if the authors wish to disclose details publicly, or kept private by the journal. The publishing of author demographic data in a research article is exemplified by Ortiz et al. (2022) in Behavior Analysis in Practice. If journals collect these data privately, they could publish de-identified information annually as part of their journal metrics.
The data could then also assist the journal in strategic planning to create more opportunities for diverse scholars to publish. Additionally, behavior analysis can pursue a robust research agenda to further examine DEI topics and their impact on ABA service delivery, supervision, and training. In terms of service delivery, behavior analysts should conduct research examining behavior analytic treatment procedures with diverse populations to explore cultural variables that may affect assessment and intervention. For example, researchers could evaluate the effects of culture and language on preference assessments, skill acquisition, behavior assessment, treatment planning, stimulus equivalence, and more. In terms of furthering research on supervision and supervisory practices, more research on cross-cultural supervision from a behavior analytic perspective would be useful, as most research in this area to date has come from the viewpoints of related fields (e.g., education, social work, counseling, psychology).

Chapter summary

With the growing diversity of the clients that ABA professionals serve, there is growing urgency for the field of behavior analysis to address DEI. Behavior analysis is continually evolving, and its DEI efforts will remain an area of focus and growth for years to come. This chapter provided a synthesis of the current state of DEI in behavior analysis within the areas of clinical service delivery, supervision, and training. Note that publications on this topic are appearing rapidly; depending on when behavior analytic professionals read this chapter, new articles will have been published addressing some of the topics, suggestions, and concerns covered here. The recommendations provided in this chapter are observations of ways the field can continue to evolve for the betterment of the profession. By making conscious efforts to address DEI issues, the field of behavior analysis can only benefit the clients we aim to serve through more culturally responsive behavior analytic services.

References

Addington, J. H. (2019). An evaluation of a cultural interview checklist for behaviorally oriented clinicians (Master's thesis). Florida Institute of Technology Library Repository. https://repository.lib.fit.edu/bitstream/handle/11141/2951/ADDINGTON-THESIS-2019.pdf?sequence=1&isAllowed=y
Aguilar, J., & Clay, C. J. (2020). Cultural accommodations in caregiver training. In B. M. Conners & S. T. Capell (Eds.), Multiculturalism and diversity in applied behavior analysis: Bridging theory and application (1st ed., pp. 137–154). Routledge.
Armour, M. P., Bain, B., & Rubio, R. (2004). An evaluation study of diversity training for field instructors: A collaborative approach for enhancing cultural competence. Journal of Social Work Education, 40(1), 27–38. https://doi.org/10.1080/10437797.2004.10778477
Baba, Y., & Herbert, C. (2005). The effects of participation in a cultural awareness program on jail inmates. Journal of Ethnic and Cultural Diversity in Social Work, 13(3), 91–113. https://doi.org/10.1300/J051v13n03_05
Beaulieu, L., Addington, J., & Almeida, D. (2019). Behavior analysts' training and practices regarding cultural diversity: The case for culturally competent care. Behavior Analysis in Practice, 12, 557–575. https://doi.org/10.1007/s40617-018-00313-6
Behavior Analyst Certification Board. (2018). Supervision training curriculum outline (2.0). Littleton, CO: Author.
Behavior Analyst Certification Board. (2020). Ethics code for behavior analysts. https://bacb.com/wp-content/ethics-code-for-behavior-analysts/


Behavior Analyst Certification Board. (2022a). BACB certificant data. Retrieved from https://www.bacb.com/bacb-certificant-data/
Behavior Analyst Certification Board. (2022b). BACB March 2022 newsletter. Retrieved from https://www.bacb.com/wp-content/uploads/2022/01/BACB_March2022_Newsletter-220330-4.pdf
Behavioral Health Center of Excellence. (2022). BHCOE accreditation standards. https://www.bhcoe.org/standards/
Betancourt, J. R., Green, A. R., Carrillo, J. E., & Ananeh-Firempong, O., II. (2016). Defining cultural competence: A practical framework for addressing racial/ethnic disparities in health and health care. Public Health Reports, 118, 293–302. https://doi.org/10.1016/S0033-3549(04)50253-4
Bezrukova, K., Jehn, K. A., & Spell, C. S. (2012). Reviewing diversity training: Where we have been and where we should go. The Academy of Management Learning and Education, 11(2), 207–227. https://doi.org/10.5465/amle.2008.0090
Brodhead, M. T., Duran, L., & Bloom, S. E. (2014). Cultural and linguistic diversity in recent verbal behavior research on individuals with disabilities: A review and implications for research and practice. The Analysis of Verbal Behavior, 30(1), 75–86. https://doi.org/10.1007/s40616-014-0009-8
Buzhardt, J., Rusinko, L., Heitzman-Powell, L., Trevino-Maack, S., & McGrath, A. (2016). Exploratory evaluation and initial adaptation of a parent training program for Hispanic families of children with autism. Family Process, 55(1), 107–122. https://doi.org/10.1111/famp.12146
Carey, D., & Marques, P. (2007). From expert to collaborator: Developing cultural competency in clinical supervision. The Clinical Supervisor, 26(1), 141–157.
Chrobot-Mason, D. (2004). Managing racial differences: The role of majority managers' ethnic identity development on minority employee perceptions of support. Group and Organization Management, 29(1), 5–31. https://doi.org/10.1177/1059601103252102
Cirincione-Ulezi, N. (2020). Black women and barriers to leadership in ABA. Behavior Analysis in Practice, 13, 719–724. https://doi.org/10.1007/s40617-020-00444-9
Colby, S. L., & Ortman, J. M. (2014). Projections of the size and composition of the U.S. population: 2014 to 2060 (Current Population Reports No. P25–1143). Washington, DC: U.S. Census Bureau.
Conners, B., Johnson, A., Duarte, J., Murriky, R., & Marks, K. (2019). Future directions of training and fieldwork in diversity issues in applied behavior analysis. Behavior Analysis in Practice, 12(4), 767–776. https://doi.org/10.1007/s40617-019-00349-2
Dennison, A., Lund, E. M., Brodhead, M. T., Mejia, L., Armenta, A., & Leal, J. (2019). Delivering home-supported applied behavior analysis therapies to culturally and linguistically diverse families. Behavior Analysis in Practice, 12, 887–898. https://doi.org/10.1007/s40617-019-00374-1
Diaz-Lazaro, C. M., & Cohen, B. B. (2001). Cross-cultural contact in counseling training. Journal of Multicultural Counseling and Development, 29(1), 41–56.
Fong, E. H., Catagnus, R. M., Brodhead, M. T., Quigley, S., & Field, S. (2016). Developing the cultural awareness skills of behavior analysts. Behavior Analysis in Practice, 9(1), 84–94.
Fong, E. H., Ficklin, S., & Lee, H. Y. (2017). Increasing cultural understanding and diversity in applied behavior analysis. Behavior Analysis: Research and Practice, 17(2), 103–113. https://doi.org/10.1037/bar0000076
Fong, E. H., & Tanaka, S. (2013). Multicultural alliance of behavior analysis. International Journal of Behavioral Consultation and Therapy, 8(2), 17–19.
Fontenot, B., Uwayo, M., Avendano, S. M., & Ross, D. (2019). A descriptive analysis of applied behavior analysis research with economically disadvantaged children. Behavior Analysis in Practice, 12, 782–794. https://doi.org/10.1007/s40617-019-00389-8
Garcia, A. R. (2018). A cultural adaptation of functional communication training. Paper presented at the Association for Behavior Analysis International 44th Annual Conference, San Diego, CA.




Gatzunis, K. S., Edwards, K. Y., Rodriguez-Diaz, A., Conners, B. M., & Weiss, M. J. (2022). Cultural responsiveness framework in BCBA supervision. Behavior Analysis in Practice. https://doi.org/10.1007/s40617-022-00688-7
Hilton, J., Syed, N., Weiss, M. J., Tereshko, L., Marya, V., et al. (2021). Initiatives to address diversity, equity, and inclusion within a higher education ABA department. Behavior and Social Issues, 30, 58–81. https://doi.org/10.1007/s42822-021-00082-y
Jimerson, S. R., Arora, P., Blake, J. J., Canivez, G. L., Espelage, D. L., et al. (2021). Advancing diversity, equity, and inclusion in school psychology: Be the change. School Psychology Review, 50(1), 1–7. https://doi.org/10.1080/2372966X.2021.1889938
Jones, S. H., St. Peter, C. C., & Ruckle, M. M. (2020). Reporting of demographic variables in the Journal of Applied Behavior Analysis. Journal of Applied Behavior Analysis, 53(3), 1304–1315. https://doi.org/10.1002/jaba.722
Juarez, J. A., Marvel, K., Brezinski, K. L., Glazner, C., Towbin, M. M., & Lawton, S. (2006). Bridging the gap: A curriculum to teach residents cultural humility. Family Medicine, 38(2), 97–102.
Kodjo, C. (2009). Cultural competence in clinician communication. Pediatrics in Review, 30(2), 57. https://doi.org/10.1542/pir.30-2-57
Kolb, A. Y., & Kolb, D. A. (2005). Learning styles and learning spaces: Enhancing experiential learning in higher education. The Academy of Management Learning and Education, 4(2), 193–212.
Kuhn, J. L., Vanegas, S. B., Saldago, R., Borjas, S. K., Magana, S., & DaWalt, L. S. (2019). The cultural adaptation of a transition program for Latino families of youth with autism spectrum disorder. Family Process. https://doi.org/10.1111/famp.12439
Kulik, C. T., Perry, E., & Bourhis, A. (2000). Ironic evaluation processes: Effects of thought suppression on evaluations of older job applicants. Journal of Organizational Behavior, 21(6), 689–711. https://doi.org/10.1002/1099-1379(200009)21:63.0.CO;2-W
LeBlanc, L. A., Sellers, T. P., & Ala’i, S. (2020). Building and sustaining meaningful and effective relationships as a supervisor and mentor. Sloan Publishing.
Lee, C. A., Anderson, M. A., & Hill, P. D. (2006). Cultural sensitivity for nurses: A pilot study. Journal of Continuing Education in Nursing, 37(3), 137–141.
Leland, W., & Stockwell, A. (2019). A self-assessment tool for cultivating affirming practices with transgender and gender-nonconforming (TGNC) clients, supervisees, students, and colleagues. Behavior Analysis in Practice, 12(4), 816–825. https://doi.org/10.1007/s40617-019-00375-0
Lo, H. T., & Fung, K. P. (2003). Culturally competent psychotherapy. The Canadian Journal of Psychiatry, 48(3), 161–170. https://doi.org/10.1177/070674370304800304
Luczaj, P., Cacciaguerra-Decorato, F., & Conners, B. M. (2020). Cultural incompetency in applied behavior analysis service delivery models. In B. M. Conners & S. T. Capell (Eds.), Multiculturalism and diversity in applied behavior analysis: Bridging theory and application (1st ed., pp. 155–165). Routledge.
Makino, K., & Oliver, C. (2019). Developing diverse leadership pipelines: A requirement for 21st century success. Organizational Development Review, 51(1), 4–10.
Maringe, F., & Sing, N. (2014). Teaching large classes in an increasingly internationalising higher education environment: Pedagogical, quality and equity issues. Higher Education, 67, 761–782. https://doi.org/10.1007/s10734-013-9710-0
Moes, D., & Frea, W. (2002). Contextualized behavioral support in early intervention for children with autism and their families. Journal of Autism and Developmental Disorders, 35(6), 519–533. https://doi.org/10.1023/A:1021298729297
Najdowski, A. C., Gharapetian, L., & Jewett, J. (2021). Toward the development of antiracist and multicultural graduate training programs in behavior analysis. Behavior Analysis in Practice. https://doi.org/10.1007/s40617-020-00504-0 (Advance online publication).
National Association of School Psychologists. (2021). NASP mentorship program. Retrieved from https://www.nasponline.org/mentorship.

340

Applied behavior analysis advanced guidebook

National Council on Disability. (2018). IDEA series: English learners and students from low-income families. Washington, DC: National Council on Disability. Retrieved from https://www.ncd.gov/sites/default/files/NCD_EnglishLanguageLearners_508.pdf.
Nosik, M. R., & Grow, L. L. (2015). Prominent women in behavior analysis: An introduction. The Behavior Analyst, 38, 225–227.
Ortiz, S. M., Joseph, M. A., & Deshais, M. A. (2022). Increasing diversity content in graduate coursework: A pilot investigation. Behavior Analysis in Practice. https://doi.org/10.1007/s40617-022-00714-8
Parette, P., & Huer, M. B. (2002). Working with Asian American families whose children have augmentative and alternative communication needs. Journal of Special Education Technology, 17(4), 5–13. https://doi.org/10.1177/016264340201700401
Pincas, A. (2001). Culture, cognition and communication in global education. Distance Education, 22(1), 30–51. https://doi.org/10.1080/0158791010220103
Probst, T. M. (2003). Changing attitudes over time: Assessing the effectiveness of a workplace diversity course. Teaching of Psychology, 30(3), 236–239. https://doi.org/10.1207/S15328023TOP3003_09
Rispoli, M., O’Reilly, M., Lang, R., Sigafoos, J., Mulloy, A., Aguilar, J., et al. (2011). Effects of language of implementation on functional analysis outcomes. Journal of Behavioral Education, 20(4), 224–232.
Roberson, L., Kulik, C. T., & Pepper, M. B. (2001). Designing effective diversity training: Influence of group composition and trainee experience. Journal of Organizational Behavior, 22(8), 871–885. https://doi.org/10.1002/job.117
Robinson, B., & Bradley, L. J. (1997). Multicultural training for undergraduates: Developing knowledge and awareness. Journal of Multicultural Counseling and Development, 25(4), 281–289. https://doi.org/10.1002/j.2161-1912.1997.tb00349.x
Rodriguez, A. (2018). A comparison of traditional and culturally sensitive parent training of functional communication training (Unpublished master’s thesis). Winter Park, FL: Rollins College. https://scholarship.rollins.edu/mabacs_thesis/7
Rodriguez, A., & Williams, A. M. (2020). Accounting for cultural differences to make parent training more socially significant. In B. M. Conners & S. T. Capell (Eds.), Multiculturalism and diversity in applied behavior analysis: Bridging theory and application (1st ed., pp. 155–165). Routledge.
Salend, S. J., & Taylor, L. S. (2002). Cultural perspectives: Missing pieces in the functional assessment process. Intervention in School and Clinic, 38(2), 104–112.
Silverstein, M. W., Miller, M., Rivet, J., & Nuhu, N. (2022). Program evaluation of a virtual mentoring program for BIPOC undergraduates in psychology. Scholarship of Teaching and Learning in Psychology. https://doi.org/10.1037/stl0000322 (Advance online publication).
Tanaka-Matsumi, J., Seiden, D. Y., & Lam, K. N. (1996). The culturally informed functional assessment (CIFA) interview: A strategy for cross-cultural behavioral practice. Cognitive and Behavioral Practice, 3(2), 215–233.
Tiger, J. H., Hanley, G. P., & Bruzek, J. (2008). Functional communication training: A review and practical guide. Behavior Analysis in Practice, 1(1), 16–23. https://doi.org/10.1007/BF03391716
Vandenberghe, L. (2008). Culture-sensitive functional analytic psychotherapy. The Behavior Analyst, 31(1), 67–79. https://doi.org/10.1007/BF03392162
Westefeld, J. S., & Rasmussen, W. (2013). Supervision: The importance and interaction of competency benchmarks and multiculturalism. The Counseling Psychologist, 41(1), 110–123.
Wood, W., & Eagly, A. H. (2002). A cross-cultural analysis of the behavior of women and men: Implications for the origins of sex differences. Psychological Bulletin, 128(5), 699–727. https://doi.org/10.1037/0033-2909.128.5.699

CHAPTER 14

Ethics and ethical problem solving

Matthew T. Brodhead and Noel E. Oteto

Department of Counseling, Educational Psychology, and Special Education, Michigan State University, East Lansing, MI, United States

Ethics refers to rules and guidelines that form from moral codes (Boone, 2017). For example, the Ethics Code for Behavior Analysts (BACB, 2020) (hereafter referred to as the BACB Code) represents rules the field of applied behavior analysis (ABA) has formulated based on culturally agreed-upon morals. Of course, the BACB Code includes ethical principles and standards that espouse ubiquitous moral values, such as being truthful and avoiding harm (see Rachels, 2014). However, because the BACB Code is meant to govern behavior in a certain area of practice, its ethical principles and standards can be considered extensions of the morals of a specific cultural group, namely the discipline of ABA. Brodhead, Quigley, and Cox (2018) further put the definition of ethics into the context of ABA. They define ethics as “the emission of behavior in compliance/coordination with the verbally stated rules and behavior-analytic cultural practices guiding practitioner behavior that are espoused by the BACB Code” (p. 167). Here, ethical behavior not only involves universal ethical values such as being truthful (BACB Code Standard 1.01) and avoiding harm (BACB Code Standard 2.01), but it also conveys principles and standards that are specific to the profession itself, such as collecting and using data (BACB Code Standard 2.17), third-party contracts (BACB Code Standard 3.07), and facilitating continuity of services (BACB Code Standard 3.14). Indeed, the latter ethical standards represent cultural values of the field of ABA, and they also pertain to behaviors that occur daily, if not hourly or by the minute, that make up what may be referred to as practicing applied behavior analysis. Brodhead, Cox, and Quigley (2018) emphasize that “Analyzing ethical behavior is not like fine dinnerware, wherein you only pull out the fine China for a holiday dinner” (p. 14).
Applied Behavior Analysis Advanced Guidebook. https://doi.org/10.1016/B978-0-323-99594-8.00014-3
Copyright © 2023 Elsevier Inc. All rights reserved.

However, translating ethical principles and standards in the BACB Code into processes that are contextually relevant to practice is much easier said than done. Ethics codes, alone, are not sufficient to support ethical behavior. Therefore, professionals must be equipped with strategies that translate the BACB Code to practice, whether that practice is in a community-, home-, or school-based setting. Fortunately, behavioral systems analysis (BSA) provides an analytical framework that allows professionals to convert ethical principles or standards into customized organizational processes that ensure both consumer protection and high quality of care (Brodhead, 2020; Brodhead et al., 2018; Brodhead & Higbee, 2012; Thomas & Brodhead, 2021). A BSA allows the user to match ethical principles or standards to the context of organizational need (e.g., the need to protect consumer confidentiality) and to develop organizational systems (e.g., systems of training and supervision) that support employee behavior by telling them what to do (e.g., use HIPAA-compliant technology to communicate with families) instead of what not to do (e.g., do not use unsecure forms of communication). The purpose of this chapter is to expand upon previously published work and describe how behavioral systems may be used to support professional and ethical obligations to protect consumers and provide high-quality care. We use the terms providers and consumers to refer to the agents responsible for delivering interventions and the people who receive those interventions, respectively. Because our collective backgrounds are in ABA and behavioral interventions, our examples are influenced by those experiences. However, we encourage readers from disciplines outside of ABA to consider our recommendations and ideas in the context in which they provide services, so they can employ the concepts and ideas we describe in a way that best meets their needs and those of their consumers.
We first define behavioral systems and behavioral systems analysis, then provide examples of contexts where behavioral systems may be used to fulfill ethical obligations to protect consumers and provide high-quality care. Specifically, we present a systems approach to incorporating consumer choice into treatment decisions, monitoring adverse events in behavioral treatment, and developing and using consumer feedback loops to improve practice and standards. Where appropriate, we provide examples of behavioral systems in visual or table format to further illustrate the concept of a behavioral system that supports ethical behavior in practice.

Behavioral systems analysis

Before providing examples of behavioral systems in practice, we explain key terms and concepts, plus additional rationale for a BSA approach to supporting ethical behavior in ABA treatment. The reader may consult Diener, McGee, and Miguel (2009), Sigurdsson and McGee (2015), and Malott and Garcia (1987) for more in-depth descriptions of behavioral systems and BSA. Although a thorough explanation of the six steps of BSA is beyond the scope of this chapter, we encourage readers to review Brodhead (2020) for a guided practice workbook on conducting a BSA.

Behavioral systems and behavioral systems analysis

Malott and Garcia (1987) define a system as “an organized, integrated, unified set of components, accomplishing a particular set of ultimate goals or objectives” (p. 127). Systems that involve human behavior are considered behavioral systems, which are common in ABA whether or not they have been formally referred to as such. For example, functional assessment of problem behavior is a behavioral system commonly used in ABA (Brodhead, 2019). Functional assessment involves the deliberate organization and integration of steps of behavior designed to accomplish a goal: to identify the function, or cause, of behavior. Functional assessment involves hypothesis generation and testing, systematic data collection, and data analysis to identify why behavior happens. High procedural fidelity is also key to functional assessment achieving its goal. Discrete trial instruction, data collection procedures, assessment protocols, and naturalistic instruction are other examples of behavioral systems within ABA (Brodhead, 2019). These examples illustrate that behavioral systems are not abstract or distant concepts, at least in the context of conventional ABA clinical approaches and protocols. Unfortunately, the field of ABA has done far less to describe systems that support ethical behavior than it has with systems of assessment and treatment. Instead, ABA values such as incorporating consumer choice into treatment decisions, monitoring adverse events in behavioral treatment, and using collaborative consumer feedback loops to improve practice and standards can be as vague as they are well-intentioned and important. Though these value statements are certainly rooted in agreed-upon moral values, they still require interpretation and subsequent effort and monitoring to be effectively executed in a desired context.
For example, incorporating consumer choice into treatment decisions aligns with the BACB Code’s Core Principle #2 (Treat Others With Compassion, Dignity, and Respect); however, what incorporating consumer choice into treatment looks like will likely vary across settings and mean different things to different people. Without explicit guidelines about how to incorporate consumer choice into treatment (i.e., what to do), professionals are more likely to engage in behavior we would label as incongruent with Core Principle #2, or to fail to engage in the behavior altogether. As a result, the risk of harm to consumers increases, and the quality of care provided to those consumers will likely decline. We propose that ethical behavior should be treated no differently than other behavior that supports consumers and employees. Expecting employees to understand “how to be ethical” without training and support is akin to the ill-advised train-and-hope strategy that Stokes and Baer (1977) argued against. A benefit of BSA, or taking a systems approach, is that BSA provides an analytical framework for conceptualizing, designing, implementing, evaluating, and revising systems to support ethical behavior in the context in which ethical behavior should occur. Scholars have described multiple frameworks for conducting a BSA (see Sigurdsson & McGee, 2015). In this chapter, we briefly introduce what is commonly referred to as the ASDIER (pronounced “as-deer”) process described by Malott (1974). The ASDIER process contains six steps: Analyze the natural contingencies; Specify the performance objectives; Design the system; Implement the system; Evaluate the system; and Revise the system. The six steps of BSA are depicted in Fig. 1. Notice that BSA occurs in a perpetual state of motion. Brodhead (2020) notes that “Just as there is no such thing as a perfect machine, there is no such thing as a perfect behavioral system” (p. 25). Here, continuous quality improvement is a value inherent in BSA. Just as we would always commit to ongoing monitoring and program adjustment for interventions provided to our consumers, BSA commits to ongoing monitoring and program adjustments for systems provided to employees.

Fig. 1  The six steps of behavioral systems analysis.
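Fig. 1 depicts ASDIER as a loop with no terminal state. As a rough illustrative sketch only (the six step names come from Malott (1974) as cited above; the class, function names, and data structures below are our own hypothetical scaffolding, not part of any published BSA tooling), the perpetual cycle might be modeled as:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the ASDIER cycle (Analyze, Specify, Design,
# Implement, Evaluate, Revise). Step names follow the chapter's citation
# of Malott (1974); everything else is illustrative.

ASDIER_STEPS = [
    "Analyze the natural contingencies",
    "Specify the performance objectives",
    "Design the system",
    "Implement the system",
    "Evaluate the system",
    "Revise the system",
]

@dataclass
class BehavioralSystem:
    name: str
    cycle_count: int = 0
    log: list = field(default_factory=list)

    def run_cycle(self) -> None:
        """Run one full pass through the six ASDIER steps."""
        for step in ASDIER_STEPS:
            self.log.append(f"Cycle {self.cycle_count + 1}: {step}")
        self.cycle_count += 1  # the loop never "finishes"; Revise feeds Analyze

system = BehavioralSystem("Consumer-choice intake process")
for _ in range(2):          # in practice this loop runs perpetually
    system.run_cycle()

print(system.cycle_count)   # 2 completed cycles
print(len(system.log))      # 12 log entries, six steps per cycle
```

The design point the loop makes concrete is the chapter's claim that there is no perfect behavioral system: Revise always feeds back into Analyze rather than terminating.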




Using behavioral systems analysis to improve ethical behavior

Above, we introduced and defined the terms system and behavioral system, noting how behavioral systems are common in ABA treatment but uncommon in supporting employee ethical behavior. However, the absence of behavioral systems supporting ethical behavior puts adherence to core ethical values at risk. Behavioral systems analysis gives us an analytical framework to translate core ethical values into systems of employee support that increase the likelihood of consumer protection and high-quality care. Below, we provide three examples of values relevant to ABA treatment and describe behavioral systems that may increase the likelihood of employees engaging in behaviors that resemble those values. These values are (a) incorporating consumer choice into treatment decisions, (b) monitoring adverse events in behavioral treatment, and (c) using collaborative consumer feedback loops to improve practice and standards. Referencing existing literature, we suggest behavioral systems that behavior analysts can design to support employee behavior that aligns with each of these three values. Our objective is to provide multiple examples of behavioral systems so the reader can more readily translate our recommendations into practice. Also, adopting a behavioral systems approach allows the user to translate purposefully vague ethical principles and standards into processes that can be customized to meet the unique needs of their practice setting.

Example 1
Incorporating consumer choice into treatment decisions

Choice is defined as “the distribution of operant behavior among alternative sources of reinforcement” (Pierce & Cheney, 2013, p. 472). In a treatment context, choice occurs when a consumer selects or provides input on a course of treatment. For example, a consumer may select preferred items to serve as rewards for instructional responding (e.g., Brodhead et al., 2016), express preference for the way rewards are delivered (e.g., Luczynski & Hanley, 2009), choose the type and order of activities they engage with during leisure time (e.g., Deel, Brodhead, Akers, White, & Miranda, 2021), and select the type of behavioral interventions delivered (e.g., Hanley, Piazza, Fisher, & Maglieri, 2005). Indeed, consumer choice increases consumer autonomy, aligning with the Belmont Report’s (National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research, 1979) principle of Respect for Persons. Even if a consumer is under the care of a parent or guardian, that custodial relationship does not preclude the ethical obligation to honor the consumer’s fundamental right to autonomy. In addition, White, Oteto, and Brodhead (in press) report that integrating choice into treatment brings several ancillary benefits to the consumer, such as less frequent challenging behavior and improved academic performance, engagement, and on-task behavior. Further, the “cost” or effort associated with integrating consumer choice into treatment is often very low and demands little preparation by the provider (Jolivette, Ennis, & Swoszowski, 2017). The ethical obligations to support consumer choice (e.g., Respect for Persons) are clear, as are the benefits of consumer choice and the positive role choice plays in a treatment context. Therefore, providers should incorporate choice into the treatment context whenever possible. The two examples of behavioral systems below demonstrate how to incorporate consumer choice into treatment, one being a system designed to systematically evaluate consumer preference for rewards and incentives during intake, treatment, and discharge, and the other a system designed to identify appropriate opportunities for consumers to provide input on the form of treatment that is delivered.

Examples of behavioral systems to incorporate consumer choice into treatment

Rewards and incentives

The best way to determine what a person wants is to ask them, not to presume their preferences. Notably, research suggests that instructor predictions about preferred items and activities vary widely and only moderately correlate with actual consumer preferences (Brodhead, Abston, Mates, & Abel, 2017). Fig. 2 presents an example of how incorporating consumer preference for rewards and incentives can occur throughout the treatment process. At intake, the consumer is asked about their preferences, but the way in which the “ask” occurs may vary considerably depending on the context and the consumer’s ability to express their wants and needs. Here, two potential strategies include using a structured interview to ask the consumer about their preferences directly or observing the consumer’s choices during a selection-based (Carr, Nicolson, & Higbee, 2000) or engagement-based (e.g., Roane, Vollmer, Ringdahl, & Marcus, 1998) preference assessment.

Fig. 2  A sample behavioral system to incorporate preferences for types of rewards and incentives.

During treatment, the efficacy of rewards and incentives is continuously monitored. Such monitoring may take multiple forms, such as observing whether contingent presentations of rewards result in intended changes in behavior, and the extent to which the consumer willingly and independently engages in the treatment setting or finds the treatment socially acceptable (see also our discussion of social validity later in this chapter). In addition to monitoring, providers should continue to seek consumer input, because consumer preference for rewards and incentives can vary considerably over time (Butler & Graff, 2021). When the consumer is discharged, the provider could submit a detailed account of consumer preferences to the receiving provider (e.g., a residential facility or vocational program), along with ways that provider can continue to seek the consumer’s input. Though this step does not involve an active opportunity for consumer choice-making, it works to ensure continuation of consumer autonomy and upholds the provider’s ethical obligation to appropriately transition services (e.g., BACB Code Standard 3.16). Fig. 2 also includes a continuous quality improvement component, depicted by the dashed lines. The continuous quality improvement component ensures that outcomes and feedback arising from each part of the system are integrated into other components of the system, if necessary. For example, during consumer treatment, a provider may observe that the intake process did not account for preference for physical activities such as riding a bike or jumping on a trampoline. Here, changes could be incorporated
into intake processes to ensure preference for physical activities is evaluated in the future.

Form of treatment

Assuming proper contextual fit, supporting evidence, and provider competence (see Slocum et al., 2014), there are typically multiple treatment options a provider can deliver to the consumer for any presenting behavioral problem. For example, if the consumer engages in challenging behavior and requires a function-based treatment plan, the provider may consider using either functional communication training (FCT) (Tiger, Hanley, & Bruzek, 2008) or noncontingent reinforcement (NCR) (Vollmer, Iwata, Zarcone, Smith, & Mazaleski, 1993) based on consumer preference for one or both procedures. In another example, a consumer may be using an activity schedule to help support engagement with leisure activities during unstructured play time. However, does the consumer prefer a standard binder-based activity schedule or an iPad to depict their schedule (see Giles & Markham, 2017)? Further, does the consumer prefer to have an adult decide what their leisure activities should be, or would they rather identify those activities themselves, as well as the order in which they should occur (see Deel et al., 2021)? Again, the opportunity presents itself for the provider to consider consumer input on the form and construction of the activity schedule. The preceding examples illustrate only a small fraction of the opportunities to provide choice about the form of treatment to the consumer. Providers must honor consumer autonomy to the greatest extent possible while simultaneously ensuring that consumer choice does not result in harm. Behavioral systems allow the provider to develop a structured process for risk assessment when identifying the appropriateness of consumer choice of form of treatment. Table 1 presents a list of four questions a provider may ask when considering whether to obtain consumer input on the form of treatment they provide.
The first question asks Would one or more treatment choices presented to the consumer cause harm or increase risk of harm to that consumer? If the answer to this question is yes, that treatment option is not appropriate and should be substituted (or, if substitution is not possible, a choice should not be provided). For example, the first option may be the Picture Exchange Communication System (PECS; Frost & Bondy, 1994), a functional communication program that has robust evidence support among children with autism spectrum disorder (ASD). (In this chapter we use person-first language for consistency. However, we recognize that some scholars and autistic individuals prefer disability-first language.) The second option may be facilitated communication (FC), a pseudoscience that has been erroneously marketed as an effective form of functional communication (Foxx & Mulick, 2016). At best, FC presents harm in the form of lost instructional time and, at worst, users may be subject to abuse by their designated facilitator (Engber, 2015). Thus, FC should not be presented as a treatment option to the consumer (see Brodhead, 2015, for further discussion about appraising treatments for potential harm).

Table 1  Sample questions to ask when considering consumer choice of form of treatment.
1. Would one or more treatment choices presented to the consumer cause harm or increase risk of harm to that consumer?
2. Are there at least two available treatment options that have proper contextual fit and adequate supporting evidence, and is the provider competent in delivering those treatment options?
3. Do the benefits of providing treatment choice outweigh the costs of not providing treatment choice?
4. Can you monitor the effects of the treatment choice to ensure that the selected treatment achieves its stated goal or purpose?

The second question in Table 1 asks Are there at least two available treatment options that have proper contextual fit and adequate supporting evidence, and is the provider competent in delivering those treatment options? Contextual fit, supporting evidence, and provider competence are hallmark components of the decision-making process of evidence-based practice (see Slocum et al., 2014). Likewise, all three components of that decision-making process must be equally considered, or else the likelihood of treatment success will diminish. Therefore, if two treatment options are available (e.g., FCT and NCR) and the provider is skilled in administering only one option (e.g., FCT), the situation may not be appropriate for a consumer to choose the form of treatment (e.g., choosing between FCT and NCR). However, the responsibility of the provider may be to improve their competence in treatment delivery so that future consumers may be presented with a choice between two alternative forms of treatment (see Brodhead, Quigley, & Wilczynski, 2018, for a discussion about scope of competence).


The third question in Table 1 asks Do the benefits of providing treatment choice outweigh the costs of not providing treatment choice? Though two or more forms of treatment may serve the consumer’s best interest, those treatment options may have different timelines for success and/or require different levels of resources from the provider or the consumer’s caregivers. Treatment choices often affect caregivers, who are then at least somewhat responsible for payment, coordination, and delivery of that treatment (see Salter, 2012). Certainly, cost and resources are concerns for many families and third-party payers (e.g., Medicaid). Therefore, assessment of treatment choice is incomplete without thorough consideration of all the stakeholders (e.g., caregivers) who may be affected by any given choice. It is worth noting that the provider’s construction and identification of choice opportunities may itself limit a consumer’s personal freedom and autonomy. Accordingly, the provider should take seriously the roles of consent and transparency to reduce coercive treatment choice construction (see Goltz, 2020, for further discussion). The fourth and final question in Table 1 asks Can you monitor the effects of the treatment choice to ensure that the selected treatment achieves its stated goal or purpose? Ultimately, the best indicator of success is whether the treatment works for the individual to whom it is prescribed. Though the benefits of providing choices to consumers are clear, the delivery of ineffective treatment is harmful and incongruent with the Belmont Report’s (National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research, 1979) principle of Beneficence. At the very least, the provider must weigh the delivery of an ineffective treatment against the consumer’s right to autonomy. In most cases, it is difficult to justify an intervention without knowing whether it is helpful or harmful. Conversely, implementing an intervention, monitoring it, and finding that it is not meeting its stated goal could be ethically justifiable if its delivery helps achieve a more important objective. For example, an intervention may be ineffective at its stated goal yet bring measurable joy and happiness to the consumer.
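Read together, the four questions above act as a conjunctive screen: a choice opportunity passes only when no presented option risks harm and the answers to the remaining three questions are all yes. The sketch below is a hypothetical illustration of that logic (the function and parameter names are ours, and the boolean framing deliberately simplifies what is in practice a clinical judgment):

```python
# Hypothetical sketch of the Table 1 screen as a boolean checklist.
# Parameter names are ours; each paraphrases one of the chapter's
# four sample questions.

def choice_is_appropriate(
    any_option_risks_harm: bool,      # Q1: would any presented option cause harm?
    two_plus_suitable_options: bool,  # Q2: >=2 options with fit, evidence, competence?
    benefits_outweigh_costs: bool,    # Q3: do benefits of offering choice outweigh costs?
    effects_can_be_monitored: bool,   # Q4: can treatment effects be monitored?
) -> bool:
    """Offering a treatment choice passes the screen only when no option
    risks harm (Q1) and the answers to Q2-Q4 are all yes."""
    return (not any_option_risks_harm
            and two_plus_suitable_options
            and benefits_outweigh_costs
            and effects_can_be_monitored)

# e.g., FCT vs. NCR where the provider is competent in both:
print(choice_is_appropriate(False, True, True, True))   # True
# e.g., one presented option is FC (risk of harm): screen fails at Q1
print(choice_is_appropriate(True, True, True, True))    # False
```

A failed screen does not end the process; as the chapter notes, the provider's responsibility may then be to expand their own competence so a genuine choice can be offered to future consumers.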

Summary

Above, we introduced the importance of consumer choice and described its benefits, with two examples of behavioral systems that integrate consumer choice into the treatment process. The first example described a behavioral system for incorporating consumer preference for rewards and incentives throughout the treatment process. The second example described a behavioral system designed to guide deliberate assessment of the appropriateness of integrating opportunities for consumer choice into treatment. These examples are just two of many ways in which behavioral systems can be used to enhance opportunities for choice-making during treatment while maintaining a commitment to consumer protection and high quality of care.

Example 2
Monitoring adverse events of behavioral treatment

An adverse event is defined as “an unfavorable or harmful outcome that occurs during, or after, the use of a drug or other intervention, but is not necessarily caused by it” (Peryer, Golder, Junqueira, Vohra, & Loke, 2019, section 19-1-1). An adverse effect is defined as “an adverse event for which the causal relation between the intervention and the event is at least a reasonable possibility” (Peryer et al., section 19-1-1). Put simply, an adverse event may result in an adverse effect (harm). Consider, for example, that the anxiety and stress a consumer may experience from a behavioral intervention may be considered an adverse event. If 95% of consumers who received a particular behavioral intervention experienced anxiety and stress, there may be a causal relation, which produces an adverse effect. In this example, the behavioral intervention is at least partially responsible for the observed anxiety and stress. Of course, some adverse events and effects may be unavoidable, where the act causing the adverse event is of benefit to the consumer. Alternatively, many adverse events can be avoided or reduced.
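The 95% example above suggests a simple way a provider might aggregate consumer-level adverse events and flag a possible adverse effect for clinical review. The sketch below is a hypothetical illustration (the function name, record format, and 0.5 review threshold are our own assumptions, not clinical guidance); flagging indicates only that a causal relation is worth investigating, not that one exists:

```python
# Hypothetical sketch: aggregate consumer-level adverse EVENTS and flag a
# possible adverse EFFECT (a plausibly causal pattern) for clinical review.
# The 0.5 review threshold is an arbitrary illustrative choice.

def flag_possible_adverse_effect(event_reports, threshold=0.5):
    """event_reports maps consumer ID -> True if the adverse event
    (e.g., anxiety/stress) was observed during the intervention.
    Returns (proportion, flagged); flagging does not establish causation."""
    if not event_reports:
        return 0.0, False
    proportion = sum(event_reports.values()) / len(event_reports)
    return proportion, proportion >= threshold

reports = {"c01": True, "c02": True, "c03": True, "c04": False, "c05": True}
prop, flagged = flag_possible_adverse_effect(reports)
print(prop, flagged)  # 0.8 True -> pattern warrants review
```

In practice such a tally would feed the Evaluate and Revise steps of the BSA cycle rather than serve as a stand-alone judgment.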

Coercion

Consider the misuse (an adverse event) of behavioral treatment through coercion. Sidman (2001) described coercion as behavior change that occurs in the presence or threat of negative reinforcement or punishment. As Thomas and Brodhead (2021) note, nature is coercive by design, and in many cases coercion is biologically beneficial to living organisms. For example, a child who purposefully steps on a garter snake and becomes the unfortunate recipient of a snake bite is likely to avoid engaging in such behavior in the future, thereby avoiding future harm in the form of pain from a snake bite. Although garter snakes are not venomous, many snakes and other reptiles are; coercion can thus promote generalization of avoidance behavior that is also of biological benefit. We consider coercion to be an adverse event, which may result in adverse effects for a collective of recipients of behavioral services who experience those events. Though the adverse events of coercion are well described by Sidman (2001), including actively avoiding places or people associated with coercion and increased levels of anxiety and fear, the adverse events arising through coercion in behavioral interventions are not as well understood (see Bottema-Beutel, Crowley, Sandbank, & Woynaroski, 2021). Skin shock is one example of an adverse event that has well-documented adverse effects on consumers. (We should note that the number of providers who use skin shock as a behavioral treatment is incredibly small, and these providers do not represent the overwhelming majority of those who deliver behavioral services. Further, both authors find the use of skin shock to be morally indefensible. We use it here only as an example to further illustrate the concepts described within this section.) Nisbet (2021) describes correspondence from Jennifer Msumba, a student who attended the Judge Rotenberg Center (JRC) in Massachusetts. In their correspondence, Msumba notes that while attending JRC, she was subjected to a Graduated Electronic Decelerator (GED), a wearable device that delivers shock to the skin via remote control, contingent on a target behavior, to reduce the future frequency of that behavior. Msumba reported that the shock from the GED (an adverse event) resulted in “long term loss of sensation and numbness in my lower left leg” (p. 1) and, because she had to wear the GED while sleeping, “I was very anxious to close my eyes, always fearing a shock for something I might not have even known I did” (p. 3). In this example, the adverse events arising from skin shock from the GED would be the loss of sensation and numbness as well as the anxiety. If other consumers subjected to the GED experienced similar outcomes, those similar outcomes would be adverse effects. Unfortunately, Msumba’s story is not unique (see Nisbet, 2021, for a historical account of adverse events reported by those attending JRC).

Science is not inherently coercive

Helping professions are not inherently coercive, and the events that are reported to have occurred at JRC do not represent the values and moral standards of the science and profession of ABA. By definition, ABA involves changing behavior to produce socially significant outcomes (Baer, Wolf, & Risley, 1968), and in most cases using positive reinforcement to change behavior is what consumers and stakeholders consider to be socially significant. However, in some situations ABA may drift into a practice of coercion if the behavioral systems that oversee its delivery allow it to become coercive, and thereby lead to adverse events for a given consumer that result in adverse effects for a collective of consumers. For example, one consumer may prefer the provider use manual guidance to help them wash their hands, while a different consumer may physically resist and vocally protest this method of prompting. The longer the latter situation continues without provider modification that reduces the consumer's resistance and protest, the more obvious it becomes that manual guidance is a coercive practice for that consumer in that specific context. Further, the more consumers who experience unwanted manual guidance (adverse event), the more likely that experiences arising from unwanted manual guidance (e.g., stress and discomfort) become adverse effects. Identifying whether a practice such as manual guidance is coercive requires nuance (Thomas & Brodhead, 2021). It is not true that manual guidance is always or never coercive; as the above example illustrates, context matters. Therefore, we recommend against compartmentalizing most behavioral interventions as "coercive" or "not coercive" and encourage the reader to evaluate the context in which a given intervention is applied. We say "most behavioral interventions" because some approaches to behavioral intervention may be inherently coercive, such as the events described by Nisbet (2021) above. We argue that behavior analysts and other providers are ethically obligated to make good-faith efforts to remove as much coercion from the environment as possible. Doing so will decrease the likelihood of adverse events and therefore decrease the possibility of a collective of consumers experiencing adverse effects resulting from behavioral treatment. Until the adverse events of behavioral treatment are identified and understood in the research literature, providers must take it upon themselves to uphold their ethical obligations to monitor adverse events and minimize the harm that may result from them, such as coercion (e.g., BACB Code Core Principle #1 and BACB Code Standards 2.01, 2.13, 2.15, and 3.01).
Even if the adverse events of behavioral treatment are identified and understood, we would still consider good-faith efforts to minimize coercion as acting in the consumer's best interest (BACB Code Standard 2.01), which may result in higher quality of care and better consumer outcomes. This presents another opportunity for behavioral systems to provide a systematic structure for such efforts. Below are two examples of behavioral systems for identifying, monitoring, and reducing adverse events. We encourage providers to customize and tailor their own systems so they fit well within the unique context in which they provide services. A reduction in adverse events will lessen the probability that consumers will collectively experience adverse effects, or perhaps reduce the severity of adverse effects of behavioral treatment. However, we recognize additional research is warranted to evaluate this further.

Examples of identifying, monitoring, and reducing adverse events and effects

Identifying and monitoring adverse events and effects

Focused questions, such as those described in Table 2, may help providers engage in the process of identifying and monitoring adverse events and effects. Such questions are an example of a behavioral system, and a behavioral system to identify and monitor adverse events and effects would, of course, be necessary to reduce their presence in behavioral treatment.

Table 2  Sample questions for identifying and monitoring adverse events and effects.
1. What are the known adverse events and effects of an intended behavioral treatment?
2. What other adverse events and effects may arise while delivering an intended behavioral treatment?
3. How can you describe the adverse events and effects in observable and measurable terms?
4. How do you plan to observe and analyze measures of adverse events and effects during the implementation of the behavioral intervention?
5. How do you plan to periodically review measures to ensure the greatest extent of consumer protection?

The first question in Table 2 asks What are the known adverse events and effects of an intended behavioral treatment? As we noted above, not all adverse events and effects are bad or even avoidable. Instead, it is important to know what adverse events or effects may be present so they can be removed or managed if necessary. Further, ethical principles and standards exist to ensure providers actively decrease the risk or presence of harmful interventions (e.g., BACB Code Principle #4; Standards 2.01, 2.13, and 2.15); therefore, it is the provider's ethical responsibility to be aware of potential adverse events or effects arising from behavioral interventions, and lacking such awareness is incongruent with the ethical standards described by the BACB. The second question in Table 2 asks What other adverse events and effects may arise while delivering an intended behavioral treatment? The objective is to identify any adverse events or effects that may not be readily known. Again, the ethical obligations to identify them are clear (e.g., BACB Code
Principle #4; Standards 2.01, 2.13, and 2.15). For example, a provider may use time out in the form of removing a consumer from the dinner table when that consumer complains about the food they are asked to eat. As a result, the consumer comes to dislike and actively avoid that provider. The use of time out results in the consumer actively avoiding the provider because the provider's presence may cause them to feel anxious or scared, which does not serve the consumer's best interest (BACB Code Standard 2.01), in addition to decreasing the provider's ability to effectively teach that consumer. Should multiple consumers experience the same outcome from the use of time out, this would indeed be an adverse effect of time out itself. The third question in Table 2 asks How can you describe the adverse events and effects in observable and measurable terms? In order to measure adverse events and effects, those events and effects should be described in a way that is free from bias and can be agreed upon by two independent observers (Cooper, Heron, & Heward, 2020; Gast & Ledford, 2014). Relatedly, the fourth question in Table 2 asks How do you plan to observe and analyze measures of adverse events and effects during the implementation of the behavioral intervention? The purpose of this question is to ensure the provider not only has a plan to observe a consumer to identify a potential adverse event or effect, but also a plan to analyze the results of those observations and make any necessary programmatic changes. Though methods for creating strong behavioral definitions and appropriately observing and analyzing data are beyond the scope of this chapter, we encourage the reader to review Cooper et al. (2020) and Gast and Ledford (2014) for more information. The fifth and final question in Table 2 asks How do you plan to periodically review measures to ensure the greatest extent of consumer protection?
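The requirement that adverse events be described so that they "can be agreed upon by two independent observers" makes agreement directly computable. As a hypothetical sketch (the function, data, and target behavior below are illustrative, not from the chapter), interval-by-interval interobserver agreement could be calculated as:

```python
def interval_ioa(obs1, obs2):
    """Interval-by-interval interobserver agreement (IOA).

    obs1 and obs2 are lists of booleans, one entry per observation
    interval, True if that observer scored the adverse event in the
    interval. Returns percent agreement between the two observers.
    """
    if len(obs1) != len(obs2):
        raise ValueError("Both observers must score the same intervals")
    agreements = sum(a == b for a, b in zip(obs1, obs2))
    return 100 * agreements / len(obs1)

# Hypothetical data: two observers score "vocal protest during manual
# guidance" across ten 1-minute intervals.
observer_a = [True, True, False, False, True, False, False, True, False, False]
observer_b = [True, False, False, False, True, False, False, True, False, False]
print(interval_ioa(observer_a, observer_b))  # 90.0
```

Low agreement would suggest the behavioral definition is not yet observable and measurable enough to support the monitoring the third and fourth questions call for.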
As we have emphasized repeatedly throughout this chapter, it is important for the provider to continue to monitor and assess their own behavior and to make any necessary changes to their behavioral systems. It may be, for example, that during data collection a provider recognizes additional adverse events worth measuring; those changes should then be incorporated into the ongoing measurement and analysis systems.

Reducing adverse events and effects

Once adverse events and effects are identified, the provider is ethically obligated to reduce their presence in treatment. Here, the Belmont Report's central principle of beneficence is relevant, along with discipline-specific ethical principles or standards (e.g., BACB Code Principle #4; Standards 2.01, 2.13, and 2.15).

Fig. 3  Example of a behavioral system for reducing adverse events in behavioral treatment.

In Fig. 3, and below, we describe a behavioral system designed to reduce the occurrence of adverse events in behavioral treatment, which would in theory reduce the likelihood of any adverse effects resulting from those treatments. Fig. 3 assumes that an adverse event has been identified through systematic monitoring for those events. Is a less coercive intervention available for implementation? is an example of the first question a provider may ask. If the answer is yes, then the provider may choose to implement that less coercive intervention and continue to monitor for adverse events. Of course, it would be important that the provider has the appropriate knowledge, skills, and abilities to implement that alternative intervention, and uses the decision-making process of evidence-based practice while doing so (see Slocum et al., 2014). If a less coercive intervention is not available, then Fig. 3 recommends the provider seek consultation from an expert. Expert advice is sought for a few reasons. It is often recommended that providers seek the input of experts to help them evaluate their behavioral interventions (Brodhead, Quigley, & Cox, 2018), and seeking advice from a trusted colleague is commonplace in ethical evaluation (Bailey & Burch, 2022). The expert may also be able to inform the provider of alternative routes to treatment the provider may not have considered. For example, instead of using time out to reduce the frequency of disruptive mealtime behavior, the expert may advise the




provider to provide preferred food items to the consumer alongside their prescribed meal. If that expert recommends a less coercive treatment, then Fig. 3 recommends the provider implement that treatment and continue to monitor for adverse events. If the expert does not identify a less coercive intervention, Fig. 3 then asks the provider to consider the harms and benefits of continuing to implement that intervention, knowing that it occasions adverse events. It could be that those adverse events are minor (e.g., vocal expressions of non-compliance) and their long-term effects on the consumer negligible. In other cases, the adverse events may be cause for concern (e.g., visible displays of panic) and could certainly have long-term negative effects on the consumer. Adverse events should be weighed against the behavioral problem being addressed: general non-compliance that poses little harm to the consumer or anyone else is an example of a minor behavioral topography, whereas self-injury or severe property destruction is an example of a behavioral topography of significant concern. For an intervention aimed at reducing general non-compliance, vocal expressions of non-compliance may be less of a concern than visible displays of panic. If the harms of exposure to the adverse event outweigh the benefits of the behavioral intervention, Fig. 3 recommends the intervention be discontinued until a less coercive intervention is identified. If the benefits of the intervention outweigh the harms of exposure to the adverse event, the provider continues to implement the intervention and monitor and evaluate it for adverse events. It is critically important to continue to monitor for and evaluate the presence of adverse events, as harms and benefits are relative and may change as treatment progresses, creating a need for additional assessment and possibly a new course of action.
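The Fig. 3 decision sequence can be summarized as a small decision function. This is a sketch under our own assumptions — the function name, arguments, and return strings are hypothetical, and each argument stands in for a clinical judgment rather than something software can decide:

```python
def respond_to_adverse_event(less_coercive_available, expert_suggests_alternative,
                             benefits, harms):
    """Sketch of a decision flow once an adverse event has been identified.

    Each argument stands in for a clinical judgment the provider (or a
    consulted expert) must make; the function only orders those
    judgments, it does not make them.
    """
    if less_coercive_available:
        return "implement less coercive intervention; continue monitoring"
    if expert_suggests_alternative:
        return "implement expert-recommended intervention; continue monitoring"
    if harms > benefits:
        return "discontinue until a less coercive intervention is identified"
    return "continue intervention; keep monitoring and re-weighing harms and benefits"

# No alternative exists and the expert found none; harms outweigh benefits.
print(respond_to_adverse_event(False, False, benefits=3, harms=5))
# discontinue until a less coercive intervention is identified
```

Writing the flow out this way only makes the ordering of the questions explicit; the substantive work remains the harm-benefit weighing described above.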

Summary

We introduced adverse events and effects, described and defined coercion, and identified the provider's ethical obligation to understand what adverse events or effects may arise as a result of behavioral treatment. Further, we emphasized the provider's ethical obligation to minimize their occurrence, described how to identify and monitor adverse events and effects in practice, and explained a process to reduce adverse events and effects once identified. Like the other behavioral systems described in this chapter, these examples serve as a framework that illustrates the broad utility of behavioral systems and how they can be used to support ethical behavior and improve the quality of care and protection provided to consumers.


Example 3 Using collaborative consumer feedback loops to improve practice and standards

In 1978, Montrose Wolf stated that "if we aspire to social importance, then we must develop systems that allow our consumers to provide us feedback about how our applications relate to their values, their reinforcers" (p. 213). His use of the term values is noteworthy because it suggests that social validity should be a measurement of something more personal than a consumer's opinion (e.g., indicating on a survey whether they enjoyed a particular treatment). Wolf suggested the consumers of interventions or treatment should not only have their opinions about interventions heard, but should also have their core values considered at all stages of intervention design, implementation, and evaluation. Wolf (1978) also emphasized that "…we must develop systems that allow our consumers to provide us feedback…" (p. 213). However, he did not describe these systems, because such systems were novel and scholars still needed to be persuaded of their significance. Further, context determines how social validity is measured and which specific variables are measured. Wolf challenged scholars to create social validity measures for the benefit of the recipients of behavioral services. Feedback loops, discussed below, can serve as a behavioral system for improving the ethical practices and standards of providers of behavioral services.

Feedback loops

The feedback loop is a common concept across disciplines, most often mentioned in biological studies, where both positive and negative feedback loops explain how the human body regulates itself. For example, Lioutas, Einspanier, Kascheike, Walther, and Ivell (1997) explained how oxytocin is involved in a positive feedback loop during childbirth: its release in the body of the woman giving birth causes contractions, which stimulate the pituitary gland, which then releases more oxytocin. Feedback loops certainly apply to ABA as well, for example, in the relationship between the Board Certified Behavior Analyst (BCBA), a supervisor, and the providers whom the BCBA oversees. In this relationship, the BCBA provides supervision through verbal or model-based feedback to the supervisee, who processes the feedback and implements any changes to their practice. The changes in practice lead to more feedback from the supervising BCBA, hopefully positive; thus, the loop continues. All the while, the consumer benefits, because positive changes in supervisee behavior should improve the quality of consumer care and protection.




Data-based decision-making often occurs at the intervention level, meaning a feedback loop exists in which providers implement an intervention with a consumer, collect intervention data, analyze those data, then decide whether to continue, stop, or adjust the intervention (Gischlar, Hojnoski, & Missall, 2009; Hojnoski, Gischlar, & Missall, 2009; Vanselow, Thompson, & Karsina, 2011; Wolfe, McCammon, LeJeune, & Holt, 2021). Fig. 4 is a visual depiction of how the data-based decision-making feedback loop can be used in the application of a human services intervention. In addition, Figs. 1, 2, and 3 describe various uses of feedback loops. In Fig. 1, the feedback loop is used to ensure continuous quality improvement in BSA. The feedback loop in Fig. 2 is similar to that in Fig. 1, except continuous quality improvement occurs in the context of incorporating consumer choice into treatment decisions. Fig. 3's feedback loop relies on the input of an expert regarding the evaluation of the extent to which an intervention is coercive.

Fig. 4  A sample data-based decision-making framework used in human services.

The focus on intervention-level decision-making described above, albeit critical to the efficacy of interventions because it gives immediate feedback to the provider, often lacks consideration of the values of consumers and stakeholders that Wolf (1978) argued were so crucial. So, how might we use the values of stakeholders as a source of data? Social validity assessments are one solution. However, not all social validity assessments are created equal. Below, we first describe what are commonly referred to as practical social validity assessments. As the reader will learn, though practical social validity assessments have some value in applied service delivery, they do not fully capture the values of consumers or stakeholders affected by behavioral services. Instead, we argue that feedback loops which emphasize collaborative practices to measure social validity in meaningful ways may lead providers closer to integrating the values of consumers and stakeholders into behavioral treatments, and therefore closer to the vision described by Wolf (1978).
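The intervention-level loop just described — implement, collect data, analyze, decide whether to continue, stop, or adjust — can be sketched as a simple decision rule. The threshold logic below is a hypothetical illustration of one such rule, not a procedure from the chapter or from Fig. 4:

```python
def decide(session_data, goal, window=3):
    """Analyze the most recent sessions and choose the next step.

    session_data is a list of per-session measures (e.g., percent
    correct). The rule is deliberately simple and hypothetical: stop if
    the recent average reaches the goal, adjust if the trend is flat or
    worsening across the window, otherwise continue as planned.
    """
    recent = session_data[-window:]
    if sum(recent) / len(recent) >= goal:
        return "stop"      # goal met; consider maintenance or generalization
    if len(recent) >= 2 and recent[-1] <= recent[0]:
        return "adjust"    # no improvement across the window
    return "continue"

# Hypothetical percent-correct data across sessions.
print(decide([40, 45, 55, 60], goal=80))  # continue
print(decide([40, 42, 41, 40], goal=80))  # adjust
print(decide([70, 85, 90, 95], goal=80))  # stop
```

Note what such a rule cannot do: it reacts to the measured target behavior, but says nothing about whether the target itself reflects the consumer's values — the gap the rest of this section addresses.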

Practical social validity assessments

Social validity data are often collected through questionnaires, surveys, or rating forms (Luiselli, 2021). These measures often contain statements or questions where individuals such as a consumer, a consumer's family member, a teacher using the intervention, or another stakeholder answer questions or indicate the extent to which they agree with a certain statement. For example, a question may be general, asking Was the teaching program effective in teaching imitation? A more specific question asks Was the teaching program effective for teaching the child to zip up their coat properly? (Park & Blair, 2019; Ricciardi et al., 2020). The type of question, of course, depends on the goals of the social validity assessment. Often, social validity assessments rely on a Likert-type scale where the person taking the survey is encouraged to select the number that corresponds with how much they agree or disagree with a statement (e.g., 1 = very much disagree, 7 = very much agree). In some cases, there are opportunities for consumers to leave written feedback where the expression of their opinion is encouraged. Social validity surveys and questionnaires are cost-effective methods of measuring social validity (Schwartz & Baer, 1991). However, do consumer opinions gathered this way constitute a reliable and valid measure, which Schwartz and Baer also emphasized as critical? Because social validity should measure the "values" of consumers (Wolf, 1978), with specific emphasis on context, we believe surveys and questionnaires fall short of this goal. That is, a provider-generated survey is unlikely to capture the rich and dynamic opinions of consumers and stakeholders of behavioral services. To better understand consumer values, we argue that a more intimate approach to engaging with clients and stakeholders is needed. Below we provide an example of a behavioral system of social validity measurement that serves as an alternative to questionnaires and surveys: collaborative feedback loops.
This method encourages providers to customize their own social validity measures, consistent with our ethical responsibility to incorporate the values of consumers and stakeholders into our practice (BACB Code Standard 2.09). Meaningful measurement and evaluation of consumer values can promote favorable outcomes, positive perceptions of behavioral interventions, and acceptance of ABA as a helping profession (Brodhead & Higbee, 2012).
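Part of what makes a Likert-type survey cost-effective is that it reduces to a few summary statistics — and that is also part of why it is thin. A hypothetical tally (the item and responses are invented for illustration):

```python
from statistics import mean

# Hypothetical responses on a 7-point Likert scale (1 = very much
# disagree, 7 = very much agree) to "The teaching program was effective."
responses = [6, 7, 5, 6, 2, 7, 6]

summary = {
    "n": len(responses),
    "mean": round(mean(responses), 2),
    "agree (5-7)": sum(1 for r in responses if r >= 5),
    "disagree (1-3)": sum(1 for r in responses if r <= 3),
}
print(summary)  # {'n': 7, 'mean': 5.57, 'agree (5-7)': 6, 'disagree (1-3)': 1}
```

The respondent who answered 2 disappears into the aggregate; recovering why they disagreed is exactly the kind of context a survey alone cannot supply.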

Collaborative social validity assessments

Collaboration is not a new concept to providers of behavioral interventions. For example, collaboration in education often involves behavior analysts, school psychologists, speech pathologists, and occupational therapists (see Cox, 2012) working together to enhance student outcomes or to develop programs such as individualized education plans (IEPs) for children receiving special education services (Baker, Saurer, Foor, Forbes, & Namey, 2017; Dillon, Armstrong, Goudy, Reynolds, & Scurry, 2021; Kuntz & Carter, 2021; Sisti & Robledo, 2021). A collaborative feedback loop mirrors the interactive process known as collaborative consultation (Idol, Nevin, & Paolucci-Whitcomb, 1994; Idol, Paolucci-Whitcomb, & Nevin, 1995; Villeneuve & Hutchinson, 2012), a process in which people with unique backgrounds bring together their differing expertise to solve a shared problem (Idol et al., 1995). A collaborative feedback loop is one in which feedback is not simply a source of criticism or angst, but rather data that providers use to adjust their practices and ethical standards. As noted earlier, one feedback loop in the field of ABA is the relationship between a supervising BCBA and a supervisee: the supervisor presents feedback to the supervisee, and the supervisee implements the feedback. The collaborative feedback loop encourages participants to garner, give, or respond to feedback from others. Thus, a collaborative feedback loop between the supervising BCBA and supervisee would look somewhat different than the traditional feedback loop. In the collaborative feedback loop, both the supervising BCBA and the supervisee can stop the intervention at any point to solicit feedback from the other person. Both would have, and embrace, the opportunity to respond to the feedback given to them by the other person and to work together to come to an agreement about the next steps regarding the intervention.
This relationship allows both supervisor and supervisee feedback to serve as valuable sources of data and helps create a collaborative environment. Below, we use recent criticism leveled at those who provide behavioral interventions as an example of how providers can engage with the neurodiversity movement to further increase the social validity of behavioral interventions. We use this example because it reflects an ongoing and ever-increasing discussion being held online, at conferences, and in scholarly journals. We also recognize that many providers may want to engage with consumers or stakeholders from the neurodiversity movement but are unsure how to do so.

Example of how providers can engage with the neurodiversity movement

Veneziano and Shea (2022) note several criticisms of behavioral interventions for people with ASD, describing an increasingly loud chorus of complaints that some find ABA abusive, harmful, or likely to cause trauma. Also at issue is the oft-perceived "benchmark" that recipients of behavioral services should receive treatment so that they become indistinguishable from their neurotypical peers, an outcome famously reported in Lovaas (1987). The criticisms leveled against ABA are, in part, a result of a broader neurodiversity movement apparent within the human services industry. Many advocates and insiders within the neurodiversity movement argue that the movement is still in its infancy (Den Houting, 2019). Regardless, the growing criticism of ABA presents providers with a great opportunity to advocate for those we aim to help with our services, not only by involving consumers in intervention decisions, but also by considering the voices and concerns of those within the community being served (e.g., individuals with ASD within the neurodiversity community) as a source of collaborative data. For example, some individuals with ASD who received behavioral interventions may not agree with the treatment targets (e.g., stereotypy reduction; Kapp et al., 2019) or with how the treatment was developed, often without input from the person receiving treatment (Gillespie-Lynch et al., 2021). We interpret these criticisms as reflecting instances of poor construction and administration of behavioral interventions, neither of which aligns with Wolf's definition of social validity.
One way to respond to recent criticisms of behavioral interventions, and to deliver services that consumers and stakeholders would deem socially valid, is for providers to develop and implement a collaborative feedback loop in which consumers and stakeholders are treated as consultants and their feedback is regarded as data that providers can use to make data-based decisions. First, this approach demonstrates a serious commitment to those in the neurodiversity community: we view them as colleagues and their voices as important. Second, the approach fosters and maintains meaningful relationships that are long overdue (Veneziano & Shea, 2022). Finally, collaborative feedback is in line with what we interpret as helping better realize the potential of ABA in producing socially significant outcomes (see Baer et al., 1968).




Fig. 5  A collaborative feedback loop between providers and the community they serve.

Fig. 5 depicts an example collaborative feedback loop between providers and the community they serve. In this example, providers spend significant time building relationships after reaching out into their communities. Providers then implement feedback from their community in their practice and collect data on the target changes identified by consumers and stakeholders. The arrow from "data collection" to "relationship building" signifies the most critical portion of this feedback loop: providers analyze the data with the community insiders. Providers must regard community insiders as colleagues; analyzing data with them will, we hope, encourage a collaborative relationship and create a more ethical and socially acceptable practice for all involved.

Summary

We introduced social validity measures and feedback loops, described practical social validity measures, and touched on how they might not be in line with the vision that inspired them (e.g., including the values of consumers; Wolf, 1978). We also discussed how feedback loops and collaboration are common across and within different fields, introduced collaborative feedback loops as a means for providers and consumers to create a relationship with each other, and proposed that feedback from both groups be regarded as critical data. Lastly, we offered an example of a collaborative feedback loop framework that providers can use to reach out to insiders in their communities, in this case, members of the neurodiversity movement.


Conclusion

In this chapter we defined behavioral systems and BSA and provided examples of how behavioral systems can be used to improve ethical behavior and consumer protection in three contexts: (1) incorporating consumer choice into treatment decisions; (2) monitoring adverse events in behavioral treatment; and (3) using collaborative consumer feedback loops to improve practice and standards. These examples are meant to demonstrate what we believe is the broad utility of behavioral systems for improving ethical behavior, and we encourage providers to continue to explore how behavioral systems can be used in their practice. A major strength of behavioral systems is that they can be tailored to virtually any environment to meet virtually any need. Of course, proper training and supervision remain critical for effective system implementation, and we encourage providers to seek such training and supervision when appropriate (see Brodhead, Quigley, & Wilczynski, 2018). Using behavioral systems requires attention to measurement, evaluation, and revision to support consumer outcomes. We would never recommend forgoing data collection and analysis for a behavioral intervention that supports consumer behavior, and we would never recommend designing and implementing a behavioral system without the same commitment. Behavioral systems present an opportunity to translate morals and ethical values into practice. This process takes time and commitment, but it improves ethical behavior, resulting in stronger consumer outcomes and protection and more socially valid interventions within service settings.

Conflict of interest

The first author receives financial compensation for the sale of authored texts and workbooks cited in this chapter. The second author declares no conflicts of interest.

References

Baer, D. M., Wolf, M. M., & Risley, T. R. (1968). Some current dimensions of applied behavior analysis. Journal of Applied Behavior Analysis, 1, 91–97. https://doi.org/10.1901/jaba.1968.1-91.
Bailey, J., & Burch, M. R. (2022). Ethics for behavior analysts (4th ed.). Routledge.
Baker, J., Saurer, S., Foor, A., Forbes, H., & Namey, D. (2017). Role of the speech-language pathologist in the ABA classroom: Inter-professional collaboration.
Behavior Analyst Certification Board. (2020). Ethics code for behavior analysts. https://bacb.com/wp-content/ethics-code-for-behavior-analysts/.




Boone, B. (2017). Ethics 101. Adams Media.
Bottema-Beutel, K., Crowley, S., Sandbank, M., & Woynaroski, T. G. (2021). Adverse event reporting in intervention research for young autistic children. Autism, 25, 322–335. https://doi.org/10.1177/1362361320965331.
Brodhead, M. T. (2015). Maintaining professional relationships in an interdisciplinary setting: Strategies for navigating non-behavioral treatment recommendations for individuals with autism. Behavior Analysis in Practice, 8, 70–78. https://doi.org/10.1007/s40617-015-0042-7.
Brodhead, M. T. (2019). Culture always matters: Some thoughts on Rosenberg and Schwartz. Behavior Analysis in Practice. https://doi.org/10.1007/s40617-019-00351-8 (Advance online publication).
Brodhead, M. T. (2020). A workbook in behavioral systems analysis and ethical behavior. Better ABA.
Brodhead, M. T., Abel, E., Al-Dubayan, M., Brouwers, L., Abston, G. W., & Rispoli, M. (2016). An evaluation of a brief multiple-stimulus without replacement preference assessment conducted in an electronic pictorial format. Journal of Behavioral Education, 25, 417–430. https://doi.org/10.1007/s10864-016-9254-3.
Brodhead, M. T., Abston, G. W., Mates, M., & Abel, E. (2017). Further refinement of video-based brief multiple-stimulus without replacement preference assessments. Journal of Applied Behavior Analysis, 50, 170–175. https://doi.org/10.1002/jaba.358.
Brodhead, M. T., Cox, D. J., & Quigley, S. P. (2018). Practical ethics for effective treatment of autism spectrum disorder. New York, NY: Academic Press.
Brodhead, M. T., & Higbee, T. S. (2012). Teaching and maintaining ethical behavior in a professional organization. Behavior Analysis in Practice, 5, 86–92. https://doi.org/10.1007/BF03391827.
Brodhead, M. T., Quigley, S. P., & Cox, D. J. (2018). How to identify ethical practices in organizations prior to employment. Behavior Analysis in Practice, 11, 165–173. https://doi.org/10.1007/s40617-018-0235-y.
Brodhead, M. T., Quigley, S. P., & Wilczynski, S. M. (2018). A call for discussion about scope of competence in applied behavior analysis. Behavior Analysis in Practice, 11, 424–435. https://doi.org/10.1007/s40617-018-00303-8.
Butler, C., & Graff, R. B. (2021). Stability of preference and reinforcing efficacy of edible, leisure, and social attention stimuli. Journal of Applied Behavior Analysis, 54, 684–699. https://doi.org/10.1002/jaba.807.
Carr, J. E., Nicolson, A. C., & Higbee, T. S. (2000). Evaluation of a brief multiple-stimulus preference assessment in a naturalistic context. Journal of Applied Behavior Analysis, 33(3), 353–357. https://doi.org/10.1901/jaba.2000.33-353.
Cooper, J. O., Heron, T. E., & Heward, W. L. (2020). Applied behavior analysis (3rd ed.). Pearson Education.
Cox, D. J. (2012). From interdisciplinary to integrated care of the child with autism: The essential role of a code of ethics. Journal of Autism and Developmental Disorders, 42, 2729–2738. https://doi.org/10.1007/s10803-012-1530-z.
Deel, N. M., Brodhead, M. T., Akers, J. S., White, A. N., & Miranda, D. R. G. (2021). Teaching choice-making within activity schedules to children with autism. Behavioral Interventions, 36, 731–744. https://doi.org/10.1002/bin.1816.
Den Houting, J. (2019). Neurodiversity: An insider's perspective. Autism, 23, 271–273.
Diener, L. H., McGee, H., & Miguel, C. F. (2009). An integrated approach to conducting a behavioral systems analysis. Journal of Organizational Behavior Management, 29, 108–135. https://doi.org/10.1080/01608060902874534.
Dillon, S., Armstrong, E., Goudy, L., Reynolds, H., & Scurry, S. (2021). Improving special education service delivery through interdisciplinary collaboration. Teaching Exceptional Children, 54(1), 36–43.
Engber, D. (2015, October 20). The strange case of Anna Stubblefield. New York Times. https://www.nytimes.com/2015/10/25/magazine/the-strange-case-of-anna-stubblefield.html.

366

Applied behavior analysis advanced guidebook

Foxx, R. M., & Mulick, J. A. (2016). Controversial therapies for autism and intellectual disabilities (2nd ed.). Routledge.
Frost, L. A., & Bondy, A. (1994). PECS: The picture exchange communication system. Cherry Hill, NJ: Pyramid Educational Consultants.
Gast, D. L., & Ledford, J. R. (2014). Single case research methodology: Applications in special education and behavioral sciences (2nd ed.). Routledge.
Giles, A., & Markham, V. (2017). Comparing book- and tablet-based picture activity schedules: Acquisition and preference. Behavior Modification, 41, 647–664. https://doi.org/10.1177/0145445517700817
Gillespie-Lynch, K., Bisson, J. B., Saade, S., Obeid, R., Kofner, B., Harrison, A. J., … Jordan, A. (2021). If you want to develop an effective autism training, ask autistic students to help you. Autism, 26, 13623613211041006.
Gischlar, K. L., Hojnoski, R. L., & Missall, K. N. (2009). Improving child outcomes with data-based decision making: Interpreting and using data. Young Exceptional Children, 13(1), 2–18.
Goltz, S. M. (2020). On power and freedom: Extending the definition of coercion. Perspectives on Behavior Science, 43, 137–156. https://doi.org/10.1007/s40614-019-00240-z
Hanley, G. P., Piazza, C. C., Fisher, W. W., & Maglieri, K. A. (2005). On the effectiveness of and preference for punishment and extinction components of function-based interventions. Journal of Applied Behavior Analysis, 38, 51–65. https://doi.org/10.1901/jaba.2005.6-04
Hojnoski, R. L., Gischlar, K. L., & Missall, K. N. (2009). Improving child outcomes with data-based decision making: Collecting data. Young Exceptional Children, 12(3), 32–44.
Idol, L., Nevin, A., & Paolucci-Whitcomb, P. (1994). Collaborative consultation. Pro-ed.
Idol, L., Paolucci-Whitcomb, P., & Nevin, A. (1995). The collaborative consultation model. Journal of Educational and Psychological Consultation, 6(4), 329–346.
Jolivette, K., Ennis, P. R., & Swoszowski, N. C. (2017). Educator "what-ifs": The feasibility of choice making in the classroom. Beyond Behavior, 26, 74–80. https://doi.org/10.1177/1074295617713977
Kapp, S. K., Steward, R., Crane, L., Elliott, D., Elphick, C., Pellicano, E., & Russell, G. (2019). 'People should be allowed to do what they like': Autistic adults' views and experiences of stimming. Autism, 23(7), 1782–1792.
Kuntz, E. M., & Carter, E. W. (2021). Effects of a collaborative planning and consultation framework to increase participation of students with severe disabilities in general education classes. Research and Practice for Persons with Severe Disabilities, 46(1), 35–52.
Lioutas, C., Einspanier, A., Kascheike, B., Walther, N., & Ivell, R. (1997). An autocrine progesterone positive feedback loop mediates oxytocin upregulation in bovine granulosa cells during luteinization. Endocrinology, 138(11), 5059–5062.
Lovaas, O. I. (1987). Behavioral treatment and normal educational and intellectual functioning in young autistic children. Journal of Consulting and Clinical Psychology, 55, 3–9. https://doi.org/10.1037/0022-006X.55.1.3
Luczynski, K. C., & Hanley, G. P. (2009). Do children prefer contingencies? An evaluation of the efficacy of and preference for contingent versus noncontingent social reinforcement during play. Journal of Applied Behavior Analysis, 42, 511–525. https://doi.org/10.1901/jaba.2009.42-511
Luiselli, J. K. (2021). Social validity assessment. In Applied behavior analysis treatment of violence and aggression in persons with neurodevelopmental disabilities (pp. 85–103). Cham: Springer.
Malott, R. M., & Garcia, M. E. (1987). A goal-directed model for the design of human performance systems. Journal of Organizational Behavior Management, 9, 125–129.
Malott, R. W. (1974). A behavioral systems approach to the design of human services. In D. Harshbarger, & R. F. Maley (Eds.), Behavior analysis and systems analysis: An integrative approach to mental health programs. Kalamazoo, MI: Behaviordelia.
National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research. (1979). The Belmont report: Ethical principles and guidelines for the protection of human subjects of research. U.S. Department of Health and Human Services. https://www.hhs.gov/ohrp/regulations-and-policy/belmont-report/read-the-belmont-report/index.html
Nisbet, J. (2021). Pain and shock in America: Politics, advocacy, and the controversial treatment of people with disabilities. Brandeis University Press.
Park, E. Y., & Blair, K. S. C. (2019). Social validity assessment in behavior interventions for young children: A systematic review. Topics in Early Childhood Special Education, 39(3), 156–169.
Peryer, G., Golder, S., Junqueira, D., Vohra, S., & Loke, Y. K. (2019). Chapter 19: Adverse effects. In J. P. T. Higgins, J. Thomas, J. Chandler, M. Cumpston, T. Li, M. J. Page, & V. A. Welch (Eds.), Cochrane handbook for systematic reviews of interventions version 6.0 (updated July 2019). Cochrane. Retrieved from https://training.cochrane.org/handbook/current/chapter-19
Pierce, W. D., & Cheney, C. D. (2013). Behavior analysis and learning (5th ed.). Psychology Press.
Rachels, J. (2014). The elements of moral philosophy. McGraw-Hill.
Ricciardi, J. N., Rothschild, A. W., Driscoll, N. M., Crawley, J., Wanganga, J., Fofanah, D. A., & Luiselli, J. K. (2020). Social validity assessment of behavior data recording among human services care providers. Behavioral Interventions, 35(3), 458–466.
Roane, H. S., Vollmer, T. R., Ringdahl, J. E., & Marcus, B. A. (1998). Evaluation of a brief stimulus preference assessment. Journal of Applied Behavior Analysis, 31(4), 605–620. https://doi.org/10.1901/jaba.1998.31-605
Salter, E. K. (2012). Deciding for a child: A comprehensive analysis of the best interest standard. Theoretical Medicine and Bioethics, 33, 179–198. https://doi.org/10.1007/s11017-012-9219-z
Schwartz, I. S., & Baer, D. M. (1991). Social validity assessments: Is current practice state of the art? Journal of Applied Behavior Analysis, 24(2), 189–204.
Sidman, M. (2001). Coercion and its fallout (rev. ed.). Authors Cooperative.
Sigurdsson, S. O., & McGee, H. M. (2015). Organizational behavior management: Systems analysis. In H. Roane, J. Ringdahl, & T. Falcomata (Eds.), Clinical and organizational applications of applied behavior analysis (pp. 627–647). New York, NY: Elsevier.
Sisti, M. K., & Robledo, J. A. (2021). Interdisciplinary collaboration practices between education specialists and related service providers. Journal of Special Education Apprenticeship, 10(1), n1.
Slocum, T. A., Detrich, R., Wilczynski, S. M., Spencer, T. D., Lewis, T., & Wolfe, K. (2014). The evidence-based practice of applied behavior analysis. The Behavior Analyst, 37, 41–56. https://doi.org/10.1007/s40614-014-0005-2
Stokes, T. F., & Baer, D. M. (1977). An implicit technology of generalization. Journal of Applied Behavior Analysis, 10, 349–367. https://doi.org/10.1901/jaba.1977.10-349
Thomas, A. L., & Brodhead, M. T. (2021). Bringing challenge to coercion and the status quo. In A. Beirne, & J. A. Sadavoy (Eds.), Understanding ethics in applied behavior analysis: Practical applications (2nd ed., pp. 345–384). Routledge.
Tiger, J. H., Hanley, G. P., & Bruzek, J. (2008). Functional communication training: A review and practical guide. Behavior Analysis in Practice, 1, 16–23. https://doi.org/10.1007/BF03391716
Vanselow, N. R., Thompson, R., & Karsina, A. (2011). Data-based decision making: The impact of data variability, training, and context. Journal of Applied Behavior Analysis, 44(4), 767–780.
Veneziano, J., & Shea, S. (2022). They have a voice: Are we listening? Behavior Analysis in Practice. https://doi.org/10.1007/s40617-022-00690-z (Advance online publication)
Villeneuve, M., & Hutchinson, N. L. (2012). Enabling outcomes for students with developmental disabilities through collaborative consultation. Qualitative Report, 17, 97.


Vollmer, T. R., Iwata, B. A., Zarcone, J. R., Smith, R. G., & Mazaleski, J. L. (1993). The role of attention in the treatment of attention-maintained self-injurious behavior: Noncontingent reinforcement and differential reinforcement of other behavior. Journal of Applied Behavior Analysis, 26, 9–21. https://doi.org/10.1901/jaba.1993.26-9
White, A. N., Oteto, N., & Brodhead, M. T. (2022). Providing choice-making opportunities to students with autism during instruction. Teaching Exceptional Children. https://doi.org/10.1177/00400599211068386
Wolf, M. M. (1978). Social validity: The case for subjective measurement or how applied behavior analysis is finding its heart. Journal of Applied Behavior Analysis, 11(2), 203–214.
Wolfe, K., McCammon, M. N., LeJeune, L. M., & Holt, A. K. (2021). Training preservice practitioners to make data-based instructional decisions. Journal of Behavioral Education, 1–20.

CHAPTER 15

Organizational behavior management in human services
James K. Luiselli

Clinical Development and Research, Melmark New England, Andover, MA, United States

Applied Behavior Analysis Advanced Guidebook. https://doi.org/10.1016/B978-0-323-99594-8.00015-5
Copyright © 2023 Elsevier Inc. All rights reserved.

Organizational behavior management in human services

Organizational behavior management (OBM) is a subspecialty of applied behavior analysis (ABA) concerned with employee performance within business, industry, and manufacturing (Wilder & Sigurdsson, 2015; Wine & Pritchard, 2018). The scope of OBM is broad and, among many objectives, focuses on assessment, process and outcome measurement, training, occupational incentives, and social validity. OBM also encompasses behavioral systems analysis (BSA; McGee & Diener, 2010), performance management (PM; Daniels & Bailey, 2014), and behavior-based safety (BBS; Martinez-Onstott, Wilder, & Sigurdsson, 2016). With relevance to this chapter, Ludwig (2015) emphasized that OBM "studies and applies behavior analysis techniques to assess and intervene in organizational problems" in order to "change human behavior in the workplace" (p. 606). Among the ABA practice components relevant to OBM are functional behavioral assessment (FBA), antecedent and consequence interventions, and observation-derived program evaluation.

Luke, Carr, and Wilder (2018) commented further on the compatibility between OBM and ABA, specifically with regard to behavior analyst certification (www.bacb.com). Although nearly three-quarters of board certified behavior analysts (BCBAs) practice in the area of intellectual and developmental disabilities, certification requirements are practice-neutral, and the task list qualifications designated by the BACB "that can be directly linked to the OBM literature is a testament to the prominence of OBM as a subspeciality area in applied behavior analysis" (p. 292). However, Luke et al. (2018) also concluded that few OBM professionals are certified behavior analysts and that many organizations discount credentialing in most instances. OBM professionals may perceive the process of acquiring added credentials such as the BCBA as a burden that is not necessary for practice in a competitive market. Another consideration is that reimbursement for services in common OBM settings does not come from third-party payers that require professional certification.

Behavior analysis and OBM have a lengthy history within human services settings for persons with cognitive and behavior challenges (Luiselli, Gardner, Bird, & Maguire, 2021; Reid, 1998). Sturmey (1998) summarized several OBM topics that were relevant to human services at the time of publication, notably the competencies and targeted behavior of care providers. For example, studies evaluated the effects of participative management on direct services personnel completing habilitation activities (Burgio, Whitman, & Reid, 1983), large-scale staff training programs (Greene, Willis, Levy, & Bailey, 1978), interventions to decrease employee absenteeism (Reid, Shuh-Weir, & Brannon, 1978), and safety management (Van den Pol, Reid, & Fuqua, 1983). Sturmey (1998) highlighted "a seven step prototypical OBM staff training program" (p. 19) designed by Reid and Parsons (1995) in which trainers-supervisors (a) provide care providers with written checklists of work skills, (b) describe and demonstrate (model) the skills, (c) observe care providers practicing the skills while receiving performance feedback, and (d) maintain the training sequence until care providers implement skills competently within both simulated and in vivo (on-the-job) conditions. This approach eventually evolved into the model of behavioral skills training (BST) that is used so effectively with human services practitioners (Erath, DiGennaro Reed, & Blackman, 2021; Reid, 2017; Vladescu & Marano, 2021).
Luiselli (2018b) proposed that an OBM perspective on human services training and performance management should begin by delineating the vocational objectives and expectations of care providers. Teachers, aides, therapists, and counselors at schools, community centers, and residential facilities must conduct skills instruction with children and adults, implement behavior support plans, record outcome data, follow regulatory policies and guidelines, and care for the physical environment. Personnel shortages, limited resources, competing work demands, absence of definitive competency standards, inadequate training and supervision, and poor performance motivators often exist in human services settings and contribute to less-than-exemplary service delivery. Accordingly, the priorities for OBM practice in human services are to assess the obstacles and barriers that compromise care provider training, performance improvement, and service recipients' achievement of quality-of-life goals. The end product of such assessment is to initiate and evaluate effective systems-change projects with lasting results.




In a review of OBM research within human services settings, Gravina et al. (2019) identified articles published in the Journal of Applied Behavior Analysis, Journal of Organizational Behavior Management, and Behavior Analysis in Practice for the period 1990 to 2016. The population targeted most frequently in studies was persons diagnosed with intellectual disability. Schools were the most common research location, followed equally by residential facilities, day treatment centers, and group homes. Teachers and direct care personnel were the most frequent employee participants. Gravina et al. (2019) found that the largest number of articles addressed treatment integrity, followed in descending order by safety, engagement, administrative and staff management, and preparation. The various OBM interventions were both antecedent and consequence focused, for example, staff training, performance monitoring, praise and feedback, goal-setting, and incentives. The review concluded that although OBM research in the area of human services has increased, very few studies conducted preintervention assessment, the range of intervention procedures was limited, outcome measures were not extensive, and greater inquiry should be directed at supervisors and program managers. Finally, Gardner, Bird, Maguire, and Luiselli (2021) proposed that "A starting point for OBM practitioners and researchers is that human services organizations must integrate systems with a unified framework that stresses clinical operations, benchmark measures, continuous progress monitoring, evidence-based interventions, data-driven performance review, and transparent reporting to stakeholder groups" (p. 4). Priorities for assessment and evaluation include performance diagnostics (Wilder, Cymbal, & Villacorta, 2020), the skills-enhancing effects of personnel training programs, and social validity.
Supervisory practices, acknowledgement of care provider preferences, and technology-assisted methods of service delivery also demand attention from organizational leaders. Numerous systems interventions are noteworthy, such as incentive-based performance contingencies with practitioners, managing staff turnover and attrition, reducing implementation of restrictive behavior management practices (RBMPs), and promoting health and wellness organization-wide. In Gardner et al. (2021), we also commented on the necessity of human services organizations developing a unified culture grounded in professional ethics (Brodhead & Higbee, 2012) and the promotion of diversity, equity, and inclusion (DEI) in the workplace (Akpapuna, Choi, Johnson, & Lopez, 2020; Fong, 2021). This chapter addresses ABA and OBM practices within human services settings for persons with intellectual and developmental disabilities (IDD).


In some situations, the behavior analysis practitioner may be a salaried employee at a school, center, or residential program, or contracted as an independent provider to these sites. Regardless of role, the first section of the chapter covers a consultation model for conducting and evaluating OBM applications. Next, I detail several domains within the scope of OBM and human services. A third section summarizes additional areas of interest and practice considerations.

OBM consultation model

Behavioral consultation is a multistage model of indirect service delivery in which a professional instructs, guides, and consults to interventionists (consultees) who engage with service recipients. Thus, a behavior analyst serving as OBM consultant would advise directors, managers, and care providers at human services settings about methods to improve operational systems, implement effective procedures with children and adults, evaluate performance, and build supportive systems, among other initiatives. There is good research and practice support for behavioral consultation across different clinical populations and service settings (Bergan & Kratochwill, 1990; Kratochwill, Elliott, & Callan-Stoiber, 2002; Luiselli, 2018a; Palmer, Pham, & Carlson, 2011).

The first stage of behavioral consultation is establishing a relationship with consultees. In the role of consultant, you should explain the purposes of your involvement, discuss how you were identified (e.g., as an on-staff behavior analyst or hired from outside of the setting), familiarize yourself with the consultees you will be working with, present background information, and answer questions. Relationship-building is enhanced by becoming familiar with the structure and intervention philosophy of the setting, operational components, and administrative hierarchy. Time is also well spent inquiring with consultees about their prior experiences with consultants, what they found useful, and any negative encounters. Note that, "At its most basic level, consultation is an interpersonal exchange—the consultant's success is going to hinge largely on his or her communication and relationship skills" (Gutkin & Curtis, 1982, p. 822). Therefore, be aware of promoting rapport among consultees and conditioning your "likeability" as a consultant.

Stage two of behavioral consultation is labeled problem identification and has several objectives.
A "problem" can be considered the reason a referral for consultation was made; therefore, it is necessary that consultees agree about purpose and objective(s). Problem identification often begins with the consultant and consultees meeting to pinpoint concerns, develop behavior-specific definitions of intervention targets, and present the rationale for consultation-directed plans. Once consensus is reached and participating consultees are confirmed, the consultant considers methods to measure the identified problem(s) before recommending corrective actions. In illustration, a residential care facility you consult to may be concerned about decreasing tardiness and absenteeism among care providers. This problem requires that facility administrators conscientiously document late attendance and missed hours, aggregate the data in usable form, and analyze patterns and trends as performance indicators (Luiselli et al., 2009). An already existing measurement system, or one designed at the problem identification stage, usually continues throughout consultation.

In stage three, problem analysis, the consultation team concentrates on the antecedent and consequence conditions that evoke and maintain the identified problem(s). Similar to functional behavior assessment (FBA) with children and adults (Call, Scheithauer, & Mevers, 2017; Chok, Harper, Weiss, Bird, & Luiselli, 2020), problem analysis relies on direct observation, interviews, completion of rating scales, and conditional probability relationships among behavior and environmental variables. Referring to employee tardiness and absenteeism again, the analysis might find that the problems are associated with particular days of the week, specific work requirements, shift assignments, and compensation (Strouse & DiGennaro Reed, 2021). Such analysis informs function-based intervention rather than arbitrarily selected strategies unlikely to solve the problem.
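The tardiness example above hinges on aggregating attendance records into usable form so that patterns (e.g., clustering on particular days of the week) become visible. A minimal sketch of that aggregation step is shown below; the log entries, employee names, and the weekday summary are invented for illustration and are not drawn from any published protocol.

```python
from collections import defaultdict
from datetime import date

# Hypothetical attendance log: (employee, work date, minutes late) tuples.
log = [
    ("staff_a", date(2023, 1, 2), 0),
    ("staff_a", date(2023, 1, 9), 15),
    ("staff_b", date(2023, 1, 6), 30),
    ("staff_b", date(2023, 1, 13), 20),
]

def minutes_late_by_weekday(log):
    """Total minutes late per day of week, to reveal whether
    tardiness clusters on particular days."""
    totals = defaultdict(int)
    for _, day, minutes in log:
        totals[day.strftime("%A")] += minutes
    return dict(totals)

print(minutes_late_by_weekday(log))  # {'Monday': 15, 'Friday': 50}
```

The same grouping logic extends to shifts or work requirements by swapping the key; the point is simply that raw attendance records must be summarized before they can function as performance indicators.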
Other matters to consider at the problem identification stage are the training requirements of consultees who will be responsible for implementing consultation recommendations, their approval and acceptance (social validity) of intervention decisions, and the necessary resources at service settings (Luiselli, 2018a).

Stage four of behavioral consultation is devoted to plan implementation, in which consultant intervention recommendations are initiated. There must be sufficient training of consultees for them to become competent interventionists, written (criterion-referenced) procedural protocols, and definitive explanation of the conditions under which intervention should and should not be applied. Further, consultants and other designated personnel, such as program managers and supervisors, are required to observe consultees during plan implementation in order to document intervention integrity. The second purpose of integrity-focused observation is to present consultees with feedback intended to reinforce competent performance and correct misapplied procedures (DiGennaro Reed, Hirst, & Howard, 2013; Parsons, Rollyson, & Reid, 2012; Reid, 2017). Observation and feedback continue until it can be confirmed that consultees are fully competent or require additional training before acceptable performance is demonstrated.

Finally, stage five addresses the effectiveness of consultation through plan evaluation that considers measurement data recorded before and during intervention. Behavior analysts are well acquainted with single-case evaluation designs that have utility in educational and treatment settings (Barlow, Nock, & Hersen, 2008; Kazdin, 2011) and are also applicable in an OBM context (Erath et al., 2021; Gravina et al., 2019). Idiographic methods can be translated successfully to practice venues and applied for plan evaluation in less controlled (quasi-experimental) formats. Both the immediate and extended (maintenance) effects of plans should be evaluated. Social validity assessment also comprises plan evaluation to record consultee acceptance and approval of intervention objectives, methods, and outcomes (Luiselli, 2021b; Schwartz & Baer, 1991; Wolf, 1978).

Although the behavioral consultation model is represented in five stages, the sequence is less distinct in practice. That is, consultation activities are typically blended and not conducted in isolation. For example, OBM consultation may merge problem identification and problem analysis, or plan implementation may occur for several problems but not at the same time. One advantage of the model is showing that indirect service delivery requires a consultant to possess multiple competencies and be able to accomplish interrelated tasks proficiently.

OBM-human services domains

This section of the chapter covers several domains of OBM that are relevant to most human services settings for persons with IDD. For each domain, I describe practice options available to behavior analysts and pertinent research that has guided empirically supported procedures.

Performance diagnostic assessment. Performance diagnostic assessment, also termed performance analysis (PA), identifies the workplace conditions that hamper, impede, and interfere with the productive behavior of employees (Wilder & Cymbal, 2021). Aspects of the physical environment, the demands placed on workers, behavior-consequence contingencies, and operations specificity are some of the variables that affect performance. Conducting a BSA is another way to frame performance diagnostics from a large-scale organizational perspective (Diener, McGee, & Miguel, 2009).




Austin (2000) designed the Performance Diagnostic Checklist (PDC) as an interview-based assessment instrument along four domains: Antecedents and Information, Equipment and Processes, Knowledge and Skills-Training, and Consequences. An OBM consultant gathers information from the PDC by asking supervisors and managers to describe how each domain contributes to performance problems. These findings are translated into performance improvement interventions according to antecedent and consequence sources of control. A second instrument, the Performance Diagnostic Checklist-Safety (PDC-Safety), isolates safety-related performance deficiencies. The PDC-Safety also informs corrective actions, such as an intervention to increase sanitary hand washing at a clinic for children with developmental disabilities (Cruz et al., 2019). Third, the Performance Diagnostic Checklist-Human Services (PDC-HS; Carr, Wilder, Majdalany, Mathisen, & Strain, 2013) is most applicable to the content of this chapter. The PDC-HS presents 20 questions across four domains requiring "yes, no, not applicable" responses from individuals responsible for program implementation at schools, centers, residential facilities, and similar settings. The category Training inquires about the skills of practitioners in completing necessary tasks. Task Clarification and Prompting clarifies practitioner understanding of procedural requirements, availability of job aids, and cuing strategies that may be in place. The category Resources, Materials, and Processes concerns the personnel and objects needed for task engagement. Within Performance Consequences, Effort, and Completion, the PDC-HS documents task supervision, monitoring, performance feedback, and response effort. Items scored "no" on the PDC-HS indicate that interventions should be initiated for the applicable categories.
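The scoring logic just described, tallying "no" responses by domain to flag intervention targets, can be sketched in a few lines. This is an illustrative sketch only: the domain names follow the chapter, but the sample items and the flagging rule (any "no" flags the domain) are simplified assumptions, not the published instrument.

```python
from collections import Counter

# Hypothetical PDC-HS-style interview responses as (domain, answer) pairs,
# where answer is "yes", "no", or "na". Items are invented for illustration.
responses = [
    ("Training", "no"),
    ("Training", "yes"),
    ("Task Clarification and Prompting", "no"),
    ("Task Clarification and Prompting", "no"),
    ("Resources, Materials, and Processes", "yes"),
    ("Performance Consequences, Effort, and Completion", "na"),
]

def flag_domains(responses):
    """Count 'no' responses per domain; any domain with one or more
    'no' items is returned as a candidate for intervention."""
    no_counts = Counter(domain for domain, answer in responses if answer == "no")
    return dict(no_counts)

for domain, count in sorted(flag_domains(responses).items()):
    print(f"{domain}: {count} 'no' item(s) -> consider intervention")
```

In practice, of course, the translation from flagged categories to interventions is the consultant's judgment (e.g., competency-based training for Training deficits), not an automated mapping.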
In illustration, suggested interventions for negatively scored items in the Training category would be intensified competency-based training (Reid, 2017; Vladescu & Marano, 2021), and for the Task Clarification and Prompting category, instructional checklists and guidance procedures.

Wilder et al. (2020) reviewed research on the PDC-HS and proposed new areas of inquiry. Among primarily human services settings for children, the PDC-HS has been used to inform interventions dedicated to environmental cleaning (Carr et al., 2013), securing therapy room doors (Ditzian, Wilder, King, & Tanz, 2015), implementing error correction during discrete trial instruction (Bowe & Sellers, 2018), conducting natural environment training (Wilder, Lipschultz, & Gehrman, 2018), and reducing employee tardiness (Merritt, DiGennaro Reed, & Martinez, 2019). With regard to future PDC-HS research and implications for practitioners, "current practice calls for a behavior analyst or consultant to use the tool to conduct an interview with a supervisor or manager regarding a performance problem exhibited by an employee" (Wilder et al., 2020, p. 174). Having nonbehavioral personnel conduct PDC-HS analysis with good results would increase the applicability and scope of performance diagnostic assessment. Studies that compare the PDC-HS to other methods of performance evaluation will also benefit practice. The psychometric properties of the PDC-HS, refining the protocol that guides interventions, and examining effectiveness within diverse service settings are additional research recommendations (Wilder et al., 2020). Other procedural refinements to the PDC-HS include establishing better quantified threshold (cut-off) levels for indicated interventions and applying a decision-making model when interventions are needed within multiple categories (Vance, Saini, & Guertin, 2022). These issues notwithstanding, the PDC-HS is a necessary tool for behavior analysts conducting OBM consultation with human services practitioners, and it highlights preintervention assessment as a priority regardless of the performance problems encountered.

Safety. Behavior-based safety (BBS) is a hallmark of OBM with emphasis on prevention, incident analysis, goal-setting, and performance management (Geller, 2005; McSween, 2003; Wilder & Sigurdsson, 2015). Accidental falls, environmental hazards, toxic substances, and improper handling of objects are common sources of injury in persons with IDD (Gianotti, Kahl, Harper, & Luiselli, 2021; Sherrard, Ozanne-Smith, & Staines, 2004; Tyler, White-Scott, Ekvall, & Abulafia, 2008). Physical intervention between care providers and service recipients also threatens safety, for example, when treating aggression and intervening with restraint (Luiselli, 2013; Sanders, 2009).
Further, health concerns occasioned by the COVID-19 pandemic posed many safety risks to ABA programs, heightened awareness about disease transmission, and mandated mitigation strategies among clients and employees (Cox, Plavnick, & Brodhead, 2020; Kornack, Williams, Johnson, & Mendes, 2020; LeBlanc, Lazo-Pearson, Pollard, & Unumb, 2020). Gravina and Matey (2021) summarized many areas of safety and injury prevention in human services settings that encompass an OBM framework. Safety targets should be defined according to observation checklists and employee behavior that conforms to task analyzed steps. Examples are care providers following standardized protocols for removing and securing hazardous materials, keeping certain objects out of reach, sanitizing surfaces, and wearing personnel protective equipment (PPE). Procedures that require lifting and positioning children and adults, fitting them with assistive



Organizational behavior management

377

devices, and performing medical-care routines must be monitored. Similarly, checklist data (e.g., percentage of steps completed accurately) demonstrate care provider competencies and deficiencies that must be overcome. Safety training described by Gravina and Matey (2021) can assist care providers in discriminating at-risk from safe behavior. A BST format with video modeling has been shown to be highly effective (Garcia, Dukes, Brady, Scott, & Wilson, 2016). Following training, care providers usually receive performance management interventions on the job comprised of antecedent environmental manipulations, goal-setting, and prompting (Luiselli, 2021a). Recognition, feedback, and positive reinforcement are the consequence interventions typically implemented with safety behavior. Organizational leadership and employee engagement in safety programming are necessary to promote and maintain a risk-averse culture (Weatherly, 2019).

In approaching human services safety objectives from an OBM model, behavior analysts should consider both individual and systems-wide interventions (Miltenberger & Novotny, 2022). Skill-building programs have included teaching children and adults to avoid poison hazards (Morosohk & Miltenberger, 2022), exit buildings during a fire emergency (Jones, Kazdin, & Haney, 1981), and wear facial coverings in a health crisis (Ertel, Wilder, & Hodges, 2021). Acquiring independent safety skills enables persons with IDD to react purposefully to threats that may arise when they are unsupervised. In an assessment study with adult services care providers (N = 59), Driscoll et al. (2022) distributed a survey that inquired about their experiences with 25 safety concerns among the persons they served within group homes and the community. Understanding the perceptions of care providers and, whenever possible, service-recipients themselves allows organizations to identify safety priorities and plan validated interventions.
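The checklist measure mentioned above (e.g., percentage of checklist steps completed accurately) is straightforward to compute from observation records. A minimal sketch, assuming hypothetical step names and observer scores:

```python
# Hypothetical procedural-integrity scoring sketch: each safety checklist
# step is scored as implemented correctly (True) or not (False), and
# integrity is the percentage of steps completed accurately.

def integrity_percentage(step_scores):
    """Return the percentage of checklist steps scored as correct."""
    if not step_scores:
        raise ValueError("checklist contains no scored steps")
    return 100 * sum(step_scores.values()) / len(step_scores)

# One example observation of a (hypothetical) safety checklist.
observation = {
    "secures hazardous materials": True,
    "keeps restricted objects out of reach": True,
    "sanitizes surfaces after activity": False,
    "wears PPE correctly": True,
}
print(round(integrity_percentage(observation), 1))  # → 75.0
```

Graphing these percentages across sessions is one common way to document the care provider competencies and deficiencies described above.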
Self-recording and in vivo assessment, root-cause analysis of safety data, and conscientious attention to environmental safeguards are noteworthy systems objectives (Gravina & Matey, 2021; Miltenberger & Novotny, 2022).

Training and performance management. The involvement of behavior analysts in OBM within human services is no more evident than in the areas of training and performance management (Novak et al., 2019). Training is customarily conducted preservice and incorporates didactic instruction, written materials, exercises, and simulated role-playing (DiGennaro Reed et al., 2013; Lerman, LeBlanc, & Valentino, 2015; Shapiro & Kazemi, 2017). Performance management involves supporting trainee competencies in their active roles with service-recipients (Parsons et al., 2012; Parsons, Rollyson, & Reid, 2013). However, despite the availability

378

Applied behavior analysis advanced guidebook

of evidence-based and empirically supported training and performance management methods, many human services settings lack implementation expertise. Notably, DiGennaro Reed and Henley (2015) found that in response to an online survey, only 55% of certified and aspiring behavior analysts received initial training after being hired at their setting. A larger number of respondents confirmed ongoing training and support but received less effective forms of performance management and were required to deliver supervision without sufficient prior experience. The authors concluded, “These results suggest training and performance management in settings that employ behavior analysts are not consistent with recommended practices” (p. 956).

BST comprised of instructions, demonstration, rehearsal, and feedback is the most effective model for teaching competencies to human services practitioners such as conducting discrete trial instruction (Sarokoff & Sturmey, 2004), preference assessment (Pence, St. Peter, & Tetreault, 2012), and functional analysis (Chok, Shlesinger, Studer, & Bird, 2012). Instructions to trainees are delivered verbally and in writing and aligned with steps listed and defined on a procedural integrity checklist. A trainer models steps in correct order and form, typically with accompanying description, both in vivo and via video recording. The rehearsal component of BST involves trainees implementing integrity checklist steps while receiving feedback from the trainer. Feedback is presented as praise and approval contingent on steps displayed accurately and correction with additional practice following steps that were misapplied. Training sessions continue until a trainee achieves predetermined mastery criteria. The extensive research literature on BST (Parsons et al., 2013; Reid, 2017; Vladescu & Marano, 2021) often comments on the amount of time needed to conduct training, which in many settings may be prohibitive and beyond available resources.
To economize training, studies have evaluated the effects of implementing some but not all BST components (Drifke, Tiger, & Wierzba, 2017; Ward-Horner & Sturmey, 2012) and found that optimal results depend on the full package. Erath et al. (2020) approached the practical considerations of using BST through group-administered training with care providers (N = 25) at a residential-services setting for persons with IDD. The care providers participated in a single group session lasting approximately 50 min in which they received BST to learn how to train other staff in similar procedures. Ten care providers reached mastery criterion after group training and ten care providers demonstrated competency with supplemental posttraining feedback.




Another practical approach to training is reducing the time spent delivering BST. Hartz, Gould, Harper, and Luiselli (2021) initially evaluated the effects of a written memo and nine-step integrity checklist on classroom instructors (N = 6) assessing interobserver agreement (IOA) with their students. Compared to baseline performance, this intervention was not effective. Next, a trainer conducted a 60 min BST session with two groups of three instructors. The first 30 min of training occurred outside of the classroom, where the trainer implemented conventional BST components of instruction, demonstration, rehearsal, and feedback. The remaining 30 min of training shifted to in vivo conditions in the classroom with the trainer observing the instructors assessing IOA and providing feedback. Abbreviated BST improved performance of all instructors and the total training time of 120 min was efficient.

Video-based training (VBT) evaluated by Erath, DiGennaro Reed, et al. (2021) is one more time-saving strategy applicable with human services providers. This study involved BST with four staff being trained to teach job-related skills to confederate peers. In a single training session, the research participants viewed (a) information about BST, (b) operational definitions and practice guidelines for implementing BST, and (c) two video models of a trainer conducting BST with a newly hired employee. During video viewing, they completed guided notes and later responded to a postvideo quiz. Compared to baseline conditions in a multiple baseline design, two of the four participants exhibited high BST integrity with a confederate peer, two participants reached similar levels after receiving brief positive feedback, and the average duration of VBT was 18 min and 34 s, a relatively brief amount of time to achieve these noteworthy results.
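As context for the IOA measure the instructors were trained to assess, trial-by-trial IOA is conventionally computed as the number of trials on which two observers agree, divided by the total number of trials, multiplied by 100. A minimal sketch with hypothetical observer records:

```python
# Trial-by-trial interobserver agreement (IOA): percentage of trials on
# which two observers recorded the same result. Records are hypothetical.

def trial_by_trial_ioa(observer_a, observer_b):
    """Return percentage agreement across paired trial records."""
    if len(observer_a) != len(observer_b) or not observer_a:
        raise ValueError("records must be nonempty and of equal length")
    agreements = sum(a == b for a, b in zip(observer_a, observer_b))
    return 100 * agreements / len(observer_a)

a = [1, 0, 1, 1, 0, 1, 1, 0, 1]  # 1 = response scored as occurring on trial
b = [1, 0, 1, 0, 0, 1, 1, 0, 1]
print(round(trial_by_trial_ioa(a, b), 1))  # → 88.9
```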
In summary, behavior analysts functioning as OBM consultants to human services settings are able to advise about BST, the most widely used and effective approach to training and performance management, including recommendations that address time and resource limitations. Posttraining performance management is particularly critical and can be promoted through BST in vivo and many supplemental interventions such as environmental cuing, goal-setting, self-management, and supervisor feedback (Luiselli, 2021a; Reid, Parsons, & Green, 2012).

Social validity. Wolf (1978) proposed that when evaluating ABA services, “It seems that if we aspire to social importance, then we must develop systems that allow our consumers to provide feedback about how our applications relate to their values, to their reinforcers” (p. 213). The direct consumers of social validity assessment are the recipients of services and


the individuals who implement them. A second group, indirect consumers, consists of persons who are not involved with but have a perspective on service delivery, for example, community merchants, referral sources, and regulatory agencies. Social validity is assessed through interviews, surveys, and questionnaires that pose statements to consumers about service objectives, methods, and outcomes (Luiselli, 2021b; Schwartz & Baer, 1991). Social validity within human services is closely related to the concept of perceived organizational support (POS; Rhoades & Eisenberger, 2002), which seeks the opinions of employees about how they are valued and their contributions appreciated in occupational settings (Kurtessis et al., 2015). Consumer feedback influences program decisions on many levels, suggests performance goals previously overlooked, and enhances job satisfaction. However, social validity has been underrepresented in the OBM-human services literature (Gravina et al., 2019) and ABA publications in general (Ferguson et al., 2018) for many years.

More recently, several examples of social validity assessment in human services organizations have appeared. The participants in Erath et al. (2020) completed a modified version of the Intervention Rating Profile-15 (IRP-15; Martens, Witt, Elliott, & Darveaux, 1985) consisting of six items that rated acceptability of the training workshop and BST they received on a six-point Likert-type scale from “strongly disagree” to “strongly agree.” Three other items were open-ended questions that solicited qualitative feedback. In the Erath, DiGennaro Reed, et al.
(2021) study on video-based BST, a 10-item questionnaire was distributed to training participants with statements such as “The training program would be an acceptable way to help trainers accurately train their staff” and “I like the procedures used to assist me in training new staff.” Other contemporary illustrations of social validity are systems-wide assessments of organizational policies and procedures. Ricciardi et al. (2020) sampled care providers (N = 78) from 13 group homes about the practicality, training, supervision, importance, and utility of behavior data recording practices required in their work with adults who had IDD. Table 1 shows the statements contained on a questionnaire the care providers rated on a five-point Likert-type scale (1: strongly disagree, 2: disagree, 3: neither disagree nor agree, 4: agree, 5: strongly agree). Rothschild et al. (2022) evaluated responsiveness of a human services organization to the COVID-19 pandemic from a questionnaire completed by care providers (N = 498) and targeted leadership actions such as modifying work assignments, delivering services remotely, improving technology resources, and initiating new




Table 1  Example of social validity assessment questionnaire completed by human services care providers.

Questionnaire statement    Average rating
I know when I am responsible for recording behavior data    4.66 (sd = 0.75)
Behavior data I record are used to make decisions about the people I serve    4.57 (sd = 0.91)
The behavior data recording sheets are easily accessible    4.57 (sd = 0.75)
Behavior data are important to my work    4.42 (sd = 1.03)
I do not have someone I can contact if I need clarification completing behavior data recording sheets*    4.38 (sd = 1.14)
I do not find it helpful to review behavior data periodically*    4.24 (sd = 1.2)
Recording behavior data does not help the people I serve*    4.23 (sd = 1.32)
Training in behavior data recording has happened on my shift    4.19 (sd = 1.18)
My supervisor teaches me how to record behavior data recording sheets    4.18 (sd = 1.10)
There is enough time to record behavior data while on shift    4.10 (sd = 1.14)
I understand how clinicians use behavior data after it has been recorded    4.03 (sd = 1.08)
I have not been trained to complete data recording sheets in the program*    3.96 (sd = 1.45)
Behavior definitions are easy to understand    3.84 (sd = 1.12)
My other job tasks get in the way of recording behavior data*    3.71 (sd = 1.38)
Behavior data recording sheets are not easy to fill out*    3.69 (sd = 1.40)
Additional training in behavior data recording would be helpful    3.64 (sd = 1.26)
I have been shown graphs of the behavior data we record in the program    3.54 (sd = 1.37)
Other staff record behavior data the same way    3.38 (sd = 1.33)

Note: 1: strongly disagree, 2: disagree, 3: neither disagree nor agree, 4: agree, 5: strongly agree; * = reverse coding.

safety guidelines. Another COVID-19 project, by Maguire, Harper, Gardner, and Luiselli (2022), conducted social validity assessment with supervisors (N = 14) who trained care providers to implement health and safety protocols at a residential school for students with intellectual and neurodevelopmental disabilities. The assessment asked the supervisors to rate their approval of the school plans, communication about risk-mitigation strategies, and recommendations for similar interventions at other human services settings. The preceding examples should encourage behavior analysts to emphasize social validity in OBM consultation to human services settings.
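Summarizing questionnaire data such as those in Table 1 requires reverse coding the asterisked (negatively worded) items so that higher scores always reflect more favorable perceptions; on a five-point scale, a raw rating r becomes 6 - r. A minimal sketch with made-up ratings:

```python
# Reverse-code negatively worded Likert items (5-point scale) so that
# higher always means more favorable, then compute each item's mean.
# Ratings below are made up for illustration.

from statistics import mean

SCALE_MAX = 5

def item_mean(ratings, reverse_coded=False):
    """Mean rating for one item, reverse-coding if flagged."""
    if reverse_coded:
        ratings = [SCALE_MAX + 1 - r for r in ratings]
    return mean(ratings)

raw = [1, 2, 1, 1, 2]  # respondents disagree with a negatively worded item
print(item_mean(raw, reverse_coded=True))  # → 4.6
```

After reverse coding, a high mean on an asterisked item (e.g., 4.38 for "I do not have someone I can contact…") indicates that respondents disagreed with the negative statement.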


Perceptions of service-recipients, care providers, and other stakeholders contribute greatly to program evaluation on many fronts and to improving quality of care. Keep in mind that the social value of behavioral interventions is a foundational principle of ABA articulated more than 50 years ago (Baer, Wolf, & Risley, 1968). Schwartz and Baer (1991), Common and Lane (2017), and Luiselli (2021b) contain several recommendations for designing, administering, and aggregating the data from social validity assessment forms.

Turnover and incentives. Two other areas of OBM consultation are employee turnover and programming incentives to motivate performance improvement. Wine, Osborne, and Newcomb (2020) noted that ABA human services organizations must confront a high incidence of turnover that has many origins and predisposing factors. Turnover within all employment positions is evident but most frequent among direct service providers (National Core Indicators [NCI], 2019). Persistent turnover imposes increased effort on the workforce, lessens quality of care, is a financial burden, and has a negative impact on morale (Larson, Tolbize, Kim, & York, 2016). It appears that high turnover is associated with low pay, hard-to-manage work schedules, and difficult job assignments, but no single factor is apparent and conditions vary from one human services setting to another (Bogenshutz, Hewitt, Nord, & Hepperlen, 2014; Strouse & DiGennaro Reed, 2021). There is evidence that turnover within ABA service settings is impacted by training, supervision, and salaries (Kazemi, Shapiro, & Kavner, 2015), the degree of collegial support (Plantiveau, Dounavi, & Virues-Ortega, 2018), and general job satisfaction (Cymbal, Litvak, Wilder, & Burns, 2021). Addressing the sources of turnover is the logical approach to resolution, but no one strategy is likely to be effective and most settings have to attack the problem incrementally.
Strouse, Carroll-Hernandez, Sherman, and Sheldon (2004) is one of the few examples of an organizational plan to reduce turnover, specifically by decreasing reliance on part-time employees, raising pay, and adjusting shift assignments to longer hours over fewer days. Wine et al. (2020) suggested that hiring practices should delineate job responsibilities more clearly and that effective performance management systems are needed to diminish turnover. Hence, behavior analysts can advise human services settings on methods to systematically decrease turnover and, in consequence, establish a cohesive and high-quality workforce.

Performance incentives are a popular OBM intervention (Daniels & Bailey, 2014; Wine, 2017) and have been used in human services settings to increase care provider completion of clerical tasks (Cook & Dixon, 2006), intervention




integrity (Miller, Carlson, & Sigurdsson, 2014), and attendance (Luiselli et al., 2009). Delivering incentives successfully requires precise pinpointing of performance criteria and reliable measurement through permanent product and observational data recording. Contingencies must be arranged so that the behavior and outcomes that have been incentivized are defined unambiguously (DiGennaro Reed, Novak, Erath, Brand, & Henley, 2018). The effectiveness of performance incentives in OBM is enhanced by assessment of employee preferences (Wine & Doan, 2021). Positive consequences to increase work behavior include privileges (e.g., early release, flex scheduling), tangible items such as gift cards and lottery tickets, and money (Wine, Gugliemella, & Axelrod, 2013), but what is preferred varies among employees, changes over time, and has relative value among available options. Behavior analysts trained in preference assessment with service-recipients, a fundamental practice competency, are well equipped to design employee surveys and questionnaires that inquire about and rank order proposed incentives. Two guidelines for assessment are (a) including only incentives a human services setting has the resources to offer, and (b) giving employees the opportunity to suggest other possible incentives. In addition, assessment should be carried out at regular intervals to accommodate fluctuating preferences among established employees and the choices of newly hired personnel. Further, if and when incentives are identified and programmed within a performance improvement plan, human services settings must apply them consistently, with fidelity, tied to probabilistic outcomes (Wine, Chen, & Brewer, 2019), and equitably.
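One simple way to aggregate rank-order preference surveys of this kind is to average each incentive's rank across respondents and sort ascending (lower mean rank = more preferred). The incentives and responses below are hypothetical:

```python
# Aggregate hypothetical rank-order incentive surveys: average each
# incentive's rank across employees (1 = most preferred), sort ascending.

from statistics import mean

responses = [  # each dict maps incentive -> rank given by one employee
    {"gift card": 1, "flex scheduling": 2, "early release": 3},
    {"gift card": 2, "flex scheduling": 1, "early release": 3},
    {"gift card": 1, "flex scheduling": 3, "early release": 2},
]

mean_ranks = {
    item: mean(r[item] for r in responses) for item in responses[0]
}
for item, rank in sorted(mean_ranks.items(), key=lambda kv: kv[1]):
    print(f"{item}: {rank:.2f}")
```

With these made-up responses, the gift card is listed first (mean rank 1.33). Repeating the survey at regular intervals, as recommended above, would simply mean recomputing these means with fresh responses.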

Additional practice considerations

This chapter has called attention to the compatibility between OBM and the professional practice of behavior analysts in human services settings for persons who have IDD. Among contemporaries, Florence DiGennaro Reed, Nicole Gravina, Dennis Reid, Peter Sturmey, David Wilder, and Byron Wine are ABA stalwarts who exemplify this confluence and have been responsible for many research-to-practice advancements in assessment, intervention, evaluation, and program design. OBM is a vibrant practice endeavor that behavior analysts with human services expertise in IDD should pursue, including with populations challenged by brain injury, learning disabilities, emotional disorders, and physical disabilities. The points Luke et al. (2018) raised about behavior analyst certification pertinent to OBM touched on several stages of preparation linked to


the BACB task list (BACB, 2022b). Graduate school coursework required for credentialing features articles from the OBM literature (Pastrana et al., 2018) and the topics of ethics and professionalism “could be taught from an OBM lens” (p. 296). Fieldwork requirements embrace many OBM activities consisting of performance analysis, formulating incentive-based initiatives, delivering supervision, implementing pyramidal (train-the-trainer) training, and coaching managers. It is meaningful that the BACB Professional and Ethical Compliance Code for Behavior Analysts (BACB, 2022a) guides conduct in an OBM context and, as proffered by Luke et al. (2018), may be useful to noncredentialed practitioners. Therefore, preparing for and practicing with behavior analyst certification makes OBM specialization a reasonable goal.

Behavior analysts in OBM roles can establish teams at their service settings, bringing together personnel from multiple divisions, operations staff, program directors, practitioners, and care providers dedicated to performance improvement projects (Luiselli, 2018b). Building teams ensures that OBM projects are consensus-driven, there are requisite resources for successful implementation, project assignments can be shared by the group, and interventions receive continuous monitoring and evaluation. As stressed throughout the chapter, performance improvement teams should have definitive service objectives and evaluation criteria associated with valid outcome measures.

Staying current with the OBM literature and the most contemporary topics in the field is necessary. Apropos to human services in IDD, the Journal of Organizational Behavior Management, Journal of Applied Behavior Analysis, Behavior Analysis in Practice, and Behavioral Interventions regularly publish research, review, and discussion articles whose information is practice-enhancing.
Attendance at conferences sponsored by the Association for Behavior Analysis International (ABAI), Association of Professional Behavior Analysts (APBA), and the Organizational Behavior Management Network is another avenue toward continuing education, professional development, and forming relationships with like-minded colleagues. Whenever possible, conducting and disseminating OBM research will sharpen practice skills and, of course, contribute to the profession at large (Luiselli, Gardner, Bird, Maguire, & Harper, 2022). Finally, behavior analysts involved with OBM consultation should take advantage of the many technology-assisted and telehealth modalities that facilitate practice (Bice-Urbach, 2019; Radley, Helbig, & Schrieber, 2019). Computer- and web-based instruction delivered asynchronously provide




a useful training medium with many automated and recording functions (Dart & Wright, 2019). Teleconsultation enables remote observation, supervision, and monitoring through synchronous videoconferencing, text messaging, and data sharing (Sellers & Walker, 2019). The benefits of technology innovations include conducting services without travel restrictions, reaching large audiences, consolidating time, documenting activities within video-recorded files, and communicating instantaneously. Such practice demands diligent adherence to prevailing privacy, confidentiality, legal, and licensing regulations (Kramer, Mishkind, Luxton, & Shore, 2013).

References

Akpapuna, M., Choi, E., Johnson, D. A., & Lopez, J. A. (2020). Encouraging multiculturalism and diversity within organizational behavior management. Journal of Organizational Behavior Management, 40(3–4), 186–209. https://doi.org/10.1080/01608061.2020.1832014.
Austin, J. (2000). Performance analysis and performance diagnostics. In J. Austin, & J. E. Carr (Eds.), Handbook of applied behavior analysis (pp. 321–350). Context Press.
Baer, D. M., Wolf, M. M., & Risley, T. R. (1968). Some current dimensions of applied behavior analysis. Journal of Applied Behavior Analysis, 1, 91–97. https://doi.org/10.1901/jaba.1968.1-91.
Barlow, D. H., Nock, M. N., & Hersen, M. (2008). Single-case experimental designs: Strategies for studying behavior change (3rd ed.). Pearson.
Behavior Analyst Certification Board. (2022a). Professional and ethical compliance code for behavior analysts. Littleton, CO: Behavior Analyst Certification Board. Retrieved from: https://www.bacb.com/ethics/ethics-code.
Behavior Analyst Certification Board. (2022b). BCBA/BCaBA task list (5th ed.). Littleton, CO: Behavior Analyst Certification Board. Retrieved from: https://www.bacb.com/bcba-bcaba-task-list-5th-ed.
Bergan, J. R., & Kratochwill, T. R. (1990). Behavioral consultation and therapy. Plenum.
Bice-Urbach, B. (2019). Practical issues when using technology during consultation, supervision, and professional training. In A. J. Fischer, T. A. Collins, E. H. Dart, & K. C. Radley (Eds.), Technology applications in school psychology, consultation, supervision, and training (pp. 221–245). Routledge.
Bogenshutz, M. D., Hewitt, A., Nord, D., & Hepperlen, R. (2014). Direct support workforce supporting individuals with IDD: Current wages, benefits, and stability. Intellectual and Developmental Disabilities, 52(5), 317–329. https://doi.org/10.1352/1934-9556-52.5.317.
Bowe, M., & Sellers, T. P. (2018). Evaluating the performance diagnostic checklist-human services to assess incorrect error-correction procedures by preschool paraprofessionals. Journal of Applied Behavior Analysis, 51, 166–176. https://doi.org/10.1002/jaba.428.
Brodhead, M. T., & Higbee, T. S. (2012). Teaching and maintaining ethical behavior in a professional organization. Behavior Analysis in Practice, 5(2), 82–88. https://doi.org/10.1007/BF03391827.
Burgio, L. D., Whitman, T. L., & Reid, D. H. (1983). A participative management approach for improving client care staff performance in an institutional setting. Journal of Applied Behavior Analysis, 16, 37–53. https://doi.org/10.1901/jaba.1983.16-37.
Call, N. A., Scheithauer, M. C., & Mevers, J. L. (2017). Functional behavioral assessments. In J. K. Luiselli (Ed.), Applied behavior analysis advanced guidebook: A manual for professional practice (pp. 73–92). Elsevier/Academic Press. https://doi.org/10.1016/B978-0-12-811122-2.00004-8.


Carr, J. E., Wilder, D. A., Majdalany, L., Mathisen, D., & Strain, L. A. (2013). An assessment-based solution to a human-service employee performance problem: An initial evaluation of the performance diagnostic checklist—Human services. Behavior Analysis in Practice, 6, 16–32. https://doi.org/10.1007/bf03391789.
Chok, J. T., Harper, J. M., Weiss, M. J., Bird, F. L., & Luiselli, J. K. (2020). Functional analysis: A practitioner's guide to implementation and training. Elsevier/Academic Press.
Chok, J. T., Shlesinger, A., Studer, L., & Bird, F. L. (2012). Description of a practitioner training program on functional analysis and treatment development. Behavior Analysis in Practice, 5(2), 25–36. https://doi.org/10.1007/BF03391821.
Common, E. A., & Lane, K. L. (2017). Social validity assessment. In J. K. Luiselli (Ed.), Applied behavior analysis advanced guidebook: A manual for professional practice (pp. 73–92). Elsevier/Academic Press. https://doi.org/10.1016/B978-0-12-811122-2.00004-8.
Cook, T., & Dixon, M. R. (2006). Performance feedback and probabilistic bonus contingencies among employees in a human services program. Journal of Organizational Behavior Management, 25, 45–63. https://doi.org/10.1300/J075v25n03_04.
Cox, D. J., Plavnick, J. B., & Brodhead, M. T. (2020). A proposed process for risk mitigation during the COVID-19 pandemic. Behavior Analysis in Practice, 13, 299–305. https://doi.org/10.1007/s40617-020-00430-1.
Cruz, N. J., Wilder, D. A., Phillabaum, C., Thomas, R., Cusick, M., & Gravina, N. (2019). Further evaluation of the performance diagnostic checklist-safety (PDC-safety). Journal of Organizational Behavior Management, 39, 266–279. https://doi.org/10.1080/01608061.2019.1666777.
Cymbal, D. J., Litvak, S., Wilder, D. A., & Burns, G. N. (2021). An examination of variables that predict turnover, staff and caregiver satisfaction in behavior-analytic organizations. Journal of Organizational Behavior Management. https://doi.org/10.1080/01608061.2021.1910099.
Daniels, A., & Bailey, J. (2014). Performance management: Changing behavior that drives organizational effectiveness. Performance Management Systems.
Dart, E. H., & Wright, S. J. (2019). Web-based and technology mediated training through a three-tiered model. In A. J. Fischer, T. A. Collins, E. H. Dart, & K. C. Radley (Eds.), Technology applications in school psychology, consultation, supervision, and training (pp. 179–197). Routledge.
Diener, L. H., McGee, H. M., & Miguel, C. F. (2009). An integrated approach for conducting a behavioral systems analysis. Journal of Organizational Behavior Management, 29, 108–135. https://doi.org/10.1080/01608060902874534.
DiGennaro Reed, F. D., & Henley, A. J. (2015). A survey of staff training and performance management practices: The good, the bad, and the ugly. Behavior Analysis in Practice, 8, 16–26. https://doi.org/10.1007/s40617-015-0044-5.
DiGennaro Reed, F. D., Hirst, J. M., & Howard, V. J. (2013). Empirically supported staff selection, training, and management strategies. In D. D. Reed, F. D. DiGennaro Reed, & J. K. Luiselli (Eds.), Handbook of crisis intervention and developmental disabilities (pp. 71–85). Springer.
DiGennaro Reed, F. D., Novak, M. D., Erath, T. G., Brand, D., & Henley, A. J. (2018). Pinpointing and measuring employee behavior. In B. Wine, & J. K. Pritchard (Eds.), Organizational behavior management: The essentials (pp. 143–168). Hedgehog Publishers.
Ditzian, K., Wilder, D. A., King, A., & Tanz, J. (2015). An evaluation of the performance diagnostic checklist—Human services to assess an employee performance problem in a center-based autism treatment facility. Journal of Applied Behavior Analysis, 48, 199–203. https://doi.org/10.1007/s40617-018-0243-y.
Drifke, M. A., Tiger, J. H., & Wierzba, B. C. (2017). Using behavioral skills training to teach parents to implement three-step prompting: A component analysis and generalization assessment. Learning and Motivation, 57, 1–14. https://doi.org/10.1016/j.lmot.2016.12.001.




Driscoll, N. M., Rothschild, A. W., Luiselli, J. K., Goldberg, S., Crawley, J., Fofanah, D., et al. (2022). Brief report: Safety concerns of direct service providers for adults with intellectual and developmental disabilities. Manuscript submitted for publication.
Erath, T. G., DiGennaro Reed, F. D., & Blackman, A. L. (2021). Training human service staff to implement behavioral skills training using video-based instruction. Journal of Applied Behavior Analysis, 54, 1251–1264.
Erath, T. G., DiGennaro Reed, F. D., Sundermeyer, H. W., Brand, D., Novak, M. D., Harbison, M. L., et al. (2020). Enhancing the training integrity of human service staff using pyramidal behavioral skills training. Journal of Applied Behavior Analysis, 53, 449–464. https://doi.org/10.1002/jaba.608.
Erath, T. G., Pellegrino, A. J., DiGennaro Reed, F. D., Ruby, S. A., Blackman, A. L., & Novak, M. D. (2021). Experimental research methodologies in organizational behavior management. Journal of Organizational Behavior Management, 41, 150–181. https://doi.org/10.1080/01608061.2020.1869137.
Ertel, H. M., Wilder, D. A., & Hodges, A. C. (2021). Evaluation of a graduated exposure procedure to teach extended mask wearing in various settings to children with autism. Behavior Modification.
Ferguson, J. L., Cihon, J. H., Leaf, J. B., Van Meter, S. M., McEachin, J., & Leaf, R. (2018). Assessment of social validity trends in the Journal of Applied Behavior Analysis. European Journal of Behavior Analysis. https://doi.org/10.1080/15021149.2018.1534771.
Fong, E. B. H. (2021). Building a culturally and diversity sensitive workforce. In J. K. Luiselli, R. M. Gardner, F. L. Bird, & H. Maguire (Eds.), Organizational behavior management approaches for intellectual and developmental disabilities (pp. 251–266). Routledge.
Garcia, D., Dukes, C., Brady, M. P., Scott, J., & Wilson, C. (2016). Using modeling and rehearsal to teach fire safety to children with autism. Journal of Applied Behavior Analysis, 49, 699–704. https://doi.org/10.1002/jaba.331.
Gardner, R. M., Bird, F. L., Maguire, H., & Luiselli, J. K. (2021). Introduction to organizational behavior management in intellectual and developmental disabilities. In J. K. Luiselli, R. M. Gardner, F. L. Bird, & H. Maguire (Eds.), Organizational behavior management approaches for intellectual and developmental disabilities (pp. 3–13). Routledge.
Geller, E. S. (2005). Behavior-based safety and occupational risk management. Behavior Modification, 29(3), 539–561. https://doi.org/10.1177/0145445504273287.
Gianotti, J., Kahl, T., Harper, J. M., & Luiselli, J. K. (2021). Behavioral safety assessment and intervention among residential care providers of students with intellectual and developmental disabilities. Journal of Developmental and Physical Disabilities, 33, 789–798. https://link.springer.com/article/10.1007/s10882-020-09773-7.
Gravina, N., & Matey, N. (2021). Safety and injury prevention. In J. K. Luiselli, R. M. Gardner, F. L. Bird, & H. Maguire (Eds.), Organizational behavior management approaches for intellectual and developmental disabilities (pp. 191–209). Routledge.
Gravina, N., Villacorta, J., Albert, K., Clark, R., Curry, S., & Wilder, D. (2019). A literature review of organizational behavior management interventions in human services settings from 1990 to 2016. Journal of Organizational Behavior Management, 38, 191–224. https://doi.org/10.1080/01608061.2018.1454872.
Greene, B. F., Willis, B. S., Levy, R., & Bailey, J. S. (1978). Measuring client gain from staff-implemented programs. Journal of Applied Behavior Analysis, 11, 395–412. https://doi.org/10.1901/jaba.1978.11-395.
Gutkin, T. B., & Curtis, M. J. (1982). School-based consultation: Theory and techniques. In C. R. Reynolds, & T. B. Gutkin (Eds.), The handbook of school psychology (pp. 796–828). Wiley.
Hartz, R. M., Gould, K., Harper, J. M., & Luiselli, J. K. (2021). Assessing interobserver agreement (IOA) with procedural integrity: Evaluation of training methods among classroom instructors. Child & Family Behavior Therapy, 43, 1–12. https://doi.org/10.1080/00168890.2020.1848404.

388

Applied behavior analysis advanced guidebook

Jones, R. T., Kazdin, A. E., & Haney, J. I. (1981). Social validation and training of emergency fire safety skills for potential injury prevention and life-saving. Journal of Applied Behavior Analysis, 14(3), 249–260. https://doi.org/10.1901/jaba.1981.14-249. Kazdin, A. E. (2011). Single-case research designs. In Methods for clinical and applied settings (2nd ed.). Oxford University Press, ISBN:978-0-19-534188-1. Kazemi, E., Shapiro, M., & Kavner, A. (2015). Predictors of intention to turnover in behavior technicians working with individuals with autism spectrum disorder. Research in Autism Spectrum Disorders, 17, 106–115. https://doi.org/10.1016/j.rasd.2015.06.012. Kornack, J., Williams, A. L., Johnson, K. A., & Mendes, E. M. (2020). Reopening the doors to center-based ABA services: Clinical and safety protocols during COVID-19. Behavior Analysis in Practice, 13, 543–549. https://doi.org/10.1007/s40617-020-00462-7. Kramer, G. M., Mishkind, M. C., Luxton, D. D., & Shore, J. (2013). Managing risk and protecting privacy. In K. Myers, & C. L. Turvey (Eds.), Telemental health: An overview of legal, regulatory, and risk-management issues (pp. 83–107). Elsevier. Kratochwill, T. R., Elliott, S. N., & Callan-Stoiber, K. (2002). Best practices in school-based problem-solving consultation. In A. Thomas, & J. Grimes (Eds.), Best practices in school psychology IV (pp. 583–608). National Association of School Psychologists. Kurtessis, J. N., Eisenberger, R., Ford, M.T., Buffardi, L. C., Stewart, K. A., & Adis, C. S. (2015). Perceived organizational support: A meta-analytic evaluation of organizational support theory. Journal of Management, 31, 874–900. https://doi.org/10.1177/0149206315575554. Larson, S. A., Tolbize, M. S. A., Kim, O., & York, B. (2016). Direct support professional turnover costs in small group homes: A case study. Minneapolis: University of Minnesota, Research and Training Center on Community Living. LeBlanc, L. A., Lazo-Pearson, J. F., Pollard, J. 
S., & Unumb, L. S. (2020). The role of compassion and ethics in decision making regarding access to applied behavior analysis services during the COVID-19 crisis: A response to Cox, Plavnick, and Brodhead. Behavior Analysis in Practice, 13, 604–608. https://doi.org/10.1007/s40617-020-00446-7. Lerman, D. C., LeBlanc, L. A., & Valentino, A. L. (2015). Evidence-based application of staff and caregiver training procedures. In H. Roane, J. E. Ringdahl, & T. Falcomata (Eds.), Clinical and organizational applications of applied behavior analysis (pp. 321–351). Elsevier. Ludwig, T. D. (2015). Organizational behavior management: An enabler of applied behavior analysis. In H. Roane, J. E. Ringdahl, & T. Falcomata (Eds.), Clinical and organizational applications of applied behavior analysis (pp. 605–625). Elsevier. Luiselli, J. K. (2013). Descriptive analysis of a staff injury-reduction intervention in a human services setting for children and youth with intellectual and developmental disabilities. Behavior Modification, 37, 665–679. https://doi.org/10.1177/0145445513489131. Luiselli, J. K. (2018a). Conducting behavioral consultation in educational and treatment settings. London, UK: Elsevier/Academic Press. Luiselli, J. K. (2018b). Organizational behavior management in human services programs. In B. Wine, & J. Pritchard (Eds.), Organizational behavior management: The essentials (pp. 340–363). Orlando, FL: Hedgehog Publishers. Luiselli, J. K. (2021a). Performance management interventions. In J. K. Luiselli, R. M. Gardner, F. L. Bird, & H. Maguire (Eds.), Organizational behavior management approaches for intellectual and developmental disabilities (pp. 100–123). Routledge. Luiselli, J. K. (2021b). Social validity assessment in human services organizations. In J. K. Luiselli, R. M. Gardner, F. L. Bird, & H. Maguire (Eds.), Organizational behavior management approaches for intellectual and developmental disabilities (pp. 46–66). Routledge. Luiselli, J. K., DiGennaro Reed, F. 
D., Christian, W. P., Markowski, A., Rue, H. C., St. Amand, C., et al. (2009). Effects of an informational brochure, lottery-based financial incentive, and public posting on absenteeism of direct-care human services employees. Behavior Modification, 33, 175–181. https://doi.org/10.1177/0145445508320624.



Organizational behavior management

389

Luiselli, J. K., Gardner, R. M., Bird, F., & Maguire, H. (Eds.). (2021). Organizational behavior management approaches for intellectual and developmental disabilities Routledge. Luiselli, J. K., Gardner, R. M., Bird, F., Maguire, H., & Harper, J. M. (2022). Organizational behavior management in human services settings: Conducting and disseminating research that improves client outcomes, employee performance, and systems development. Journal of Organizational Behavior Management, 42, 255–273. https://doi.org/10.1080/01 608061.2022.2027319. Luke, M., Carr, J. E., & Wilder, D. A. (2018). On the compatibility of organizational behavior management and BACB certification. Journal of Organizational Behavior Management, 38(4), 288–305. https://doi.org/10.1080/01608061.2018.1514347. Maguire, H., Harper, J. M., Gardner, R. M., & Luiselli, J. K. (2022). Behavioral training and performance management of human services organization care providers during the COVID-19 pandemic. Advances in Neurodevelopmental Disorders. https://doi. org/10.1007/s41252-021-00234-6. Martens, B. K., Witt, J. C., Elliott, S. N., & Darveaux, D. X. (1985). Teacher judgements concerning the acceptability of school-based interventions. Professional Psychology: Research and Practice, 16, 191–198. https://psycnet.apa.org/doi/10.1037/0735-7028.16.2.191. Martinez-Onstott, B., Wilder, D. A., & Sigurdsson, S. (2016). Identifying the variables contributing to at-risk performance: Initial evaluation of the performance diagnostic checklist-safety (PDC-safety). Journal of Organizational Behavior Management, 36, 80–93. https://doi.org/10.1080/01608061.2016.1152209. McGee, H. M., & Diener, L. H. (2010). Behavioral systems analysis in health and human services. Behavior Modification, 34, 415–442. https://doi. org/10.1177/0145445510383527. McSween, T. (2003). The value-based safety process: Improving your safety culture with a behavioral approach. John Wiley & Sons, Inc. Merritt, T. A., DiGennaro Reed, F. 
D., & Martinez, C. E. (2019). Using the performance diagnostic checklist-human services to identify an indicated intervention to decrease employee tardiness. Journal of Applied Behavior Analysis, 52, 1034–1048. https://doi. org/10.1002/jaba.643. Miller, M. V., Carlson, J., & Sigurdsson, S. O. (2014). Improving treatment integrity in a human service setting using lottery-based incentives. Journal of Organizational Behavior Management, 34, 29–38. https://doi.org/10.1080/01608061.2013.873381. Miltenberger, R. G., & Novotny, M. (2022). Teaching safety skills to individuals with developmental disabilities. Advances in Neurodevelopmental Disorders, 6, 270–279. Morosohk, E., & Miltenberger, R. (2022). Using generalization enhanced behavioral skills training to teach poison safety skills to children with autism. Journal of Autism and Developmental Disorders, 52, 283–296. https://doi.org/10.1007/s10803-021-04938-5. National Core Indicators [NCI]. (2019). National Core Indicators® 2018 staff stability survey report. Human Services Research Institute and The National Association of State Directors of Developmental Disabilities Services, Inc. https://www.nationalcoreindicators.org/ resources/staffstability-survey/. Novak, M. D., DiGennaro Reed, F. D., Erath,T. G., Blackman, A. L., Ruby, S. A., & Pellegrino, A. J. (2019). Evidence-based performance management: Applying behavioral science to support practitioners. Perspectives on Behavior Science, 42, 955–972. https://doi. org/10.1007/s40614-019-00232-z. Palmer, D. R., Pham, A. V., & Carlson, J. S. (2011). Behavioral consultation. In S. Goldstein, & J. A. Naglieri (Eds.), Encyclopedia of child behavior and development Springer. https://doi. org/10.1007/978-0-387-79061-9_312. Parsons, M. B., Rollyson, J. H., & Reid, D. H. (2012). Evidence-based staff training:A guide for practitioners. Behavior Analysis in Practice, 5, 2–11. https://doi.org/10.1007/bf03391819.

390

Applied behavior analysis advanced guidebook

Parsons, M. B., Rollyson, J. H., & Reid, D. H. (2013). Teaching practitioners to conduct behavioral skills training: A pyramidal approach for training multiple human service staff. Behavior Analysis in Practice, 6, 4–16. https://doi.org/10.1007/BF03391798. Pastrana, S. J., Frewing, T. M., Grow, L. L., Nosik, M. R., Turner, M., & Carr, J. E. (2018). Frequently assigned readings in behavior analysis graduate training programs. Behavior Analysis in Practice, 11, 267–273. https://doi.org/10.1007/2Fs40617-016-0137-9. Pence, S. T., St. Peter, C. C., & Tetreault, A. S. (2012). Increasing accurate preference assessment implementation through pyramidal training. Journal of Applied Behavior Analysis, 45, 345–359. https://doi.org/10.1901/jaba.2012.45-345. Plantiveau, C., Dounavi, K., & Virues-Ortega, J. (2018). High levels of burnout among ­early-career board-certified behavior analysts with low collegial support in the work environment. European Journal of Behavior Analysis, 19, 195–207. https://doi.org/10.108 0/150211492018.1438339. Radley, K. C., Helbig, K. A., & Schrieber, S. R. (2019). The use of technology for assessment and intervention. In A. J. Fischer,T. A. Collins, E. H. Dart, & K. C. Radley (Eds.), Technology applications in school psychology, consultation, supervision, and training (pp. 83–105). Routledge. Reid, D. H. (1998). Organizational behavior management in developmental disabilities services: Accomplishments and future directions. Routledge. Reid, D. H. (2017). Competency-based staff training. In J. K. Luiselli (Ed.), Applied behavior analysis advanced guidebook: A manual for professional practice (pp. 21–40). Elsevier/Academic Press. Reid, D. H., & Parsons, M. B. (1995). Motivating human services staff: Supervisory strategies for maximizing work effort and work enjoyment. Habilitative Management Consultants. Reid, D. H., Parsons, M. B., & Green, C. W. (2012). 
The supervisor’s guidebook: Evidence-based strategies for promoting work quality and enjoyment among human service staff. North Carolina: Habilitative Management Consultants. Reid, D. H., Shuh-Weir, C. L., & Brannon, M. E. (1978). Use of a group contingency to decrease staff absenteeism in a state institution. Behavior Modification, 2, 251–266. Rhodes, L., & Eisenberger, R. (2002). Perceived organizational support: A review of the literature. Journal of Applied Psychology, 87, 698–714. https://doi. org/10.1037/0021-9010.87.4.698. Ricciardi, J. N., Rothschild, A. W., Driscoll, N. M., Crawley, J., Wanganga, J., Fofanah, D. A., et al. (2020). Social validity assessment of behavior data recording among human services care providers. Behavioral Interventions, 35, 458–466. Rothschild, A.W., Ricciardi, J. N., Luiselli, J. K., Goldberg, S., Crawley, J., Driscoll, N. M., et al. (2022). Organizational responsiveness to the COVID-19 pandemic: A mixed methods social validity assessment of human services care providers. Advances in Neurodevelopmental Disorders, 6. Sanders, K. (2009).The effects of an action plan, staff training, management support, and monitoring on restraint use and costs of work-related injuries. Journal of Applied Research in Intellectual Disabilities, 22, 216–230. https://doi.org/10.1111/j.1468-3148.2008.00491.x. Sarokoff, R. A., & Sturmey, P. (2004). The effects of behavioral skills training on staff implementation of discrete-trial teaching. Journal of Applied Behavior Analysis, 37, 535–538. https://doi.org/10.1901/jaba.2004.37-535. Schwartz, I. S., & Baer, D. M. (1991). Social validity assessments: Is current practice state of the art? Journal of Applied Behavior Analysis, 24, 189–204. https://doi.org/10.1901/ jaba.1991.24-189. Sellers, T., & Walker, S. (2019). Telesupervision: In-field considerations. In A. J. Fischer, T. A. Collins, E. H. Dart, & K. C. 
Radley (Eds.), Technology applications in school psychology, consultation, supervision, and training (pp. 106–126). Routledge. Shapiro, M., & Kazemi, E. (2017). A review of training strategies to teach individuals implementation of behavioral interventions. Journal of Organizational Behavior Management, 37(1), 32–62. https://doi.org/10.1080/01608061.2016.1267066.



Organizational behavior management

391

Sherrard, J., Ozanne-Smith, J., & Staines, C. (2004). Prevention of unintentional injury to people with intellectual disability: A review of the evidence. Journal of Intellectual Disability Research, 48(2), 639–645. https://doi.org/10.1111/j.1365-2788.2003.00570.x. Strouse, M. C., Carroll-Hernandez,T. A., Sherman, J. A., & Sheldon, J. B. (2004).Turning over turnover:The evaluation of a staff scheduling system in a community-based program for adults with developmental disabilities. Journal of Organizational Behavior Management, 23, 45–63. https://doi.org/10.1300/J075v23n02_04. Strouse, M. C., & DiGennaro Reed, F. D. (2021). Employee turnover and workforce stability. In J. K. Luiselli, R. M. Gardner, F. L. Bird, & H. Maguire (Eds.), Organizational behavior management approaches for intellectual and developmental disabilities (pp. 210–234). Routledge. Sturmey, P. (1998). Overview of the relationship between organizational behavior management and developmental disabilities. Journal of Organizational Behavior Management, 18(2/3), 7–32. https://doi.org/10.1300/J075v18n02_02. Tyler, C.V.,White-Scott, S., Ekvall, S. M., & Abulafia, L. (2008). Environmental health and developmental disabilities: A lifespan approach. Family and Community Health, 31, 287–304. Van den Pol, R. A., Reid, D. H., & Fuqua, R. W. (1983). Peer training of safety-related skills to institutional staff: Benefits for trainers and trainees. Journal of Applied Behavior Analysis, 16, 139–156. https://psycnet.apa.org/doi/10.1901/jaba.1983.16-139. Vance, H., Saini,V., & Guertin, E. L. (2022). A preliminary investigation of procedural refinements to the performance diagnostic checklist - human services. Journal of Organizational Behavior Management. https://doi.org/10.1080/01608061.2022.2043218. Vladescu, J. C., & Marano, K. E. (2021). Behavioral skills training. In J. K. Luiselli, R. M. Gardner, F. L. Bird, & H. 
Maguire (Eds.), Organizational behavior management approaches for intellectual and developmental disabilities (pp. 69–99). Routledge. Ward-Horner, J., & Sturmey, P. (2012). Component analysis of behavior skills training in functional analysis. Behavioral Interventions, 27, 75–92. https://doi.org/10.1002/ bin.1339. Weatherly, N. L. (2019). A behavioral safety model for clinical settings: Coaching for institutionalization. Perspectives on Behavior Science, 42, 973–985. https://doi.org/10.1007/ s40614-019-00195-1. Wilder, D. A., & Cymbal, D. (2021). Performance diagnostic assessment. In J. K. Luiselli, R. M. Gardner, F. L. Bird, & H. Maguire (Eds.), Organizational behavior management approaches for intellectual and developmental disabilities (pp. 17–34). Routledge. Wilder, D. A., Cymbal, D., & Villacorta, J. (2020).The performance diagnostic checklist-human services: A brief review. Journal of Applied Behavior Analysis, 53, 1170–1176. https://doi. org/10.1002/jaba.676. Wilder, D. A., Lipschultz, J., & Gehrman, C. (2018). An evaluation of the performance diagnostic checklist: Human services (PDC-HS) across domains. Behavior Analysis in Practice, 11, 129–138. https://doi.org/10.1007/s40617-018-0243-y. Wilder, D. A., & Sigurdsson, S. O. (2015). Applications of behavior analysis to improve safety in organizations and community settings. In H. Roane, J. E. Ringdahl, & T. Falcomata (Eds.), Clinical and organizational applications of applied behavior analysis (pp. 583–604). New York: Elsevier Inc. https://doi.org/10.1016/B978-0-12-420249-8.00014-9. Wine, B. (2017). Incentive-based performance improvement. In J. K. Luiselli (Ed.), Applied behavior analysis advanced guidebook (pp. 117–134). Elsevier. Wine, B., Chen, T., & Brewer, A. (2019). An examination of reward probability and delivery delays on employee performance. Journal of Organizational Behavior Management. https:// doi.org/10.1080/01608061.2019.1666776. Wine, B., & Doan, T. (2021). 
Assessing employee preferences for incentives. In J. K. Luiselli, R. M. Gardner, F. L. Bird, & H. Maguire (Eds.), Organizational behavior management approaches for intellectual and developmental disabilities (pp. 167–190). Routledge.

392

Applied behavior analysis advanced guidebook

Wine, B., Gugliemella, C., & Axelrod, S. (2013). An examination of g­ eneralized-conditioned reinforcers in stimulus preference assessments. Journal of Organizational Behavior Management, 33, 244–251. https://doi.org/10.1080/01608061.2013.843433. Wine, B., Osborne, M. R., & Newcomb, E. T. (2020). On turnover in human services. Behavior Analysis in Practice, 13, 492–501. https://doi.org/10.1007/s40617-019-00399-6. Wine, B., & Pritchard, J. K. (Eds.). (2018). Organizational behavior management: The essentials Hedgehog. Wolf, M. M. (1978). Social validity: The case for subjective measurement or how applied behavior analysis is finding its heart. Journal of Applied Behavior Analysis, 11, 203–214. https://psycnet.apa.org/doi/10.1901/jaba.1978.11-203.

CHAPTER 16

Practice and consultation in health, sport, and fitness

Julie M. Slowiaka, Janet Daib, Sarah Davisc, and Rocky Perezd
a University of Minnesota Duluth, Duluth, MN, United States
b The Chicago School of Professional Psychology, Chicago, IL, United States
c Brock University, St. Catharines, ON, Canada
d Western Michigan University, Kalamazoo, MI, United States

Applied Behavior Analysis Advanced Guidebook. https://doi.org/10.1016/B978-0-323-99594-8.00016-7
Copyright © 2023 Elsevier Inc. All rights reserved.

Introduction to health, fitness, and sport and relevance for behavioral practitioners

Behavior analysts have demonstrated the utility and effectiveness of applied behavior analysis (ABA) in addressing socially significant behavior and outcomes across a variety of domains. While practice in the area of autism spectrum disorder (ASD) dominates the field, practice in other subfields of ABA continues to grow and expand. As of January 2022, a mere 0.12% of certified behavior analysts identify health, sport, and fitness (HSF) as their primary area of professional emphasis (Behavior Analyst Certification Board, n.d.); however, we surmise that the percentage of behavior analysts for whom HSF is a secondary area of professional emphasis or interest is much higher. For example, membership in the Behavior Analysis in Health, Sport, and Fitness Special Interest Group (HSF SIG) has grown by more than 400% over the last four years, reaching 171 active (dues-paying) members as of April 13, 2022 (G. Torres, personal communication). We also contend that a need exists for the application of behavior analysis within the areas of health, sport, and fitness. While genetic, physiological, and environmental factors contribute to the prevalence of chronic disease, poor health-related outcomes are often the result of modifiable lifestyle behaviors associated with physical inactivity, poor nutrition, tobacco use, and excessive alcohol use (Centers for Disease Control and Prevention, 2021; World Health Organization [WHO], 2021a). In addition, participation in recreation activities and sport inevitably increases the risk of activity-related injuries, such as musculoskeletal injuries. We assert that health improvement, chronic disease prevention and management, and injury prevention and rehabilitation fall within the realm of public health priorities that can be addressed by behavior analysts. We also believe that the coronavirus disease 2019 (COVID-19) pandemic has shed additional light on the many ways in which behavior analysts can assist with addressing public health concerns related to increased sedentariness and social isolation (e.g., Kim & Jung, 2021; Lefferts, Saavedra, Song, & Lee, 2022). Contributions from behavior analysis to address public health issues were highlighted in a recent special series in the Journal of Applied Behavior Analysis (LeBlanc, Lerman, & Normand, 2020).

Under the umbrella of health, sport, and fitness, the opportunities to apply ABA are vast. In this chapter, we focus on applications outside of the formal healthcare system that serve to improve health and fitness through diet, physical activity, and exercise modification, as well as to improve the athletic and sport performance of recreational and competitive athletes. Within each section of the chapter, you will find an overview of the topic, along with common assessment and measurement tools related to frequently targeted behaviors for change, a summary of research-supported interventions, and major implications based on research outcomes.

Health

Weight management

Increased health risks, such as heart disease, diabetes, hypertension, stroke, cancer, and even death, exist for those who do not maintain an optimal weight (Da Luz, Hay, Touyz, & Sainsbury, 2018; De Lorenzo et al., 2019). Despite disagreements regarding its validity (e.g., Keys, Karvonen, Kimura, & Taylor, 1972; Rasmussen, 2019; Tomiyama, Hunger, Nguyen-Cuu, & Wells, 2016), body mass index (BMI), a weight-for-height measure calculated by dividing a person’s weight in kilograms by the square of their height in meters, is a commonly used method to classify weight status. The World Health Organization (2021b) maintains that an optimal weight range is associated with a BMI between 18.5 and 24.9. It is important to recognize that weight concerns exist on a continuum: individuals with a BMI under 18.5 are considered underweight, and those with a BMI equal to or exceeding 25 or 30 are considered overweight or obese, respectively. While weight management efforts have predominantly focused on supporting individuals who are overweight or obese, individuals on either end of this weight continuum are at risk for the abovementioned health concerns.
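The BMI arithmetic and WHO cutoffs just described can be sketched in a few lines (a minimal illustration; the function names are ours, not from any clinical tool, and the obesity cutoff of 30 follows the WHO adult classification):

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight (kg) divided by the square of height (m)."""
    return weight_kg / height_m ** 2

def who_category(bmi_value: float) -> str:
    """WHO adult weight-status bands (World Health Organization, 2021b)."""
    if bmi_value < 18.5:
        return "underweight"
    if bmi_value < 25:
        return "optimal"      # 18.5-24.9
    if bmi_value < 30:
        return "overweight"
    return "obese"

print(round(bmi(70, 1.75), 1))      # 22.9
print(who_category(bmi(70, 1.75)))  # optimal
```

As the chapter notes below, such a number should be interpreted cautiously; the calculation says nothing about body composition or behavior.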




Globally, 1.9 billion people are overweight, 650 million people are obese, and prevalence rates across the lifespan continue to grow (World Health Organization, 2021b). While a number of possible contributors to a suboptimal weight exist, one of the most concerning presentations may be the presence of an eating disorder. Among the many factors that can influence the development of disordered eating patterns are cultural norms regarding physical appearance (National Eating Disorders Information Centre [NEDIC], 2022). Given the intense focus on unrealistic body images portrayed in the media (e.g., magazine covers, social media sharing platforms), it is not surprising that 9% of the world’s population will be affected by an eating disorder in their lifetime (Galmiche, Déchelotte, Lambert, & Tavolacci, 2019). Moreover, the US annual healthcare costs associated with weight management have been reported to be as high as 956 billion dollars (Finkelstein et al., 2012; Wang, Beydoun, Liang, Caballero, & Kumanyika, 2008) and are likely to increase. Weight management is a public health concern of critical importance that is in need of a viable solution.

Assessment

Behavioral weight management is a common approach adopted to address weight concerns. Interventions focus on creating positive changes in eating and physical activity behaviors to promote establishing or maintaining an optimal weight. As the primary outcome variable of interest, weight is relatively easy to assess and monitor reliably. While not a behavior, weight is largely a behavioral product of diet and exercise. That is, following healthy eating guidelines (e.g., food guide portion recommendations) and meeting physical activity recommendations (e.g., moderate-intensity exercise for a minimum of 150 min per week) are critical to achieving an optimal weight. Therefore, supporting individuals in developing healthy eating and exercise behaviors, and/or the skills to self-monitor these behaviors, is clinically relevant both to achieving healthy weight outcomes and to long-term maintenance.

Despite the logical connection between weight and health, important implications exist for behavior analysts to consider. First, we must recognize that weight and health are not perfectly correlated. To illustrate this phenomenon, consider an individual who begins a new exercise regimen; they may initially gain weight due to increased muscle mass and/or an increase in the daily caloric intake needed to support their energy expenditure. Under these circumstances, weight gain is a necessary component of achieving an optimal health state. Similarly, an experienced athlete with a high density of muscle mass is likely to have a higher BMI, which may erroneously suggest an unhealthy lifestyle. Practitioners should therefore interpret weight and/or BMI cautiously. Additionally, an optimal weight range should not be achieved through unhealthy behaviors, such as restricting calories to dangerously low levels, using diuretics regularly, and purging food. For some individuals, losing weight by engaging in these unhealthy behaviors may result in contacting reinforcement (e.g., compliments for weight loss), which may encourage the individual to continue engaging in these behaviors in the future. Therefore, establishing powerful reinforcement contingencies for engaging in healthy behaviors is a priority. Practitioners supporting individuals in establishing and maintaining a healthy weight should be aware of these circumstances and incorporate this information into their assessment and intervention planning. Practitioners should also be aware of idiosyncratic weight differences between children and adults and should collaborate with medical professionals (Behavior Analyst Certification Board, 2020) before designing and implementing behavioral interventions at this critical stage of development.

Another potential limitation of assessing and monitoring weight management progress by measuring weight, a behavioral product, is its validity. Generally, behavior analysts prefer direct observation data; however, when direct observation is not possible, a behavioral product can be used if it is reliably produced by the target behavior. Weight is a problematic behavioral product because it is the outcome of several unique behaviors (e.g., eating healthy foods, engaging in regular exercise); therefore, we are unable to determine whether weight captures what we intend to measure. Furthermore, in healthy contexts, weight is relatively slow to change and may be an insensitive measure of an individual’s progress. As a result, practitioners may not have comprehensive assessment information to inform meaningful intervention. Thus, we recommend that practitioners supplement their assessment with direct observation data regarding the individual’s targeted diet and exercise behaviors.

Intervention

Traditional behavioral interventions targeting weight management have involved contingency management procedures, such as the systematic application of reinforcing or punishing consequences to promote and sustain behavior change, with one or more of the following additional components: instructions/task clarification, goal setting, feedback, and




self-monitoring (e.g., Aragona, Cassady, & Drabman, 1975; Jeffery, 2011; Mann, 1972; Normand & Gibson, 2020). Financial incentives, in the form of monetary contingency contracts, have also been used to support weight loss (Sykes-Muskett, Prestwich, Lawton, & Armitage, 2015). These interventions can be delivered on an individual or group basis and produce similar outcomes (Jeffery, 2011; Williamson, 2017). Typically, the importance of maintaining a healthy weight is explained to the individuals participating, which may also be accompanied by instructional strategies on how to be successful. Individuals are also encouraged to independently establish clear, objective, and manageable goals with contingencies in place to support meeting these goals. Contingencies are often outlined in some form of a behavioral contract with the individual at the outset of the intervention. While the idiosyncrasies have varied, the contingencies established have included providing reinforcement for meeting expectations and response cost for failing to do so. For example, in a seminal weight management article, Mann (1972) required individuals to provide a personal item to the researcher at the outset of programming. Access to these items was earned contingent on meeting weight loss criteria, and these items could be lost permanently for failing to meet these expectations. Additionally, vocal, verbal, and graphic feedback may be provided to confirm and/or redirect the individual’s behavior. Individuals may also be taught to self-monitor their own behaviors as a method to promote generality across settings and behaviors (e.g., Donaldson & Normand, 2009). Despite demonstrated effectiveness for establishing an optimal weight, the abovementioned contingency management interventions have not yet demonstrated long-term maintenance of these results (Normand & Gibson, 2020). 
However, more successful outcomes have been positively correlated with intervention length (e.g., Wadden, Webb, Moran, & Bailer, 2012; Williamson, 2017). It is imperative that practitioners provide meaningful support, but long-term services may not be possible due to resource constraints. Practitioners may therefore want to consider methods to streamline interventions while maintaining their effectiveness. Recent research suggests that virtually delivered services can bring about effective behavior change in an efficient manner (Butryn, Webb, & Wadden, 2011; Wadden et al., 2012). Furthermore, the utility of the abovementioned behavioral interventions has not been extended to address the needs of individuals who are underweight, and more research is needed to explore their impact with this population.
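The deposit-and-earn-back arrangement described above can be sketched as a toy contingency-contract tracker. Everything here is illustrative: the weekly loss criterion, the item handling, and the class names are ours, not the actual parameters of Mann (1972).

```python
# Illustrative contingency-contract tracker in the spirit of Mann (1972):
# participants deposit valued items, earn them back for meeting a weight-loss
# criterion, and forfeit them permanently for failing to meet it.

def evaluate_weigh_in(previous_kg, current_kg, loss_criterion_kg=0.5):
    """Return 'earn' if the (hypothetical) weekly loss criterion is met."""
    return "earn" if (previous_kg - current_kg) >= loss_criterion_kg else "response_cost"

class ContingencyContract:
    def __init__(self, deposited_items):
        self.held = list(deposited_items)   # items held by the practitioner
        self.returned = []                  # items earned back by the client

    def weigh_in(self, previous_kg, current_kg):
        outcome = evaluate_weigh_in(previous_kg, current_kg)
        if outcome == "earn" and self.held:
            self.returned.append(self.held.pop(0))   # reinforcement
        elif outcome == "response_cost" and self.held:
            self.held.pop(0)                         # item forfeited permanently
        return outcome

contract = ContingencyContract(["book", "watch", "gift card"])
print(contract.weigh_in(82.0, 81.2))  # earn
print(contract.weigh_in(81.2, 81.1))  # response_cost
```

In practice such contingencies are negotiated in a written behavioral contract at the outset of the intervention, as the text describes.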


Acceptance-based interventions, such as acceptance and commitment therapy (ACT), are an emerging and promising behaviorally based approach to addressing a variety of weight-related target behaviors; they may promote consistent weight management results and are applicable to a spectrum of weight concerns (Forman et al., 2016; Forman & Butryn, 2015; Karekla, Nikolaou, & Merwin, 2022). Broadly, acceptance-based approaches like ACT assist individuals with clarifying what is most important to them and teach them how to persist in engaging in individualized healthy behaviors while simultaneously encountering uncomfortable internal (covert) experiences (e.g., thoughts, emotions, sensations, urges) and external barriers (Hayes, Strosahl, & Wilson, 1999). An acceptance-based intervention approach could complement a traditional contingency management approach and yield positive, long-term weight management results (Forman et al., 2016; Forman & Butryn, 2015). Behavior analysts should evaluate their current scope of competence (Ethics Code 1.05, Practicing within Scope of Competence) and consider additional training and/or collaboration with healthcare professionals (Ethics Codes 2.10, Collaborating with Colleagues, and 2.12, Considering Medical Needs) before implementing acceptance-based weight management interventions (Behavior Analyst Certification Board, 2020).

Healthy eating

According to Rafacz (2019), eating is "a series of choice responses that form a temporally delayed behavioral chain" (p. 648). Rafacz identified a series of three choice responses within the larger behavioral chain: selection, preparation, and consumption. Selection involves the decisions made and actions taken with regard to the specific food items that are purchased and from where. One example is selecting fresh fruits and vegetables from a farmer's market vs. ordering a deep-fried treat from a menu at an amusement park. Selecting a food item to purchase, however, does not guarantee it will be prepared or consumed. Preparation links selection to consumption and encompasses all behaviors associated with making foods available for consumption. The topography and response effort of preparation behaviors can vary. For example, one can whisk eggs, fry fish, or bake chicken, and some food manufacturers have reduced the response effort required to prepare food by offering ready-to-eat options. As an illustration, making pulled pork from scratch requires more preparation behaviors and response effort than using ready-to-eat pulled pork that was purchased from a store and only needs to be removed from the packaging and heated. Similar to selection,



Practice and consultation in health, sport, and fitness

preparing food does not ensure that it will be consumed. Consumption, the terminal behavior in the eating behavioral chain, involves behaviors typically thought of when describing eating, such as using utensils, chewing, drinking, and swallowing. For the eating behavioral chain to describe healthy eating, individuals must select healthy foods, prepare selected foods in a way that maintains their health benefits, and consume appropriate portions of prepared foods based on their dietary needs. Before discussing assessment and interventions within the realm of healthy eating, it is important to note that behavior analysts working with individuals to improve eating habits and behaviors must ensure they remain within their scope of practice. Nutrition practice, unlike other areas of practice within health and fitness, is regulated at the state level (see American Nutrition Association, n.d., for an overview of state-specific regulations). While several states allow individualized nutrition coaching and consulting without a license, behavior analysts have a professional and ethical responsibility to ensure they follow the laws in their state.

Assessment

Nutrition assessment typically begins with an intake assessment. Information gathered may vary depending on client needs and may include client medical history, current dining habits, body measurements/composition, health/nutrition goals, and a dietary record. Food frequency questionnaires (FFQs) are the most common assessment of dietary intake and nutrition habits identified in the nutrition science literature and epidemiological research, and they can be a valid and reliable assessment tool when set up properly (Cui et al., 2021a, 2021b).
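FFQ scoring generally reduces to simple arithmetic: each reported consumption frequency is converted to servings per unit time and multiplied by a standard serving size. The sketch below illustrates that logic only; the frequency categories, food items, and serving sizes are hypothetical and are not drawn from any published FFQ.

```python
# Minimal sketch of food-frequency-questionnaire (FFQ) scoring.
# Frequency categories map to servings per week; serving sizes are in grams.
# All item names and values here are hypothetical illustrations.

FREQ_PER_WEEK = {"never": 0, "1-2/week": 1.5, "3-4/week": 3.5, "daily": 7}

def estimated_daily_grams(responses, serving_sizes):
    """Convert FFQ frequency categories to estimated grams consumed per day."""
    daily = {}
    for item, freq_label in responses.items():
        servings_per_week = FREQ_PER_WEEK[freq_label]
        daily[item] = servings_per_week * serving_sizes[item] / 7
    return daily

responses = {"leafy greens": "3-4/week", "sugary drinks": "daily"}
serving_sizes = {"leafy greens": 85, "sugary drinks": 355}  # grams per serving
print(estimated_daily_grams(responses, serving_sizes))
# leafy greens: 3.5 * 85 / 7 = 42.5 g/day; sugary drinks: 7 * 355 / 7 = 355 g/day
```

A validated FFQ would, of course, use empirically derived item lists and portion sizes; the point here is only the frequency-times-portion computation.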
In addition, nutrition coaches and consultants may assess a client's home kitchen environment to identify the food typically found in the home, as well as the availability of kitchen and dining tools (e.g., cookware, equipment, and utensils) necessary to effectively prepare food for consumption. These assessments can be conducted using questionnaires completed by the practitioner during an interview or given to the client to fill out and return. Organizations that offer nutrition-specific credentials and certifications often provide client assessment form templates for practitioner use. Using information and data collected during the intake assessment, the practitioner and client should select a target behavior while also considering the impact that changes in eating-related behavior may have on the client's calorie needs, overall health (e.g., optimal weight), additional health-related goals (e.g., reduce high cholesterol, improve heart and gut health), their lifestyle

(e.g., level and type of physical activity), and their dietary preferences. For example, targeting consumption behavior is common when a client wants to lose weight; however, after evaluating the client's medical information and other data from the intake assessment, the practitioner may determine that changes in consumption that reduce total caloric intake could lead the client to an unhealthily low weight. While it is not impossible to directly observe selection, preparation, and consumption behaviors, doing so can be costly, time and resource intensive, and, in many cases, impractical. Direct observation of selection behavior requires the practitioner to accompany the client, either in person or remotely in real time via video technology, to locations where food is purchased or provided (e.g., grocery store, restaurant, food market, cafeteria) or to observe choices made in the client's home from available food items. The same holds true for preparation and consumption behaviors. As such, self-report and other indirect assessment measures are often used for ongoing measurement of eating behavior. In some cases, the practitioner may choose to use the same assessments that were used at intake in order to capture changes over time related to body composition, dietary intake, and nutrition habits. Additional measures might include individual interviews and focus group discussions (Hyland, Stacy, Adamson, & Moynihan, 2006), questionnaires evaluating food habits and intake (Epstein et al., 2001), parental feeding practices (Epstein, Paluch, Beecher, & Roemmich, 2008), food taste preference (Lakkakula, Geaghan, Zanovec, Pierce, & Tuuri, 2010), meal planning calendars, photos of prepared foods before they are consumed (a practice known as "camera eats first"; e.g., Yong, Tong, & Liu, 2020), and sales receipts (Papies & Veling, 2013; Sigurdsson, Larsen, & Gunnarsson, 2014).
For tech-savvy clients, smartphone apps designed to track nutrition can provide insight regarding selection, preparation, and consumption behaviors, and many available nutrition apps allow individuals to share their online food log with other people, such as nutrition coaches. A primary advantage is that a nutrition app automatically provides the user with a variety of data related to the type and amount of each food item entered, such as its nutrient profile. For example, if a client indicates that they ate 6 oz of grilled chicken, 1 medium-sized baked sweet potato (4 oz), and 1 cup of steamed broccoli without butter, many apps will show the amount (in grams) of protein, carbohydrates, fat, and other nutrients consumed. Smartphone apps allow for an efficient method of nutrition assessment and should be used to complement, not replace, the work of nutrition practitioners (Chen, Gemming, Hanning, & Allman-Farinelli, 2018).
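The aggregation such an app performs can be sketched simply: each logged item's weight is multiplied by its per-100 g nutrient values and the results are summed across the log. The nutrient values below are hypothetical placeholders, not a real nutrient database.

```python
# Sketch of the macronutrient aggregation a nutrition-tracking app performs.
# NUTRIENTS_PER_100G holds hypothetical placeholder values per 100 g of food,
# not figures from any real nutrient database.

NUTRIENTS_PER_100G = {
    "grilled chicken": {"protein": 30.0, "carbs": 0.0,  "fat": 3.5},
    "sweet potato":    {"protein": 1.6,  "carbs": 20.0, "fat": 0.1},
    "broccoli":        {"protein": 2.8,  "carbs": 7.0,  "fat": 0.4},
}

def totals(food_log):
    """food_log: list of (item, grams eaten). Returns summed macros in grams."""
    out = {"protein": 0.0, "carbs": 0.0, "fat": 0.0}
    for item, grams in food_log:
        for macro, per_100g in NUTRIENTS_PER_100G[item].items():
            out[macro] += per_100g * grams / 100
    return {macro: round(g, 1) for macro, g in out.items()}

# Roughly the meal described above: 6 oz chicken, 4 oz sweet potato, 1 cup broccoli.
log = [("grilled chicken", 170), ("sweet potato", 115), ("broccoli", 90)]
print(totals(log))
```

Real apps layer portion-size conversion, micronutrients, and barcode lookup on top of this same per-item multiply-and-sum step.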



Interventions

Healthy eating interventions will be discussed according to whether they target selection, preparation, or consumption. Note that much of the research on eating habits and behavior has focused on selection and consumption behaviors, with the latter receiving the most attention. Additionally, eating-related behaviors are often targeted for change alongside changes in physical activity to support weight management and healthy lifestyles. As such, we encourage practitioners to familiarize themselves with the literature on multiple (no fewer than two) health behavior change interventions (MHBCs) that can be delivered in person, on the web, or with other technology (Duan et al., 2021; Geller, Lippke, & Nigg, 2017; Nigg & Long, 2012).

Selection Interventions. While selection interventions are most often implemented in locations where food is purchased, they can be modified and applied within the home. A common intervention is to manipulate environmental stimuli to promote the selection of healthy foods. Examples include providing daily feedback on the cumulative estimated calorie and fat intake of food items purchased as a way to prompt healthier selections on the following day (Normand & Osborne, 2010) and placing nutrition- and diet-related words and phrases ("calorie-conscious", "Are you watching your weight?") on restaurant menus to serve as diet reminders (Papies & Veling, 2013). Papies and Veling found that current and chronic dieters made healthier food choices than nondieters in response to diet reminders on restaurant menus; they suggest that eating/nutrition-related goals play a role in this difference. From a behavior-analytic perspective, when an individual has a goal to lose weight and has not yet met that goal, a motivating operation for selecting healthier food options is present. Therefore, practitioners should consider their client's history of dieting and current goals (e.g., weight loss vs. muscle gain) when designing food selection interventions. Traffic-light labeling (TLL) has been used as an intervention both to increase knowledge of healthy vs. unhealthy foods and to serve as an antecedent prompt for healthy food and beverage selection. In TLL interventions, food and beverage items are coded, and colored stickers are used to indicate the item's level of health (i.e., green = healthy, yellow = moderately healthy, and red = unhealthy). While TLL interventions have been used successfully in hospital/worksite cafeterias (Chen et al., 2017; Mazza, Dynan, Siegel, & Tucker, 2018; Sonnenberg et al., 2013), school cafeterias (Snelling & Kennard, 2009; Stamos, Lange, & Dewitte, 2018), supermarkets (Franckle, Levy, Macias-Navarro, Rimm, & Thorndike, 2018), and concession stands

(Olstad, Vermeer, McCargar, Prowse, & Raine, 2015), findings from a recent meta-analysis on front-of-package nutrition labeling indicated that TLL was effective in increasing healthy food selection in five of 12 intervention studies (An et al., 2021). Furthermore, Vorland, Bohan Brown, Cardel, and Brown (2022) contend that the direct impact of TLL on childhood obesity outcomes requires additional research. Other environmental arrangement interventions, such as product placement to alter access to food items and in-store advertisement, have been used successfully to increase healthy food selection in stores (Sigurdsson et al., 2014) and in a cafeteria buffet line (Rozin et al., 2011). In addition to interventions used to increase knowledge of healthier food options and thereby prompt the selection of healthy foods, interventions that add a delay to accessing food items when unhealthy options are selected from a vending machine have also increased selection of healthier options (Appelhans et al., 2018); such interventions do, however, require more control over the environment. Finally, for those looking to increase the selection of healthy foods and support healthy eating patterns on a larger scale, focus should be given to making healthy food options more accessible to consumers (Healthy People 2020, n.d.).

Preparation Interventions. To our knowledge, little behavioral research exists on intervening to improve preparation behaviors, though some general recommendations can be made. First, we recommend working with clients to reduce the response effort associated with food preparation. One way for clients to accomplish this could be to select foods that require less effort to prepare. For example, some grocery stores sell precut and prewashed fruits and vegetables, and many sell premade salads that only require packages to be opened and ingredients combined (Rafacz, 2019).
Another recently explored option to reduce response effort is meal prepping (Mendez, Kubota, Widaman, & Gieng, 2021; Mendez, Tseng, Kubota, Widaman, & Gieng, 2020), which involves scheduling time to cook large quantities of food to create multiple meals that can be stored and reheated later in the week. Additionally, practitioners might consider having clients pair preparation behaviors with known reinforcers (e.g., listening to music or talking to someone on the phone); outside of eating, research has shown that doing so can lead to an increase in desired behaviors (Zerger, Miller, Valbuena, & Miltenberger, 2017). Research has also shown that teaching cooking and food preparation skills to children and adults, including those with intellectual disabilities, can increase preparation behaviors and frequency of cooking (Condrasky, Griffin,



Catalano, & Clark, 2010; Garcia, Reardon, Hammond, Parrett, & Gebbie-Diben, 2017; Hyland et al., 2006; Lancioni & O'Reilly, 2002).

Consumption Interventions. Consumption interventions come in a variety of forms, such as stimulus equivalence procedures to increase accurate and healthy portion size selection (Hausman, Borrero, Fisher, & Kahng, 2014; Trucil, Vladescu, Reeve, DeBar, & Schnell, 2015); taste exposure to shift preferences for consumed foods (Anzman-Frasca, Savage, Marini, Fisher, & Birch, 2012; Hausner, Olsen, & Moller, 2012; Lakkakula et al., 2010; Solberg, Hanley, Layer, & Ingvarsson, 2007); gamification (Cassey, Washio, & Hantula, 2016; Jones, Madden, Wengreen, Aguilar, & Desjardins, 2014; Lowe, Horne, Tapper, Bowdery, & Egerton, 2004; Morrill, Madden, Wengreen, Fargo, & Aguilar, 2016); and token reinforcement to increase fruit and vegetable selection (Loewenstein, Price, & Volpp, 2016). A person's thumb/fist, a tennis ball, a coaster, and a teaspoon are common items used in stimulus equivalence procedures to help individuals identify the appropriate amount of a certain food that should be consumed. Lakkakula et al. (2010) demonstrated that repeated taste exposure, in which individuals are repeatedly presented with a bite-sized portion of a food, can shift taste preferences and consumption. Importantly, adding dips or condiments to taste exposure interventions does not increase efficacy and may be counterproductive (Anzman-Frasca et al., 2012; Hausner et al., 2012; Solberg et al., 2007). Additionally, adding tangible and social reinforcers can lead to longer-term maintenance than taste exposure alone (Cooke et al., 2011). Gamification of healthy eating has primarily been used with children in school settings.
For example, the Food Dudes program involves a narrative in which superheroes battle junk food villains and includes both individual- and group-level social and tangible reinforcers (Lowe et al., 2004; Morrill et al., 2016). This type of program has been modified by providing points for each bite of fruits or vegetables (Cassey et al., 2016), as well as by adding a board game component wherein the heroes move through the board contingent on the group meeting its consumption goals (Jones et al., 2014).

Rigid and picky eating

While a controversial topic in behavior analysis, much has been done with individuals who engage in rigid and picky eating. Some of the more acceptable interventions that have successfully increased consumption of healthy foods include gradually fading out additives, like chocolate syrup in milk (Luiselli, Ricciardi, & Gilligan, 2005; Tiger & Hanley, 2006), and blending nonpreferred foods with preferred foods (Mueller, Piazza, Patel,

Kelley, & Pruett, 2004). Alternatively, another strategy is to restrict access to higher-preference foods and make access contingent on consumption of less preferred foods (Levin & Carr, 2001; Whelan & Penrod, 2019), which is similar to saying, "Eat your vegetables first, and then you can have your ice cream." Practitioners and consultants who work with parents of children who engage in rigid and picky eating may consider training parents via telecommunication to run behavioral interventions to reduce picky eating (Bloomfield, Fischer, Clark, & Dove, 2019). In addition, the necessity of conducting pretreatment functional analyses to inform interventions for inappropriate mealtime behaviors should be evaluated in light of a recent meta-analysis that indicated no differences in the effects of interventions that were versus were not based on a functional analysis (Saini, Jessel, Iannaccone, & Agnew, 2019). Those with interest or expertise in feeding disorders affecting children might consider joining the Pediatric Feeding Disorders Special Interest Group (affiliated with the Association for Behavior Analysis International).

Fitness

Physical activity

Adequate physical activity sets the foundation for achieving health-related outcomes, such as weight management goals (Normand & Gibson, 2020), ensuring adequate conditioning for sport and athletic goals (Martin, 2015), and supporting health-related values (Berger, Garcia, Catagnus, & Temple, 2021). Both national (United States Department of Health and Human Services, 2018) and international organizations (World Health Organization, 2020) recommend similar guidelines for meeting aerobic and muscle-strengthening activity requirements across all age groups. However, in response to the growing body of literature on holistic health (Ding et al., 2020), the World Health Organization (2020) published an updated set of guidelines with specific recommendations for individuals with chronic illnesses (see Raiff, Burrows, Nastasi, Upton, & Dwyer, 2020, for behavioral approaches to treatment) and for pregnant and postpartum women, along with an emphasis on minimizing sedentary behaviors for all age groups. The emphasis on reducing sedentary lifestyles highlights the importance of pinpointing specific physical activity behaviors in research on a wider behavioral spectrum (van der Ploeg & Bull, 2020). For practitioners and consultants, the need to address both physical activity and sedentary behaviors



is likely to contribute to a more comprehensive health-focused program. An individual's lifestyle can be considered sedentary if they are physically inactive for more than three hours at a time, even if they meet the weekly aerobic and moderate-to-vigorous physical activity (MVPA) recommendations (Wong & Leatherdale, 2009). The Sedentary Behavior Research Network's (SBRN) Consensus Project resulted in a conceptual model for identifying and defining movement and nonmovement behaviors within a 24-h period (Tremblay et al., 2017). Practitioners are encouraged to review the consensus definitions described by Tremblay and colleagues to aid in defining target behaviors. Previous research shows that high levels of physical activity are not mutually exclusive with sedentary behavior, common examples of which include prolonged periods of sitting or lying down (Tremblay et al., 2017; Wong & Leatherdale, 2009). Examples of behavior-analytic approaches to addressing sedentary lifestyles include increasing daily step count (Gilson et al., 2009; Junaid, Bulla, Benjamin, Wind, & Nazaruk, 2020; Kurti & Dallery, 2013) and/or decreasing the duration of physical inactivity (Brakenridge et al., 2016; Gilson et al., 2009; Green, Sigurdsson, & Wilder, 2016).
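The "more than three hours at a time" criterion lends itself to a simple bout-detection routine over minute-by-minute activity data of the kind an activity tracker can export. The sketch below uses fabricated data and a 180-minute threshold for illustration only.

```python
# Sketch of flagging prolonged sedentary bouts from minute-by-minute activity
# data (True = active minute, False = inactive). The 180-minute threshold
# follows the "more than three hours at a time" criterion described above;
# the data are fabricated for illustration.

def longest_inactive_bout(minutes):
    """Return the longest run of consecutive inactive minutes."""
    longest = current = 0
    for active in minutes:
        current = 0 if active else current + 1
        longest = max(longest, current)
    return longest

def is_sedentary_day(minutes, threshold=180):
    """Flag a day containing an inactive bout longer than the threshold."""
    return longest_inactive_bout(minutes) > threshold

day = [False] * 200 + [True] * 30 + [False] * 100  # one 200-min inactive bout
print(longest_inactive_bout(day), is_sedentary_day(day))  # 200 True
```

Note that this flags bout length, not total daily activity: a day can contain plenty of steps overall and still be flagged, which mirrors the point above that high activity and sedentary behavior are not mutually exclusive.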

Assessment and measurement

Given the physiological nature of physical activity, practitioners beginning an assessment must be mindful of the Ethics Code for Behavior Analysts (Behavior Analyst Certification Board, 2020) when evaluating medical considerations (see Ethics Code 2.12 Considering Medical Needs). The Physical Activity Readiness Questionnaire (PAR-Q+; Warburton, Jamnik, Bredin, & Gledhill, 2011) is a common and recommended tool for screening participants for potentially relevant medical issues and for assessing possible risk of injury before beginning any physical activity (Krebs & Nyein, 2021; Rosado, Jansz Rieken, & Spear, 2021; Stedman-Falls & Dallery, 2020; Van Camp & Hayes, 2012; Van Camp, Blejewski, Ruby, & Gordon, 2021; Patel, Normand, & Kohn, 2019; VanWormer, 2004), including light-level activities such as walking (Kurti & Dallery, 2013; McCurdy & Normand, 2022; Zarate, Miltenberger, & Valbuena, 2019). Once a client is cleared for physical activity, practitioners can work with them to pinpoint a specific physical activity behavior and select an appropriate measurement type and method. Fitness tracking technology allows a wide variety of measurement dimensions and outcomes, such as minutes of exercise, step count, and heart

rate (Bassett Jr., Toth, LaMunion, & Crouter, 2017). To align with the physical activity recommendations, simple measurement of activity minutes or exercise intervals, with step count as the basic measure, is often considered (Andrade, Barry, Litt, & Petry, 2014; Batchelder & Washington, 2021; Galbraith & Normand, 2017; Hanashiro-Parson & Miltenberger, 2021; Nieto & Wiskow, 2020; Normand & Burji, 2020; Zerger et al., 2017). Although step count goals can vary based on age-group recommendations (Tudor-Locke et al., 2011), physical activity goals tied to step count have produced positive outcomes in promoting achievement of one's target step goal (Andrade et al., 2014; Chaudhry et al., 2020; McCurdy & Normand, 2022; VanWormer, 2004). It should be noted that step count is commonly used as an outcome metric for behaviors that require a wide range of physical movement and energy expenditure. Debate exists in the fitness and health community regarding the popular 10,000-steps goal, though evidence indicates that individuals who achieve upward of 8000 steps per day have lower mortality risks than those who take 4000 steps per day (Centers for Disease Control and Prevention, 2020). Fitness tracking devices range from simple pedometers that measure step count to accelerometers that capture active or inactive movement in all planes of motion (Van Camp & Hayes, 2012). For example, Nolan et al. (2017) measured frequency of total steps with a pedometer while also using an accelerometer to measure all forms and levels of physical activity. Together, both devices can provide a comprehensive view of an individual's physical activity. Heart rate is another reliable measure of physical activity (Eckard, Kuwabara, & Van Camp, 2019).
Heart rate has been used as a basis for assessment tools such as the Observational System for Recording Physical Activity in Children (OSRAC) to code the intensity of specific physical activities and movements, ranging from stationary to vigorous (Larson, Normand, & Hustyi, 2011; Larson, Normand, Morley, & Miller, 2014; Patel et al., 2019; Van Camp et al., 2021; Van Camp & Berth, 2017; Van Camp & Hayes, 2012). In comparison to step count metrics, heart rate offers additional flexibility to assess physical activity levels beyond movements such as walking or running, extending to activities such as weightlifting, yoga, or cycling. Heart rate has also been associated with derivative measures such as caloric expenditure (Donaldson & Normand, 2009), weight management (de Vries, Kooiman, van Ittersum, van Brussel, & de Groot, 2016), and exercise intensity levels (Jackson et al., 2016; Rosado et al., 2021). Heart rate as a metric should be individually assessed and interpreted with caution given



the individualized nature of human physiological characteristics (Van Camp et al., 2021). Overall, health-related metrics, whether step count, heart rate, exercise minutes, or any combination of measures, can provide a comprehensive view of physical activity levels. While earlier reviews of the accuracy of Fitbit activity trackers indicated high interdevice reliability across a variety of physical activity measures (e.g., steps, distance, energy expenditure; Evenson, Wen, Metzger, & Herring, 2015), more recent research suggests that Fitbits be used with caution, especially with clients other than adults with no limitations in mobility (Feehan et al., 2018).
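As one illustration of how heart-rate data can be translated into intensity categories, the sketch below applies the heart-rate-reserve (Karvonen) method, a widely used approach. The cut points and the resting/maximum heart rates are illustrative assumptions, not values from the studies cited above, and real use would require individually assessed values.

```python
# Sketch of categorizing exercise intensity from heart rate using the
# heart-rate-reserve (Karvonen) method. Cut points and default heart rates
# are illustrative only, not clinical guidance or values from this chapter.

def percent_hr_reserve(hr, resting_hr, max_hr):
    """Fraction of heart-rate reserve represented by a given heart rate."""
    return (hr - resting_hr) / (max_hr - resting_hr)

def intensity(hr, resting_hr=60, max_hr=190):
    """Map a heart rate onto a coarse intensity label."""
    pct = percent_hr_reserve(hr, resting_hr, max_hr)
    if pct < 0.30:
        return "light"
    if pct < 0.60:
        return "moderate"
    return "vigorous"

print(intensity(95), intensity(130), intensity(165))  # light moderate vigorous
```

Because the computation is anchored to the individual's own resting and maximum heart rates, it reflects the caution above that heart-rate metrics must be individually assessed rather than compared across clients.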

Interventions

Many behavior-change techniques, alone or in combination, can be used to promote physical activity behaviors, ideally targeted in both contrived and natural environments (Normand, Dallery, & Ong, 2015). Many individuals rely on a fitness center or gym environment to adhere to a physical activity routine, using daily social competitions within group classes or modified daily incentives such as money (Courtemanche, Hopson, & Groskreutz, 2021). For youth, behavioral interventions are typically conducted during natural school activities within physical education classes or recess (Miller, Valbuena, Zerger, & Miltenberger, 2018). Cooper, Heron, and Heward (2020) describe special applications of behavior change tactics, including contingency contracts, token economy systems, group contingencies, and self-management, that are highly attractive as part of any physical activity intervention (see Chapters 28 and 29 in Cooper et al., 2020, for a review of these concepts). One of the earliest studies using contingency contracts to promote physical activity featured a points-based system and a deposit contract method to increase exercise engagement among college students (Wysocki, Hall, Iwata, & Riordan, 1979). Deposit contracts have been successfully replicated with in-person deposits as well as digital formats, for example, cash versus electronic payment (Stedman-Falls & Dallery, 2020), in both individual (Krebs & Nyein, 2021) and group designs (McCurdy & Normand, 2022).

Token economy systems

Token economy systems are a form of contingency management that can be helpful for individuals who struggle with motivation by bridging the time delay between the desired goal (e.g., weight maintenance, sports performance skill) and a back-up reinforcer such as monetary incentives (Boerke & Reitman, 2013). The delivery of reinforcement on a variable

ratio (VR) schedule (i.e., lottery-based) has been found effective in promoting physical activity behaviors. A VR schedule not only addresses concerns about the cost-efficiency of traditional token economies but also more closely simulates the reality and probability of reinforcer delivery (Washington, Banna, & Gibson, 2014). A lottery-based token economy has also been replicated with groups of children (Galbraith & Normand, 2017), individuals with developmental disabilities (May & Treadwell, 2020; Nastasi, Sheppard, & Raiff, 2019), and sedentary employees (Batchelder & Washington, 2021). In some studies, token systems and monetary incentives proved useful for initially promoting higher levels of physical activity (Batchelder & Washington, 2021; Courtemanche et al., 2021; Hanashiro-Parson & Miltenberger, 2021; May & Treadwell, 2020; McCurdy & Normand, 2022; Nastasi et al., 2019; Patel et al., 2019; Washington et al., 2014). To program for generalization and maintenance of physical activity behaviors, practitioners might consider including group-based contingencies to establish social reinforcers, such as encouraging healthy competition (Courtemanche et al., 2021; Normand & Burji, 2020) or delivering verbal praise (McCurdy & Normand, 2022).

Group contingencies

One interdependent group contingency referenced in the literature is the Good Behavior Game (GBG), in which members of a group are divided into teams with specific criteria to earn reinforcement, evoking competition within and between the teams (Barrish, Saunders, & Wolf, 1969). Galbraith and Normand (2017) implemented the "Step It UP!" program, a group contingency based on the GBG in which third-grade general education students earned raffle tickets for increasing step counts. Reinforcer items that hold no significant exchangeable value (e.g., a badge, trophy, or ribbon) are also used as alternatives (Normand & Burji, 2020).
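The contingency behind such raffle- or lottery-based token systems can be sketched as a two-step computation: tickets earned for each day a goal is met, then a drawing weighted by the tickets held, which approximates variable-ratio reinforcer delivery. All names, step counts, and the 8000-step goal below are hypothetical.

```python
# Sketch of a lottery-based (variable-ratio-like) token system: each day the
# step goal is met earns one raffle ticket, and a drawing selects a winner in
# proportion to tickets held. Names and numbers are hypothetical.
import random

def tickets_earned(daily_steps, goal=8000):
    """One raffle ticket per day the step goal is met."""
    return sum(1 for steps in daily_steps if steps >= goal)

def weekly_drawing(ticket_counts, rng=random):
    """ticket_counts: {participant: tickets}. Returns the drawn winner."""
    pool = [name for name, n in ticket_counts.items() for _ in range(n)]
    return rng.choice(pool)

week = {"A": tickets_earned([9000, 8500, 4000, 10000, 7000, 8200, 3000]),
        "B": tickets_earned([8100, 8100, 8100, 2000, 2000, 2000, 2000])}
print(week)  # {'A': 4, 'B': 3}
print(weekly_drawing(week, random.Random(0)))
```

Because every ticket raises the probability, but not the certainty, of the back-up reinforcer, the drawing mirrors the intermittent payoff that makes the VR arrangement more cost-efficient than exchanging every token.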
Moreover, there is evidence that when adults also get physically involved in such programs, the increased attention and interactive play can positively impact moderate-to-vigorous activity levels for both parties (Larson et al., 2014; Nieto & Wiskow, 2020). To maximize individual achievement, practitioners working to promote physical activity in a group setting might also consider publicly sharing progress toward group goals (Kuhl, Rudrud, Witts, & Schulze, 2015) or individual goals (Miller et al., 2018), or having partners view each other's goals to hold one another accountable (Zerger et al., 2017). Zerger et al. (2017) had a



teacher assign children to pairs to self-monitor their own and their partner's steps and to encourage each other's stepping when one partner noticed a lack of steps. Each pair competed against other pairs, with the teacher later posting a bar graph of each team's total progress. Given the expansion of technology and social media platforms, publicly shared information can both prompt and reinforce physical activity in real time (see Goodyear, Wood, Skinner, & Thompson, 2021, for a systematic review of social media interventions on physical activity).

Self-management

The focus on the single individual is one of the basic features of applied behavior analysis in health, sport, and fitness. The key detail of self-management interventions is teaching the person to identify and manipulate relevant variables in their own environment. Self-management includes several behavior-change tactics that have shown promise as effective and cost-efficient alternatives to tangible-based interventions (Andrade et al., 2014; Junaid et al., 2020). The basis of self-management requires an individual to accurately monitor and record their behavior (i.e., self-monitoring; see Page, Massey, Prado-Romero, & Albadawi, 2020, for a systematic review of self-monitoring to increase physical activity). Components commonly included in an effective self-management intervention package are goal setting and feedback delivery (Gilson et al., 2009; Mias, Dittrich, & Miltenberger, 2021; Normand, 2008; Zarate et al., 2019). Within the behavior management literature, the general consensus is that practitioners should ensure a goal is specific, objective, relevant to the desired outcome, systematically measured to evaluate progress, and ultimately achievable within a timely manner (Fellner & Sulzer-Azaroff, 1984; Locke & Latham, 1990, 2013), all of which can be applied within fitness and behavior-based interventions.
Along the lines of specificity, feedback should be delivered with an emphasis on positive reinforcement of the desired behavior rather than as negative or even neutral feedback (Zarate et al., 2019). Much of the primary literature on effective goal setting and feedback comes from organizational behavior management (see Sleiman, Sigurjonsdottir, Elnes, Gage, & Gravina, 2020, for a quantitative review of performance feedback effects) as well as sports-related research (see the following section on Sport). All in all, these special applications of behavior-change techniques can be combined to produce an additive effect in promoting physical activity behaviors.
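The goal-setting and feedback components described above can be sketched as a small self-monitoring loop: each day's recorded steps are compared with a self-set goal, and feedback is framed around goal attainment (positively when the goal is met). The goal value and feedback wording are hypothetical.

```python
# Sketch of a self-monitoring loop: compare each day's recorded steps with a
# self-set goal and generate feedback framed around meeting the goal, with
# positive feedback when it is met. Goal value and messages are hypothetical.

def daily_feedback(steps, goal):
    """Positively framed feedback on a single day's self-recorded steps."""
    if steps >= goal:
        return f"Goal met: {steps} steps (goal {goal}). Nice work!"
    return f"{steps} of {goal} steps logged; {goal - steps} to go tomorrow."

def weekly_summary(daily_steps, goal):
    """Systematic progress measure: days the goal was met and adherence rate."""
    met = sum(1 for s in daily_steps if s >= goal)
    return {"days_met": met, "adherence": round(met / len(daily_steps), 2)}

log = [10250, 7800, 9100, 8300, 6500, 11020, 8000]
for steps in log:
    print(daily_feedback(steps, goal=8000))
print(weekly_summary(log, goal=8000))
```

The weekly summary gives the client an objective, systematically measured view of progress toward the goal, in line with the goal-setting recommendations cited above.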

Sport

What is sport?

Clear agreement on the definition of "sport" and what constitutes "sport" is an ongoing challenge (Best, 1974; Borge, 2020; Guarino, 2015). The following section shares some general parameters for consideration. Many sources define sport by referring to characteristics of an activity that typically involves an individual engaging in physical activity for the purposes of enjoyment and/or competition. Further, sport can be considered competitive or recreational in nature, depending on the structure of the activity and its function for the individual. Given the wide array of competitive and recreational sports activities and individual athletic preferences, many possible functions for participation in sport exist (e.g., health, social interaction, entertainment). O'Donnell, Philp, and Mahoney (2021) proposed direct and indirect competition as a way to classify competitive sports according to existing contingencies. Direct competition involves an interaction between opponents wherein contingencies involve one team being successful at the cost of the opposing team's success. These sports have an "offense" and a "defense" (e.g., football, basketball, soccer); the offensive team is successful if it scores, and the defensive team has "failed" if its opponent scores. Indirect competition, on the other hand, involves multiple teams that compete without direct interaction between or among opponents. For example, both gymnastics and dance competitions involve evaluating an athlete's individual performance; although the athletes are not concurrently competing, the opposing athlete's or team's results still affect the outcome. Some indirect sports even have rules that prohibit players from interfering with one another: a golfer cannot touch an opponent's ball, and a swimmer cannot enter an opponent's lane.
We highlight that direct and indirect competition can be present within the same sport: gameplay in basketball involves direct competition, whereas a free throw involves indirect competition. Sports can also be categorized as purposeful or aesthetic according to how their outcome is achieved (Best, 1974). The outcome in purposeful sports is determined by objective measures of winning (e.g., scoring a goal, crossing home plate). In contrast, the outcome in aesthetic sports such as figure skating, gymnastics, and dance is determined through both objective measures and a panel of expert judges’ subjective evaluation of the technical execution of the skills performed. Despite the many parameters of sport, all sports follow the same behavioral assessment process.



Practice and consultation in health, sport, and fitness

411

Assessment

Behavioral assessment is foundational to the work of behavior analysts supporting athletes and coaches in the sport context. The purpose of assessment is to identify the target behavior, determine the factors contributing to it, select an appropriate intervention, and monitor the intervention’s effect on the targeted behavior (Luiselli & Reed, 2011; Martin, 2019). By doing so, practitioners can effectively support proper skill development, which may also prevent athletic injuries stemming from improper technique. A typical first step in the assessment process is collecting indirect data through behavioral interviewing with the athlete or coach. In the interview, the practitioner should seek relevant background information about the athlete’s participation in the sport and assist in identifying the athlete’s specific goal(s). Some athletes may be reluctant to discuss, or have difficulty discussing, their presenting concerns; in such situations, it may be helpful for the practitioner to use specific interview forms, such as the Athlete Information Form (Martin, Toogood, & Tkachuck, 1997). Alternatively, practitioners might adopt interview strategies such as asking the athlete to discuss their recent best and worst performances (Orlick, 1986, 1989) or leading the athlete through performance-profiling exercises (Butler & Hardy, 1992) to generate discussion points. Practitioners may also use questionnaires or checklists, notably the Precompetition and Competition Behavior Inventory (Rushall, 1979) and the Athlete Coping Skills Inventory (Smith, Schultz, Smoll, & Ptacek, 1995), that list practice and/or competition aspects of sports to supplement the information obtained from the athlete or coach in behavioral interviews. These assessment methods can be individualized to the sport and efficiently identify areas in which the athlete performs well and areas in need of improvement.
Practitioners can validate the information obtained by monitoring the athlete’s performance in practice or competition using a sport-specific behavior checklist. The next step in the behavioral assessment process is direct data collection. At this point, a target behavior should have been identified. The practitioner should then define the target behavior in observable and measurable terms, which allows the practitioner to develop a method to monitor it accurately and reliably. Practitioners can measure the topography, frequency (or rate), duration, intensity, or latency of a target behavior. When selecting a measurement method, the practitioner should consider the demands of
the specific sport. For example, how quickly a runner initiates a 100 m sprint following the starter’s pistol is critical to their success, so latency may be the most relevant dimension to monitor. Conversely, in sports in which the athlete is judged on the quality of their performance (e.g., gymnastics, figure skating, dance), a slight variation in technique can result in substantial deductions; the measurement tool must therefore be sensitive enough to capture small changes in the topography of the athlete’s performance. Task analysis is a common assessment method that breaks down complex sport skills into smaller, teachable, and potentially sequentially ordered steps (Ennett, Zonneveld, Thomson, Vause, & Ditor, 2020; Luiselli et al., 2013; Quinn, Miltenberger, & Fogel, 2015; Tai & Miltenberger, 2017). A task analysis allows the practitioner to accurately capture slight changes in an athlete’s performance. Task analyses are often an essential component of many intervention approaches, but despite their functionality and widespread utility, no standards exist for developing task analyses in the sport context. However, general best-practice recommendations that can assist development do exist (Cooper et al., 2020), and the ecological validity of the task analysis should be strongly considered. That is, the primary consideration when creating task analyses in sports is to pinpoint relevant behaviors that allow the athlete to successfully and effectively perform the complex skill at a standard appropriate to their level of play. We recommend that practitioners and consultants work collaboratively with coaches/instructors and athletes to develop a meaningful task analysis. Circumstances may also arise that warrant assessing multiple behaviors concurrently or in which the complexity of the behavior impedes the accuracy of the assessment.
In such situations, practitioners may benefit from variations on these assessment methods, such as including visual references within a task analysis/checklist for clarification (Fitterling & Ayllon, 1983); teaching the athlete to self-monitor their performance (Giambrone & Miltenberger, 2019); and/or using video recording as a permanent-product observational tool (Stokes, Luiselli, Reed, & Fleming, 2010). Many assessment methods exist, and practitioners will likely select an approach based on their individual preferences. No matter the method used, we recommend that practitioners assess the target behavior continuously: establish the baseline level of the target behavior and then continue to monitor it after the intervention has been implemented (Luiselli, 2012). If the athlete
and/or coach no longer believes monitoring is necessary, the practitioner may want to facilitate a discussion about the importance and benefits of ongoing performance monitoring.
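To make the scoring side of a task-analysis checklist concrete, the logic can be sketched in a few lines of code. The function names and the example tackling steps below are hypothetical illustrations rather than any published protocol; the sketch simply computes the percentage of task-analysis steps performed correctly in each observed trial, a common summary measure for this kind of checklist data.

```python
# Hypothetical sketch: scoring a sport-skill task analysis.
# Each trial is recorded as a list of booleans, one per step,
# indicating whether the athlete performed that step correctly.

TACKLING_STEPS = [            # illustrative steps only
    "head up, eyes on target",
    "shoulder contact at midsection",
    "arms wrap on contact",
    "drive legs through contact",
]

def percent_correct(trial):
    """Percentage of task-analysis steps performed correctly in one trial."""
    if len(trial) != len(TACKLING_STEPS):
        raise ValueError("trial must score every step")
    return 100 * sum(trial) / len(trial)

def session_summary(trials):
    """Mean percent-correct across all trials in a session."""
    scores = [percent_correct(t) for t in trials]
    return sum(scores) / len(scores)

baseline = [[True, False, False, True], [True, True, False, False]]
post_bst = [[True, True, True, True], [True, True, False, True]]

print(session_summary(baseline))  # 50.0
print(session_summary(post_bst))  # 87.5
```

Graphing these session summaries across baseline and intervention phases yields exactly the kind of continuous monitoring recommended above.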

Interventions

Behavior analysis has been successfully applied in a variety of sport contexts to teach new skills, decrease persistent errors, enhance motivation, and address other concerns impeding athletic performance. Some research has even examined teaching appropriate skills and improving technique to reduce the probability of athletic injuries (Harris et al., 2020; Harrison & Pyles, 2013; Quinn et al., 2015; Quintero et al., 2020; Tai & Miltenberger, 2017). A recent systematic review of behavior-analytic interventions in sports identified 101 articles across 21 sports (Schenk & Miltenberger, 2019). These results suggest growth in this area of application: a systematic review conducted only 15 years earlier identified 32 articles across 15 sports (Martin, Thomson, & Regehr, 2004). In addition to increasing interest in applying behavior analysis to a wider array of sport contexts, interventions have evolved with innovative components that capitalize on technological advancements (e.g., video feedback, Partington, Cushion, Cope, & Harvey, 2015; video modeling, Mulqueen, Crosland, & Novotny, 2021). Future work may include adopting virtual simulation to support focused sport performance practice (see Farrow, 2013, for implementation suggestions). Furthermore, no studies exist that focus on performance enhancement of professional athletes, suggesting a needed area of expansion for behavior-analytic research (Schenk & Miltenberger, 2019). Across the various types of sports, athletes and coaches consistently report meaningful behavior change and high levels of satisfaction with ABA methods. Yet, because of the reliance on multicomponent interventions, it is difficult to pinpoint which interventions and intervention components are most effective and beneficial for behavior change and performance enhancement.
Schenk and Miltenberger (2019) contend that packaged interventions are likely common because they produce desired outcomes, reduce athlete speculation, and are implemented with the goal of improving performance rather than identifying the unique effects and contributions of individual intervention components. As such, caution is warranted when interpreting the findings presented in the remainder of this section. Based on the available research, we have organized the behavior-analytic interventions applied in the sport context into four categories: (1) packaged interventions, (2) antecedent interventions, (3) consequence
interventions, and (4) feedback interventions. Whether feedback functions as an antecedent or a consequence operation is debated (Aljadeff-Abergel, Peterson, Wiskirchen, Hagen, & Cole, 2017; Bechtel, McGee, Huitema, & Dickinson, 2015); therefore, we discuss feedback interventions in a separate section.

Packaged interventions

Of the 101 studies Schenk and Miltenberger (2019) identified, 73 (72%) used interventions with more than one component, including behavioral skills training (BST; Tai & Miltenberger, 2017); habit reversal (Allen, 1998); behavioral contracts (Simek, O’Brien, & Figlerski, 1994); ACT (Little & Simpson, 2000); and simulated practice (Fery & Ponserre, 2001; Scott, Scott, & Howe, 1998). To date, behavioral contracting has only been researched in indirect sports (e.g., golf); ACT and simulated practice have only been researched in direct sports (e.g., baseball, tennis, and football). Many packaged interventions include some combination of instructions and/or feedback. For example, Tai and Miltenberger (2017) used BST to teach safe tackling form in football by providing instructions and feedback according to a task analysis while players practiced. Precision teaching (Binder, 1993) as a packaged intervention has been quite effective in teaching both simple and complex skills. During teaching sessions, athletes receive instruction and feedback while engaging in repeated practice of the target behavior; goals are set according to fluency criteria, and fluency is evaluated during testing sessions. Precision teaching can be useful for teaching component skills such as club grip and hitting posture to improve golf performance (McDowell, McIntyre, Bones, & Keenan, 2002) or for training component skills such as ballet movements to fluency (Lokke, Lokke, & Arntzen, 2008). In precision teaching, data are graphed on a standard celeration chart (for more information on plotting data on standard celeration charts, see Lindsley, 1992).
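As a rough illustration of the fluency data behind a standard celeration chart, the sketch below estimates a weekly celeration value (the multiplicative change in responses per minute per week) by fitting a line to the logarithms of daily rates. This is a simplified calculation for illustration only, not a replacement for the charting conventions described by Lindsley (1992); the variable names and sample data are hypothetical.

```python
import math

# Hypothetical daily fluency data: (day, correct responses per minute),
# e.g., timed practice of a ballet movement or a golf-grip component skill.
data = [(0, 8.0), (1, 9.0), (2, 10.5), (3, 12.0), (4, 14.0), (5, 16.5), (6, 19.0)]

def weekly_celeration(observations):
    """Least-squares slope of log10(rate) vs. day, converted to a
    multiplicative change per 7-day week (a 'x2.0' celeration means
    the rate doubles each week)."""
    n = len(observations)
    xs = [day for day, _ in observations]
    ys = [math.log10(rate) for _, rate in observations]
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
            / sum((x - mean_x) ** 2 for x in xs)
    return 10 ** (slope * 7)  # per-week multiplicative change

print(f"celeration: x{weekly_celeration(data):.2f} per week")
```

Celerations of around x2.0 per week (a doubling of rate each week) are sometimes cited in the precision teaching literature as a strong learning picture, though interpretation should always be anchored to the chart itself.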
Pocock, Foster, and McEwan (2010) demonstrated improved performance when athletes graphed their own fluency data in conjunction with goal setting. Mindfulness- and acceptance-based behavioral interventions such as ACT are another class of packaged intervention in sports that has received more attention in recent years (Worthen & Luiselli, 2016). ACT has been used to help athletes both identify covert behaviors (e.g., distressing thoughts, feelings of performance anxiety, fear of injury) and respond effectively via overt behaviors aligned with their sport-related values (Gervis & Goldman, 2020; Lundgren et al., 2021; Lundgren, Reinebo,
Näslund, & Parling, 2020; Szabo, Willis, & Palinski, 2019). In their book, ACT in Sport, Hegarty and Huelsmann (2020) illustrate how athletes, coaches, and consultants can implement ACT strategies across a variety of sport-specific scenarios. While research supports the utility of ACT interventions in sport, few studies examine ACT and adherence to physical rehabilitation programs following sport-related injury (Mahoney & Hanrahan, 2011; Moesch, Ivarsson, & Johnson, 2020). After a failed attempt to recruit injured student athletes to evaluate the effectiveness of their Return to ACTion protocol, Shortway, Wolanin, Block-Lerner, and Marks (2018) conducted a preliminary feasibility study of the protocol that offers suggestions for implementation and evaluation.

Antecedent interventions

Both O’Donnell et al. (2021) and Schenk and Miltenberger (2019) identified a variety of antecedent interventions that effectively improved athlete performance, including but not limited to instruction (Kladopoulos & McComas, 2001; Rogers, Hemmeter, & Wolery, 2010); modeling (Aiken, Fairbrother, & Post, 2012; Baudry, Leroy, & Chollet, 2006; Boyer, Miltenberger, Batsche, & Fogel, 2009); self-talk (Landin & Hebert, 1999; Rogerson & Hrycaiko, 2002); self-imagery (Lerner, Ostrow, Yura, & Etzel, 1996; Shapiro & Shapiro, 1985); and discrimination training (for direct competition only; Christina, Barresi, & Shaffner, 1990; Scott et al., 1998). Self-talk and self-imagery are two methods of intervening on covert verbal behavior in sports, warranting their inclusion as behavioral interventions. Goal setting is another widely used antecedent strategy, and because goal characteristics can be adjusted, we offer some specific recommendations for practitioners setting goals with athletes. For goal setting to be most effective, feedback on whether the goal was met should be provided (Erez, 1977). In the context of sport, feedback could be provided by a coach, trainer, or peer, or via self-assessment.
In addition, practitioners should set short-term goals that build toward long-term goals (Locke & Latham, 1990, 2013) and reinforce new personal bests in performance when the goal is high and not yet attainable (Quinn, Miltenberger, Abreu, & Narozanick, 2017). Research has shown that goals set by athletes are just as effective as goals set by coaches (Brobst & Ward, 2002; Ward & Carnes, 2002). Unlike goal setting outside of sport, research on sport performance indicates that setting high performance goals (e.g., 90% correct) is not unusual because the targeted skills are those the coach expects to be in a player’s repertoire (Smith & Ward, 2006; Ward & Carnes, 2002).
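The goal-evaluation logic described above — deliver feedback on goal attainment each session, and reinforce new personal bests even when a high goal is missed — can be sketched as a short helper. Everything here (the function name, thresholds, and sample scores) is a hypothetical illustration of the recommendations, not a published procedure.

```python
# Hypothetical sketch of the goal-setting recommendations above:
# give feedback on goal attainment each session, and flag new
# personal bests even when a high goal is not yet met.

def evaluate_session(percent_correct, goal, personal_best):
    """Return feedback messages and the updated personal best."""
    feedback = []
    if percent_correct >= goal:
        feedback.append(f"Goal met: {percent_correct}% (goal {goal}%)")
    else:
        feedback.append(f"Goal not yet met: {percent_correct}% (goal {goal}%)")
    if percent_correct > personal_best:
        feedback.append(f"New personal best: {percent_correct}%!")
        personal_best = percent_correct
    return feedback, personal_best

best = 0
for score in [62, 70, 68, 75]:  # session percent-correct scores
    messages, best = evaluate_session(score, goal=90, personal_best=best)
    for m in messages:
        print(m)
```

With a high goal of 90%, this athlete never meets the goal across the four sessions, yet the sessions scoring 62, 70, and 75 each trigger a personal-best message — the occasion for reinforcement that Quinn et al. (2017) describe.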


Consequence interventions

Though abundant in practice, consequences are rarely studied in isolation in sports research. O’Donnell et al. (2021) noted that positive punishment and timeout (Koop & Martin, 1983) and token reinforcement (Quinn et al., 2015) have only been researched in indirect competition. This does not mean these procedures are ineffective in sports involving direct competition; rather, no research currently exists to support their efficacy there, so more research on these types of consequence interventions in direct competition is warranted. Schenk and Miltenberger (2019) identified either positive reinforcement (e.g., praise or helmet stickers in football) or negative reinforcement (e.g., removal of aversive coaching practices) in nearly 25% of the studies they reviewed. The general recommendations for the use of reinforcement hold true in sports: for example, consequences are more effective when delivered immediately following the behavior, and reinforcers should be delivered on a denser schedule when teaching a new skill. TAGteach, a consequence intervention that uses a sound to reinforce performance, has been used successfully to improve performance in football (Harrison & Pyles, 2013; Stokes et al., 2010), martial arts (Krukauskas, Miltenberger, & Gavonni, 2019), dance (Quinn et al., 2015), and rugby (Elmore, Healy, Lydon, & Murray, 2018). Practitioners using TAGteach will need a conditioning procedure to pair the sound of the feedback mechanism (e.g., a clicker) with a known reinforcer (e.g., verbal praise, positive feedback), as well as a task analysis for the target behavior(s). An added benefit of TAGteach is that it can be implemented by peers, and the peers who deliver the acoustical feedback (i.e., the sound of the clicker) can also acquire the target skill without rehearsing the behavior themselves (Quinn et al., 2017).
Feedback interventions

All coaches and instructors provide feedback to their athletes, although its form and quality may vary from coach to coach or sport to sport. The impact of coaching behaviors on athlete performance is of critical importance, particularly for young athletes (Adams, 2019; McMullen, Henderson, Ziegenfuss, & Newton, 2020). We recommend that coaches, instructors, and the practitioners who support them review coaching guidelines and engage in reflective practice on an ongoing basis (Ronkainen, Ryba, & Selänne, 2019; Santos et al., 2019). Feedback can take the form of spoken or written words, graphs, videos, gestures, or any combination of these to provide the player with information about past or future performance. Feedback is typically delivered after a performance
but as O’Donnell et al. (2021) note, feedback may also serve as a discriminative stimulus for specific behaviors that need to change in order to attain future reinforcement. Much of the behavior-analytic research on feedback in sport has simply demonstrated the efficacy of feedback rather than evaluating how best to deliver a specific form or type of feedback. In their review and meta-analysis of performance feedback in organizational settings, Sleiman et al. (2020) found that specific feedback is more effective than general feedback, that privacy does not influence feedback effectiveness, and that more frequent feedback is better than less frequent feedback. We contend that much of the research on performance feedback within organizational behavior management likely holds true in the context of sport, though additional research is needed to identify unique differences. Beyond vocal feedback, research on other forms of feedback in sport demonstrates that graphic (Anderson & Kirkpatrick, 2002; Brobst & Ward, 2002), publicly posted (Quinn et al., 2017; Smith & Ward, 2006), video (Aiken et al., 2012; Kelley & Miltenberger, 2016; Stokes et al., 2010), and written (Stokes & Luiselli, 2010) feedback are all effective. Practitioners using graphic feedback should review the graphs with athletes to ensure understanding; doing so also allows for the simultaneous delivery of praise for improvements, or of constructive feedback and encouragement when improvements are not made. If posting feedback publicly, ensure it is placed in a highly visible and frequently attended location, such as the inside of the locker room exit door. Video feedback requires an athlete to engage in the target behavior while being recorded; the recorded performance can then be reviewed by the athlete and their coach, who can be taught to provide positive and constructive feedback and other forms of reinforcement when appropriate.
Alternatively, the video of the athlete’s performance can be shown alongside an expert model (Mulqueen et al., 2021). In both cases, the athlete can be taught to score their own performance against a checklist to ensure accurate self-assessment (Giambrone & Miltenberger, 2019), though discussing the performance with an expert (e.g., a coach) is highly recommended.

Adaptive sports

The diverse behavior-analytic interventions described above, which have positively impacted the performance and satisfaction of athletes, are encouraging for broadening the scope of the field. As a discipline focused on supporting socially significant behavior change, we must promote
opportunities that encourage diverse populations to participate in sport and gain its associated benefits. Surprisingly, Rotta, Li, and Poling (2020) found that, of the 95 behavior-analytic articles on sport performance they reviewed, only five reported including participants with a developmental and/or intellectual disability. These results suggest that we do not yet have equal representation within our research; we are hopeful, however, that the growing interest in this area of application extends to persons with disabilities (Luiselli, 2016; Martin, 2017). Given the large number of behavior analysts currently supporting individuals with disabilities within their area of practice, we encourage behavior analysts to consider teaching skills related to sport participation. Compelling evidence supports that participating in sport can offer persons with disabilities a supportive social environment and an engaging way to meet physical activity guidelines (Allender, Cowburn, & Foster, 2006; Eime, Young, Harvey, Charity, & Payne, 2013; Luiselli, Woods, Keary, & Parenteau, 2013; Murcia & Kreutz, 2012). In adaptive programming, modifications are made to activities at the outset to allow persons with disabilities the flexibility to fully experience the sport. Given that our field has an ethical obligation to individualize programming (Behavior Analyst Certification Board, 2020), a natural synergy exists between our practice and adaptive programming. For example, Pontone, Vause, and Zonneveld (2021) found that, of 19 adaptive dance programs for people with neurodevelopmental disabilities, half included at least one behavior-analytically informed teaching strategy (e.g., token economies, prompting hierarchies) to support participants’ needs.
While variation in curricula exists (e.g., choreography, dancercise, cultural dance forms), all of the adaptive dance programs that incorporated a behavior-analytic teaching strategy reported a positive impact on participants’ performance. These results suggest that behavior analysts can contribute to effective adaptive programming; however, practitioners interested in adaptive programming should engage in multidisciplinary consultation with adaptive programming specialists to ethically expand their scope of practice.

Additional considerations

Ethical practice and consultation

While referenced in earlier sections of this chapter, we want to again stress that practitioners and consultants are responsible for attending to both scope of practice and scope of competence as described within the Ethics Code for Behavior
Analysts (Behavior Analyst Certification Board, 2020). The BACB Task List and state licensure laws describe the scope of practice in which credentialed and/or licensed behavior analysts may engage (see 1.02, Conforming with Legal and Professional Requirements). Scope of competence describes “the professional activities a behavior analyst can consistently perform with proficiency” (Behavior Analyst Certification Board, 2020). As a result, while activities might fall within one’s scope of practice, they may not necessarily fall within an individual practitioner’s scope of competence (e.g., contingency contracting, TAG teaching, motivational interviewing, ACT and ACT training). Practitioners have an ethical responsibility to be aware of contingencies that might lead them to step outside their personal scope of competence (Brodhead, Quigley, & Wilczynski, 2018). Therefore, we stress the importance of frequent and ongoing self-assessment to identify personal limitations, as well as the importance of expanding one’s scope of competence and engaging in opportunities to maintain fluency within it (see Core Principle 4, Ensure their Competence; 1.04, Practicing within a Defined Role; 1.05, Practicing within Scope of Competence; 1.06, Maintaining Competence; and 3.03, Accepting Clients). Normand and Donohue (2022) provide an overview of ethical considerations for practitioners who seek to conduct research.

Training in health, sport, and fitness

At the time of this writing, no graduate training programs exist primarily to train behavior analysts to practice in the areas of health, sport, or fitness. While no published data exist on the prevalence of HSF-focused coursework and supervised fieldwork within graduate training programs, we are aware that limited opportunities to learn about HSF applications, obtain supervised experience, and conduct research do exist. These opportunities exist primarily as a function of available faculty who can provide them for students. As a result, some additional training is necessary for behavior analysts who want to practice in HSF. The level and type of training will depend heavily on the type of work one wants to do as a health or fitness coach or sport performance consultant and the population with whom one wants to work (e.g., individuals with intellectual disabilities, youth or adult athletes, older adults). Identifying a target client population and a specific area of focus (a “niche”) can inform the types of educational, professional, and supervision experiences needed to increase scope of competence (Brodhead et al., 2018). For example, the training activities necessary to work as a health coach for adults will
be vastly different from what is required to design sport-specific training for collegiate athletes. Recent resources and publications exist to guide those seeking to enter professions under the HSF umbrella (Holland & Slowiak, 2021; Normand & Bober, 2020; Normand, Dallery, & Slanzi, 2021). Holland and Slowiak, in particular, provide guidance for contacting relevant coursework, reading relevant literature both within and outside of behavior analysis, engaging with relevant professional groups (e.g., the HSF SIG), obtaining additional guidance from a supervisor, coach, or mentor, and identifying relevant and credible professional credentials. One of the most frequent questions the HSF SIG receives from behavior analysts looking to increase their scope of competence in order to practice in HSF concerns the necessity of additional certifications and credentials. The short answer is that it depends on a number of factors, including: (a) current educational and training background, along with scope of competence in the desired area of practice; (b) desired target population (e.g., athletes, general population, adults, kids); (c) desired area of focus (e.g., nutrition coaching, health coaching, sports performance, general fitness); (d) whether a desired job position has specific requirements; and (e) whether reimbursement for services is sought. Generally speaking, practitioners should identify and obtain certifications accredited by a trusted third party, such as the National Commission for Certifying Agencies (NCCA) or the International Organization for Standardization (ISO) (Holland & Slowiak, 2021). Table 1 provides a list of recommended certifications, along with suggestions for interprofessional collaboration, for those interested in pursuing a career within ABA and HSF.
Similar to other areas of practice within behavior analysis, practitioners and consultants in HSF may find it necessary and beneficial to collaborate with other professionals from whom a client is seeking care and/or who have a vested interest in the outcomes associated with their work.

Summary

The information presented in this chapter is intended to support the growth of behavior-analytic practice within the areas of health, sport, and fitness. We highlighted health-related data, guidelines, and recommendations from global organizations, in addition to recommendations for those who pursue practice and consultation to improve outcomes related to general health, eating and nutrition, general fitness and physical activity, and sport performance within competitive, recreational, and adaptive sports. We summarized and made recommendations from evidence-based literature, both within and outside of behavior analysis, to promote the use of effective interventions. We highlighted ethical considerations and emphasized the importance of remaining within one’s scope of practice, and we ended the chapter with training-related recommendations to increase both scope of competence and scope of practice. Our hope is that this chapter serves as a foundation for the future expansion of behavior-analytic practice in HSF.

Table 1  Health, sport, and fitness certifications and opportunities for interprofessional collaboration.

Weight Management & General Health
  Recommended certifications: National Board Certified Health and Wellness Coach (National Board for Health & Wellness Coaching); Certified Health Coach (National Society of Health Coaches); Health Coach Certification (American Council on Exercise)
  Interprofessional collaboration: Family Doctor; Psychologist; Therapist

Nutrition & Healthy Eating
  Recommended certifications: Certified Nutrition Coach (National Academy of Sports Medicine); Fitness Nutrition Specialist (American Council on Exercise); Certified Nutritionist (International Sports Sciences Association); Certified Nutrition Coach (Precision Nutrition)
  Interprofessional collaboration: Family Doctor; Registered Dietician; Licensed Nutritionist; Psychologist; Therapist

Physical Activity & General Fitness
  Recommended certifications: Certified Personal Trainer (National Academy of Sports Medicine, International Sports Sciences Association, American Council on Exercise, National Strength and Conditioning Association, American College of Sports Medicine)
  Interprofessional collaboration: Family Doctor; Physical Therapist; Occupational Therapist

Athletic & Sport Performance
  Recommended certifications: Certified Strength and Conditioning Specialist (National Strength and Conditioning Association); Strength and Conditioning Coach (Collegiate Strength and Conditioning Coaches Association); Performance Enhancement Specialist (National Academy of Sports Medicine); Sports Performance Specialist (American Council on Exercise); Certified Mental Performance Consultant (Association for Applied Sport Psychology)
  Interprofessional collaboration: Sport Coaches and Instructors; Kinesiologist; Exercise Physiologist; Athletic Trainer; Sport Psychologist

Note. The name of the certifying organization appears in parentheses after each recommended certification.

References

Adams, J. J. (2019, February 1). Youth sports: Cost of coaching abuse is always high, and sometimes fatal. The Province. https://theprovince.com/sports/high-school/youth-sports-cost-of-coaching-abuse-is-always-high-and-sometimes-fatal.
Aiken, C. A., Fairbrother, J. T., & Post, P. G. (2012). The effects of self-controlled video feedback on the learning of the basketball set shot. Frontiers in Psychology, 3(1), 338–345. https://doi.org/10.3389/fpsyg.2012.00338.
Aljadeff-Abergel, E., Peterson, S. M., Wiskirchen, R. R., Hagen, K. K., & Cole, M. L. (2017). Evaluating the temporal location of feedback: Providing feedback following performance vs. prior to performance. Journal of Organizational Behavior Management, 37(2), 171–195. https://doi.org/10.1080/01608061.2017.1309332.
Allen, K. D. (1998). The use of an enhanced simplified habit-reversal procedure to reduce disruptive outbursts during athletic performance. Journal of Applied Behavior Analysis, 31(3), 489–492. https://doi.org/10.1901/jaba.1998.31-489.
Allender, S., Cowburn, G., & Foster, C. (2006). Understanding participation in sport and physical activity among children and adults: A review of qualitative studies. Health Education Research, 21(6), 826–835. https://doi.org/10.1093/her/cyl063.
American Nutrition Association. (n.d.). State regulation of nutrition practice. https://theana.org/advocate.
An, R., Shi, Y., Shen, J., Bullard, T., Liu, G., Yang, Q., et al. (2021). Effect of front-of-package nutrition labeling on food purchases: A systematic review. Public Health, 191, 59–67. https://doi.org/10.1016/j.puhe.2020.06.035.
Anderson, G., & Kirkpatrick, M. A. (2002). Variable effects of a behavioral treatment package on the performance of inline roller speed skaters. Journal of Applied Behavior Analysis, 35(2), 195–198. https://doi.org/10.1901/jaba.2002.35-195.
Andrade, L. F., Barry, D., Litt, M. D., & Petry, N. M. (2014). Maintaining high activity levels in sedentary adults with a reinforcement-thinning schedule. Journal of Applied Behavior Analysis, 47(3), 523–536. https://doi.org/10.1002/jaba.147.
Anzman-Frasca, S., Savage, J. S., Marini, M. E., Fisher, J. O., & Birch, L. L. (2012). Repeated exposure and associative conditioning promote preschool children's liking of vegetables. Appetite, 58(2), 543–553. https://doi.org/10.1016/j.appet.2011.11.012.
Appelhans, B. M., French, S. A., Olinger, T., Bogucki, M., Janssen, I., Avery-Mamer, E. F., et al. (2018). Leveraging delay discounting for health: Can time delays influence food choice? Appetite, 126, 16–25. https://doi.org/10.1016/j.appet.2018.03.010.
Aragona, J., Cassady, J., & Drabman, R. S. (1975). Treating overweight children through parental training and contingency contracting. Journal of Applied Behavior Analysis, 8(3), 269–278. https://doi.org/10.1901/jaba.1975.8-269.
Barrish, H. H., Saunders, M., & Wolf, M. M. (1969). Good behavior game: Effects of individual contingencies for group consequences on disruptive behavior in a classroom. Journal of Applied Behavior Analysis, 2(2), 119–124. https://doi.org/10.1901/jaba.1969.2-119.



Practice and consultation in health, sport, and fitness


Bassett, D. R., Jr., Toth, L. P., LaMunion, S. R., & Crouter, S. E. (2017). Step count: A review of measurement considerations and health-related applications. Sports Medicine, 47(7), 1303–1315. https://doi.org/10.1007/s40279-016-0663-1.
Batchelder, S. R., & Washington, W. D. (2021). Effects of incentives and prompts on sedentary walking behaviors in university employees. Behavior Analysis: Research and Practice, 21(3), 219–237. https://doi.org/10.1037/bar0000214.
Baudry, L., Leroy, D., & Chollet, D. (2006). The effect of combined self- and expert-modeling on the performance of the double leg circle on the pommel horse. Journal of Sports Sciences, 24(10), 1055–1063. https://doi.org/10.1080/02640410500432243.
Bechtel, N. T., McGee, H. M., Huitema, B. E., & Dickinson, A. M. (2015). The effects of the temporal placement of feedback on performance. The Psychological Record, 65(3), 425–434. https://doi.org/10.1007/s40732-015-0117-4.
Behavior Analyst Certification Board. (2020). Ethics code for behavior analysts. https://bacb.com/wp-content/ethics-code-for-behavior-analysts/.
Behavior Analyst Certification Board. (n.d.). BACB certificant data. https://www.bacb.com/bacb-certificant-data/.
Berger, E., Garcia, Y., Catagnus, R., & Temple, J. (2021). The effect of acceptance and commitment training on improving physical activity during the COVID-19 pandemic. Journal of Contextual Behavioral Science, 20, 70–78. https://doi.org/10.1016/j.jcbs.2021.02.005.
Best, D. (1974). The aesthetic in sport. British Journal of Aesthetics, 14(3), 197–213. https://doi.org/10.1093/bjaesthetics/14.3.197.
Binder, C. (1993). Behavioral fluency: A new paradigm. Educational Technology, 33(10), 8–14. http://www.jstor.org/stable/44428106.
Bloomfield, B. S., Fischer, A. J., Clark, R. R., & Dove, M. B. (2019). Treatment of food selectivity in a child with avoidant/restrictive food intake disorder through parent teleconsultation. Behavior Analysis in Practice, 12, 33–43. https://doi.org/10.1007/s40617-018-0251-y.
Boerke, K. W., & Reitman, D. (2013). Token economies. In W. W. Fisher, C. C. Piazza, & H. S. Roane (Eds.), Handbook of applied behavior analysis (1st ed., pp. 370–382). Guilford.
Borge, S. (2020). What is sport? Sport, Ethics and Philosophy, 15(3), 308–330. https://doi.org/10.1080/17511321.2020.1760922.
Boyer, E., Miltenberger, R. G., Batsche, C., & Fogel, V. (2009). Video modeling by experts with video feedback to enhance gymnastics skills. Journal of Applied Behavior Analysis, 42(4), 855–860. https://doi.org/10.1901/jaba.2009.42-855.
Brakenridge, C. L., Fjeldsoe, B. S., Young, D. C., Winkler, E. A., Dunstan, D. W., Straker, L. M., et al. (2016). Evaluating the effectiveness of organisational-level strategies with or without an activity tracker to reduce office workers’ sitting time: A cluster-randomised trial. International Journal of Behavioral Nutrition and Physical Activity, 13, 115. https://doi.org/10.1186/s12966-016-0441-3.
Brobst, B., & Ward, P. (2002). Effects of public posting, goal setting, and oral feedback on the skills of female soccer players. Journal of Applied Behavior Analysis, 35(3), 247–257. https://doi.org/10.1901/jaba.2002.35-247.
Brodhead, M. T., Quigley, S. P., & Wilczynski, S. M. (2018). A call for discussion about scope of competence in behavior analysis. Behavior Analysis in Practice, 11(4), 424–435. https://doi.org/10.1007/s40617-018-00303-8.
Butler, R. J., & Hardy, L. (1992). The performance profile: Theory and application. The Sport Psychologist, 6(3), 253–264. https://doi.org/10.1123/tsp.6.3.253.
Butryn, M. L., Webb, V., & Wadden, T. A. (2011). Behavioral treatment of obesity. The Psychiatric Clinics of North America, 34(4), 841–859. https://doi.org/10.1016/j.psc.2011.08.006.
Cassey, H. J., Washio, Y., & Hantula, D. A. (2016). The good nutrition game: Extending the good behavior game to promote fruit and vegetable intake. Delaware Medical Journal, 88(11), 342–345.


Applied behavior analysis advanced guidebook

Centers for Disease Control and Prevention. (2020). Higher daily step count linked with lower all-cause mortality. https://www.cdc.gov/media/releases/2020/p0324-daily-step-count.html.
Centers for Disease Control and Prevention. (2021). About chronic diseases. https://www.cdc.gov/chronicdisease/about/index.htm.
Chaudhry, U. A., Wahlich, C., Fortescue, R., Cook, D. G., Knightly, R., & Harris, T. (2020). The effects of step-count monitoring interventions on physical activity: Systematic review and meta-analysis of community-based randomised controlled trials in adults. International Journal of Behavioral Nutrition and Physical Activity, 17, 129. https://doi.org/10.1186/s12966-020-01020-8.
Chen, H. J., Weng, S. H., Cheng, Y. Y., Lord, A., Lin, H. H., & Pan, W. H. (2017). The application of traffic-light food labelling in a worksite canteen intervention in Taiwan. Public Health, 150, 17–25. https://doi.org/10.1016/j.puhe.2017.04.005.
Chen, J., Gemming, L., Hanning, R., & Allman-Farinelli, M. (2018). Smartphone apps and the nutrition care process: Current perspectives and future considerations. Patient Education and Counseling, 101(4), 750–757. https://doi.org/10.1016/j.pec.2017.11.011.
Christina, R. W., Barresi, J. V., & Shaffner, P. (1990). The development of response selection accuracy in a football linebacker using video training. The Sport Psychologist, 4(1), 11–17. https://doi.org/10.1123/tsp.4.1.11.
Condrasky, M. D., Griffin, S. G., Catalano, P. M., & Clark, C. (2010). A formative evaluation of the cooking with a chef program. Journal of Extension, 48(2), 2FEA1. https://archives.joe.org/joe/2010april/pdf/JOE_v48_2a1.pdf.
Cooke, L. J., Chambers, L. C., Anez, E. V., Croker, H. A., Boniface, D., Yeomans, M. R., et al. (2011). Eating for pleasure or profit: The effect of incentives on children's enjoyment of vegetables. Psychological Science, 22(2), 190–196. https://doi.org/10.1177/0956797610394662.
Cooper, J. O., Heron, T. E., & Heward, W. L. (2020). Applied behavior analysis (3rd ed.). Pearson.
Courtemanche, A. B., Hopson, A., & Groskreutz, N. C. (2021). An evaluation of daily competition and incentives on gym attendance. Behavior Analysis: Research and Practice, 21(3), 238–247. https://doi.org/10.1037/bar0000221.
Cui, Q., Xia, Y., Wu, Q., Chang, Q., Niu, K., & Zhao, Y. (2021a). A meta-analysis of the reproducibility of food frequency questionnaires in nutritional epidemiological studies. The International Journal of Behavioral Nutrition and Physical Activity, 18(1), 12. https://doi.org/10.1186/s12966-020-01078-4.
Cui, Q., Xia, Y., Wu, Q., Chang, Q., Niu, K., & Zhao, Y. (2021b). Validity of the food frequency questionnaire for adults in nutritional epidemiological studies: A systematic review and meta-analysis. Critical Reviews in Food Science and Nutrition, 1–19. https://doi.org/10.1080/10408398.2021.1966737.
Da Luz, F. Q., Hay, P., Touyz, S., & Sainsbury, A. (2018). Obesity with comorbid eating disorders: Associated health risks and treatment approaches. Nutrients, 10(7), 829–838. https://doi.org/10.3390/nu10070829.
De Lorenzo, A., Gratteri, S., Gualtieri, P., Cammerano, A., Bertucci, P., & Di Renzo, L. (2019). Why primary obesity is a disease? Journal of Translational Medicine, 17(1), 169–182. https://doi.org/10.1186/s12967-019-1919-y.
de Vries, H. J., Kooiman, T. J., van Ittersum, M. W., van Brussel, M., & de Groot, M. (2016). Do activity monitors increase physical activity in adults with overweight or obesity? A systematic review and meta-analysis. Obesity, 24(10), 2078–2091. https://doi.org/10.1002/oby.21619.
Ding, D., Mutrie, N., Bauman, A., Pratt, M., Hallal, P. R., & Powell, K. (2020). Physical activity guidelines 2020: Comprehensive and inclusive recommendations to activate populations. The Lancet, 396(10265), 1780–1782. https://doi.org/10.1016/S0140-6736(20)32229-7.
Donaldson, J. M., & Normand, M. P. (2009). Using goal setting, self-monitoring, and feedback to increase caloric expenditure in obese adults. Behavioral Interventions, 24(2), 73–83. https://doi.org/10.1002/bin.277.




Duan, Y., Shang, B., Liang, W., Du, G., Yang, M., & Rhodes, R. E. (2021). Effects of eHealth-based multiple health behavior change interventions on physical activity, healthy diet, and weight in people with noncommunicable diseases: Systematic review and meta-analysis. Journal of Medical Internet Research, 23(2), e23786. https://doi.org/10.2196/23786.
Eckard, M. L., Kuwabara, H. C., & Van Camp, C. M. (2019). Using heart rate as a physical activity metric. Journal of Applied Behavior Analysis, 52(3), 718–732. https://doi.org/10.1002/jaba.581.
Eime, R. M., Young, J. A., Harvey, J. T., Charity, M. J., & Payne, W. R. (2013). A systematic review of the psychological and social benefits of participation in sport for children and adolescents: Informing development of a conceptual model of health through sport. The International Journal of Behavioral Nutrition and Physical Activity, 10(1), 98. https://doi.org/10.1186/1479-5868-10-98.
Elmore, T., Healy, O., Lydon, S., & Murray, C. (2018). An evaluation of teaching with acoustical guidance (TAGteach) for improving passing skills among university rugby athletes. Journal of Sport Behavior, 41(4), 390–401.
Ennett, T. M., Zonneveld, K. L. M., Thomson, K. M., Vause, T., & Ditor, D. (2020). Comparison of two TAGteach error-correction procedures to teach beginner yoga poses to adults. Journal of Applied Behavior Analysis, 53(1), 222–236. https://doi.org/10.1002/jaba.550.
Epstein, L. H., Gordy, C. C., Raynor, H. A., Beddome, M., Kilanowski, C. K., & Paluch, R. (2001). Increasing fruit and vegetable intake and decreasing fat and sugar intake in families at risk for childhood obesity. Obesity Research, 9(3), 171–178. https://doi.org/10.1038/oby.2001.18.
Epstein, L. H., Paluch, R. A., Beecher, M. D., & Roemmich, J. N. (2008). Increasing healthy eating vs. reducing high energy-dense foods to treat pediatric obesity. Obesity, 16(2), 318–326. https://doi.org/10.1038/oby.2007.61.
Erez, M. (1977). Feedback: A necessary condition for the goal setting-performance relationship. Journal of Applied Psychology, 62(5), 624–627. https://doi.org/10.1037/0021-9010.62.5.624.
Evenson, K. R., Wen, F., Metzger, J. S., & Herring, A. H. (2015). Physical activity and sedentary behavior patterns using accelerometry from a national sample of United States adults. International Journal of Behavioral Nutrition and Physical Activity, 12, 20. https://doi.org/10.1186/s12966-015-0183-7.
Farrow, D. (2013). Practice-enhancing technology: A review of perceptual training applications in sport. Sports Technology, 6, 170–176. https://doi.org/10.1080/19346182.2013.875031.
Feehan, L. M., Geldman, J., Sayre, E. C., Park, C., Ezzat, A. M., Yoo, J. Y., et al. (2018). Accuracy of Fitbit devices: Systematic review and narrative syntheses of quantitative data. JMIR mHealth and uHealth, 6(8), e10527. https://doi.org/10.2196/10527.
Fellner, D. J., & Sulzer-Azaroff, B. (1984). A behavioral analysis of goal setting. Journal of Organizational Behavior Management, 6(1), 33–51. https://doi.org/10.1300/J075v06n01_03.
Fery, Y. A., & Ponserre, S. (2001). Enhancing the control of force in putting by video game training. Ergonomics, 44(12), 1025–1037. https://doi.org/10.1080/00140130110084773.
Finkelstein, E. A., Khavjou, O. A., Thompson, H., Trogdon, J. G., Pan, L., Sherry, B., et al. (2012). Obesity and severe obesity forecasts through 2030. American Journal of Preventive Medicine, 42(6), 563–570. https://doi.org/10.1016/j.amepre.2011.10.026.
Fitterling, J. M., & Ayllon, T. (1983). Behavioral coaching in classical ballet: Enhancing skill development. Behavior Modification, 7(3), 345–368. https://doi.org/10.1177/01454455830073004.
Forman, E. M., & Butryn, M. L. (2015). A new look at the science of weight control: How acceptance and commitment strategies can address the challenge of self-regulation. Appetite, 84, 171–180. https://doi.org/10.1016/j.appet.2014.10.004.
Forman, E. M., Butryn, M. L., Manasse, S. M., Crosby, R. D., Goldstein, S. P., Wyckoff, E. P., et al. (2016). Acceptance-based versus standard behavioral treatment for obesity: Results from the mind your health randomized controlled trial. Obesity, 24(10), 2050–2056. https://doi.org/10.1002/oby.21601.


Franckle, R., Levy, D., Macias-Navarro, L., Rimm, E., & Thorndike, A. (2018). Traffic-light labels and financial incentives to reduce sugar-sweetened beverage purchases by low-income Latino families: A randomized controlled trial. Public Health Nutrition, 21(8), 1426–1434. https://doi.org/10.1017/s1368980018000319.
Galbraith, L. A., & Normand, M. P. (2017). Step it UP! Using the good behavior game to increase physical activity with elementary school students at recess. Journal of Applied Behavior Analysis, 50(4), 856–860. https://doi.org/10.1002/jaba.402.
Galmiche, M., Déchelotte, P., Lambert, G., & Tavolacci, M. P. (2019). Prevalence of eating disorders over the 2000–2018 period: A systematic literature review. The American Journal of Clinical Nutrition, 109(5), 1402–1413. https://doi.org/10.1093/ajcn/nqy342.
Garcia, A. L., Reardon, R., Hammond, E., Parrett, A., & Gebbie-Diben, A. (2017). Evaluation of the “eat better feel better” cooking programme to tackle barriers to healthy eating. International Journal of Environmental Research and Public Health, 14(4), 380. https://doi.org/10.3390/ijerph14040380.
Geller, K., Lippke, S., & Nigg, C. R. (2017). Future directions of multiple behavior change research. Journal of Behavioral Medicine, 40, 194–202. https://doi.org/10.1007/s10865-016-9809-8.
Gervis, M., & Goldman, A. (2020). The flourishing footballers programme: Using psycho-education to develop resilience through ACT. Journal of Contextual Behavioral Science, 18, 146–151. https://doi.org/10.1016/j.jcbs.2020.09.004.
Giambrone, J., & Miltenberger, R. G. (2019). Using video self-evaluation to enhance performance in competitive dancers. Behavior Analysis in Practice, 13, 445–453. https://doi.org/10.1007/s40617-019-00395-w.
Gilson, N. D., Puig-Ribera, A., McKenna, J., Brown, W. J., Burton, N. W., & Cooke, C. B. (2009). Do walking strategies to increase physical activity reduce reported sitting in workplaces: A randomized control trial. International Journal of Behavioral Nutrition and Physical Activity, 6, 43. https://doi.org/10.1186/1479-5868-6-43.
Goodyear, V. A., Wood, G., Skinner, B., & Thompson, J. L. (2021). The effect of social media interventions on physical activity and dietary behaviours in young people and adults: A systematic review. International Journal of Behavioral Nutrition and Physical Activity, 18, 72. https://doi.org/10.1186/s12966-021-01138-3.
Green, N., Sigurdsson, S., & Wilder, D. A. (2016). Decreasing bouts of prolonged sitting among office workers. Journal of Applied Behavior Analysis, 49(3), 717–722. https://doi.org/10.1002/jaba.309.
Guarino, L. (2015). Is dance a sport? A twenty-first-century debate. Journal of Dance Education, 15(2), 77–80. https://doi.org/10.1080/15290824.2015.978334.
Hanashiro-Parson, H., & Miltenberger, R. G. (2021). An evaluation of token and monetary reinforcement on physical activity exhibited by adults with intellectual disabilities in a group home setting. Behavior Analysis: Research and Practice, 21(3), 184–194. https://doi.org/10.1037/bar0000215.
Harris, M., Case, L. B., Meindl, J. N., Powell, D., Hunter, W. C., & Delgado, D. (2020). Using behavioral skills training with video feedback to prevent risk of injury in youth female soccer athletes. Behavior Analysis in Practice, 13, 811–819. https://doi.org/10.1007/s40617-020-00473-4.
Harrison, A. M., & Pyles, D. A. (2013). The effects of verbal instruction and shaping to improve tackling by high school football players. Journal of Applied Behavior Analysis, 46(2), 518–522. https://doi.org/10.1002/jaba.36.
Hausman, N. L., Borrero, J. C., Fisher, A., & Kahng, S. (2014). Improving accuracy of portion-size estimations through a stimulus equivalence paradigm. Journal of Applied Behavior Analysis, 47(3), 1–15. https://doi.org/10.1002/jaba.139.
Hausner, J., Olsen, A., & Moller, P. (2012). Mere exposure and flavour-flavour learning increase 2–3-year-old children's acceptance of a novel vegetable. Appetite, 58(3), 1152–1159. https://doi.org/10.1016/j.appet.2012.03.009.




Hayes, S. C., Strosahl, K. D., & Wilson, K. G. (1999). Acceptance and commitment therapy: An experiential approach to behavior change. Guilford.
Healthy People 2020. (n.d.). Access to foods that support healthy eating patterns. https://www.healthypeople.gov/2020/topics-objectives/topic/social-determinants-health/interventions-resources/access-to-foods-that-support-healthy-eating-patterns.
Hegarty, J., & Huelsmann, C. (2020). ACT in sport: Improve performance through mindfulness, acceptance, and commitment. Dark River.
Holland, M. A., & Slowiak, J. M. (2021). Practice and ethical considerations for behavior analysts in health, sport, and fitness. Behavior Analysis: Research and Practice, 21(3), 314–325. https://doi.org/10.1037/bar0000188.
Hyland, R., Stacy, R., Adamson, A., & Moynihan, P. (2006). Nutrition-related health promotion through an afterschool project: The responses of children and their families. Social Science & Medicine, 62(3), 758–768. https://doi.org/10.1016/j.socscimed.2005.06.032.
Jackson, M. L., Williams, W. L., Hayes, S. C., Humphreys, T., Gauthier, B., & Westwood, R. (2016). Whatever gets your heart pumping: The impact of implicitly selected reinforcer-focused statements on exercise intensity. Journal of Contextual Behavioral Science, 5(1), 48–57. https://doi.org/10.1016/j.jcbs.2015.11.002.
Jeffery, R. W. (2011). Financial incentives and weight control. Preventive Medicine, 55, 61–67. https://doi.org/10.1016/j.ypmed.2011.12.024.
Jones, B. A., Madden, G. J., Wengreen, H. J., Aguilar, S. S., & Desjardins, E. A. (2014). Gamification of dietary decision-making in an elementary school cafeteria. PLoS One, 9(4), e93872. https://doi.org/10.1371/journal.pone.0093872.
Junaid, H., Bulla, A. J., Benjamin, M. B., Wind, T., & Nazaruk, D. (2020). Using self-management and social media to increase steps in sedentary college students. Behavior Analysis in Practice, 14, 734–744. https://doi.org/10.1007/s40617-020-00445-8.
Karekla, M., Nikolaou, P., & Merwin, R. M. (2022). Randomized clinical trial evaluating AcceptME – A digital gamified acceptance and commitment early intervention program for individuals at high risk for eating disorders. Journal of Clinical Medicine, 11(7), 1775. https://doi.org/10.3390/jcm11071775.
Kelley, H., & Miltenberger, R. G. (2016). Using video feedback to improve horseback riding skills. Journal of Applied Behavior Analysis, 49(1), 138–147. https://doi.org/10.1002/jaba.272.
Keys, A., Karvonen, N., Kimura, N., & Taylor, H. L. (1972). Indices of relative weight and obesity. Journal of Chronic Diseases, 25(3), 329–343. https://doi.org/10.1093/ije/dyu058.
Kim, H. H., & Jung, J. H. (2021). Social isolation and psychological distress during the COVID-19 pandemic: A cross-national analysis. Gerontologist, 61, 103–113. https://doi.org/10.1093/geront/gnaa168.
Kladopoulos, C. N., & McComas, J. J. (2001). The effects of form training on foul-shooting performance in members of a women's college basketball team. Journal of Applied Behavior Analysis, 34(3), 329–332. https://doi.org/10.1901/jaba.2001.34-329.
Koop, S., & Martin, G. L. (1983). Evaluation of a coaching strategy to reduce swimming stroke errors with beginning age-group swimmers. Journal of Applied Behavior Analysis, 16(4), 447–460. https://doi.org/10.1901/jaba.1983.16-447.
Krebs, C. A., & Nyein, K. D. (2021). Increasing physical activity in adults using self-tailored deposit contracts. Behavior Analysis: Research and Practice, 21(3), 174–183. https://doi.org/10.1037/bar0000222.
Krukauskas, F., Miltenberger, R., & Gavonni, P. (2019). Using auditory feedback to improve striking for mixed martial arts. Behavioral Interventions, 34(3), 419–428. https://doi.org/10.1002/bin.1665.
Kuhl, S., Rudrud, E. H., Witts, B. N., & Schulze, K. A. (2015). Classroom-based interdependent group contingencies increase children’s physical activity. Journal of Applied Behavior Analysis, 48(3), 601–612. https://doi.org/10.1002/jaba.219.


Kurti, A. N., & Dallery, J. (2013). Internet-based contingency management increases walking in sedentary adults. Journal of Applied Behavior Analysis, 46(3), 568–581. https://doi.org/10.1002/jaba.58.
Lakkakula, A., Geaghan, J., Zanovec, M., Pierce, S., & Tuuri, G. (2010). Repeated taste exposure increases liking for vegetables by low-income elementary school children. Appetite, 55(2), 226–231. https://doi.org/10.1016/j.appet.2010.06.003.
Lancioni, G. E., & O’Reilly, M. F. (2002). Teaching food preparation skills to people with intellectual disabilities: A literature overview. Journal of Applied Research in Intellectual Disabilities, 15(3), 236–253. https://doi.org/10.1046/j.1468-3148.2002.00122.x.
Landin, D., & Hebert, E. P. (1999). The influence of self-talk on the performance of skilled female tennis players. Journal of Applied Sport Psychology, 11(2), 263–282. https://doi.org/10.1080/10413209908404204.
Larson, T. A., Normand, M. P., & Hustyi, K. M. (2011). Preliminary evaluation of an observation system for recording physical activity in children. Behavioral Interventions, 26(3), 193–203. https://doi.org/10.1002/bin.332.
Larson, T. A., Normand, M. P., Morley, A. J., & Miller, B. G. (2014). Further evaluation of a functional analysis of moderate-to-vigorous physical activity in young children. Journal of Applied Behavior Analysis, 47(2), 219–230. https://doi.org/10.1002/jaba.127.
LeBlanc, L. A., Lerman, D. C., & Normand, M. P. (2020). Behavior analytic contributions to public health and telehealth. Journal of Applied Behavior Analysis, 53(3), 1208–1218. https://doi.org/10.1002/jaba.749.
Lefferts, E. C., Saavedra, J. M., Song, B. K., & Lee, D. C. (2022). Effect of the COVID-19 pandemic on physical activity and sedentary behavior in older adults. Journal of Clinical Medicine, 11(6), 1568. https://doi.org/10.3390/jcm11061568.
Lerner, B. S., Ostrow, A. C., Yura, M. T., & Etzel, E. F. (1996). The effects of goal-setting and imagery training programs on the free-throw performance of female collegiate basketball players. The Sport Psychologist, 10(4), 382–397. https://doi.org/10.1123/tsp.10.4.382.
Levin, L., & Carr, E. G. (2001). Food selectivity and problem behavior in children with developmental disabilities: Analysis and intervention. Behavior Modification, 25(3), 443–470. https://doi.org/10.1177/0145445501253004.
Lindsley, O. R. (1992). Precision teaching: Discoveries and effects. Journal of Applied Behavior Analysis, 25(1), 51–57. https://doi.org/10.1901/jaba.1992.25-51.
Little, L. M., & Simpson, T. L. (2000). An acceptance-based performance enhancement intervention for collegiate athletes. In M. J. Dougher (Ed.), Clinical behavior analysis (pp. 231–244). Context Press/New Harbinger.
Locke, E. A., & Latham, G. P. (1990). A theory of goal setting & task performance. Prentice-Hall.
Locke, E. A., & Latham, G. P. (Eds.). (2013). New developments in goal setting and task performance. Routledge/Taylor & Francis. https://doi.org/10.4324/9780203082744.
Loewenstein, G., Price, J., & Volpp, K. (2016). Habit formation in children: Evidence from incentives for healthy eating. Journal of Health Economics, 45, 47–54. https://doi.org/10.1016/j.jhealeco.2015.11.004.
Lokke, G. E. H., Lokke, J. A., & Arntzen, E. (2008). Precision teaching, frequency-building, and ballet dancing. Journal of Precision Teaching and Celeration, 24, 21–27.
Lowe, C. F., Horne, P. J., Tapper, K., Bowdery, M., & Egerton, C. (2004). Effects of a peer modeling and rewards-based intervention to increase fruit and vegetable consumption in children. European Journal of Clinical Nutrition, 58, 510–522. https://doi.org/10.1038/sj.ejcn.1601838.
Luiselli, J. K. (2012). Behavioral sport psychology consulting: A review of some practice concerns and recommendations. Journal of Sport Psychology in Action, 3(1), 41–51. https://doi.org/10.1080/21520704.2011.653048.
Luiselli, J. K. (2016). Increasing exercise-physical activity. In J. K. Luiselli (Ed.), Behavioral health promotion and intervention for people with intellectual and developmental disabilities (pp. 73–93). Springer. https://doi.org/10.1007/978-3-319-27297-9_4.




Luiselli, J. K., Duncan, N. G., Keary, P., Nelson, E. G., Parenteau, R. E., & Woods, K. E. (2013). Behavioral coaching of track athletes with developmental disabilities: Evaluation of sprint performance during training and Special Olympics competition. Journal of Clinical Sport Psychology, 7, 264–274. https://doi.org/10.1123/jcsp.7.4.264.
Luiselli, J. K., & Reed, D. D. (Eds.). (2011). Behavioral sport psychology: Evidence-based approaches to performance enhancement. Springer.
Luiselli, J. K., Ricciardi, J. N., & Gilligan, K. (2005). Liquid fading to establish milk consumption by a child with autism. Behavioral Interventions, 20(2), 155–163. https://doi.org/10.1002/bin.187.
Luiselli, J. K., Woods, K. E., Keary, P., & Parenteau, R. E. (2013). Practitioner attitudes and beliefs about exercise, athletic, and recreational activities for children and youth with intellectual and developmental disabilities. Journal of Developmental and Physical Disabilities, 25, 485–492. https://doi.org/10.1007/s10882-012-9323-z.
Lundgren, T., Reinebo, G., Fröjmark, M. J., Jäder, E., Näslund, M., Svartvadet, P., et al. (2021). Acceptance and commitment training for ice hockey players: A randomized controlled trial. Frontiers in Psychology, 12, 685260. https://doi.org/10.3389/fpsyg.2021.685260.
Lundgren, T., Reinebo, G., Näslund, M., & Parling, T. (2020). Acceptance and commitment training to promote psychological flexibility in ice hockey performance: A controlled group feasibility study. Journal of Clinical Sport Psychology, 14(2), 170–181. https://doi.org/10.1123/jcsp.2018-0081.
Mahoney, J., & Hanrahan, S. J. (2011). A brief educational intervention using acceptance and commitment therapy: Four injured athletes’ experiences. Journal of Clinical Sport Psychology, 5(3), 252–273. https://doi.org/10.1123/jcsp.5.3.252.
Mann, R. A. (1972). The behavior-therapeutic use of contingency contracting to control an adult behavior problem: Weight control. Journal of Applied Behavior Analysis, 5(2), 99–109. https://doi.org/10.1901/jaba.1972.5-99.
Martin, G. L. (2019). Applied sport psychology: Practical guidelines from behavior analysis (6th ed.). Sport Science Press.
Martin, G. L., Thomson, K., & Regehr, K. (2004). Studies using single-subject designs in sport psychology: 30 years of research. The Behavior Analyst, 27, 123–140. https://doi.org/10.1007/BF03393185.
Martin, G. L., Toogood, A., & Tkachuck, G. (1997). Behavioral assessment forms for sport psychology consulting. Winnipeg, Canada: Sport Science Press.
Martin, J. (2015). Behavior analysis in sport and exercise psychology. Behavior Analysis: Research and Practice, 15(2), 148–151. https://doi.org/10.1037/bar0000018.
Martin, J. J. (2017). Handbook of disability sport and exercise psychology. Oxford University Press. https://doi.org/10.1093/oso/9780190638054.001.0001.
May, B. K., & Treadwell, R. E. (2020). Increasing exercise intensity: Teaching high-intensity interval training to individuals with developmental disabilities using a lottery reinforcement system. Behavior Analysis in Practice, 13, 826–837. https://doi.org/10.1007/s40617-020-00428-9.
Mazza, M. C., Dynan, L., Siegel, R. M., & Tucker, A. L. (2018). Nudging healthier choices in a hospital cafeteria: Results from a field study. Health Promotion Practice, 19(6), 925–934. https://doi.org/10.1177/1524839917740119.
McCurdy, A. J., & Normand, M. P. (2022). The effects of a group-deposit prize draw on the step counts of sedentary and low active adults. Behavioral Interventions, 37(3), 700–712. https://doi.org/10.1002/bin.1869.
McDowell, C., McIntyre, C., Bones, R., & Keenan, M. (2002). Teaching component skills to improve golf swing. Journal of Precision Teaching and Celeration, 18(2), 61–66.
McMullen, B., Henderson, H. L., Ziegenfuss, D. H., & Newton, M. (2020). Coaching behaviors as sources of relation-inferred self-efficacy (RISE) in American male high school athletes. International Sport Coaching Journal, 7(1), 52–60. https://doi.org/10.1123/iscj.2018-0089.


Mendez, S., Kubota, J., Widaman, A. M., & Gieng, J. (2021). Advance quantity meal preparation pilot program improves home-cooked meal consumption, cooking attitudes, and self-efficacy. Journal of Nutrition Education and Behavior, 53(7), 608–613. https://doi.org/10.1016/j.jneb.2020.12.014.
Mendez, S., Tseng, H. Y., Kubota, J., Widaman, A., & Gieng, J. (2020). A six-week group-based advanced quantity meal prep program improves cooking attitudes, behaviors, and body composition. Current Developments in Nutrition, 4(Suppl 2), 1332. https://doi.org/10.1093/cdn/nzaa059_049.
Mias, J. R., Dittrich, G. A., & Miltenberger, R. G. (2021). Effects of a behavioral coaching treatment package on physical activity and adherence. Behavior Analysis: Research and Practice, 22(1), 50–65. https://doi.org/10.1037/bar0000230.
Miller, B. G., Valbuena, D. A., Zerger, H. M., & Miltenberger, R. G. (2018). Evaluating public posting, goal setting, and rewards to increase physical activity during school recess. Behavioral Interventions, 33(3), 237–250. https://doi.org/10.1002/bin.1631.
Moesch, K., Ivarsson, A., & Johnson, U. (2020). “Be mindful even though it hurts”: A single-case study testing the effects of a mindfulness- and acceptance-based intervention on injured athletes’ mental health. Journal of Clinical Sport Psychology, 14(4), 399–421. https://doi.org/10.1123/jcsp.2019-0003.
Morrill, B. A., Madden, G. J., Wengreen, H. J., Fargo, J. D., & Aguilar, S. S. (2016). A randomized controlled trial of the food dudes program: Tangible rewards are more effective than social rewards for increasing short- and long-term fruit and vegetable consumption. Journal of the Academy of Nutrition and Dietetics, 116(4), 618–629. https://doi.org/10.1016/j.jand.2015.07.001.
Mueller, M. M., Piazza, C. C., Patel, M. R., Kelley, M. E., & Pruett, A. (2004). Increasing variety of foods consumed by blending nonpreferred foods into preferred foods. Journal of Applied Behavior Analysis, 37(2), 159–170. https://doi.org/10.1901/jaba.2004.37-159.
Mulqueen, D., Crosland, K. A., & Novotny, M. A. (2021). Using video modeling and video feedback to improve Olympic weightlifting technique. Behavior Analysis: Research and Practice, 21(3), 282–292. https://doi.org/10.1037/bar0000211.
Murcia, C. Q., & Kreutz, G. (2012). Dance and health: Exploring interactions and implications. In R. MacDonald, G. Kreutz, & L. Mitchell (Eds.), Music, health and wellbeing (pp. 125–135). Oxford University Press.
Nastasi, J. A., Sheppard, R. D., & Raiff, B. R. (2019). Token-economy-based contingency management increases daily steps in adults with developmental disabilities. Behavioral Interventions, 35(2), 315–324. https://doi.org/10.1002/bin.1711.
National Eating Disorders Information Centre. (2022). General information. https://nedic.ca/general-information/.
Nieto, P., & Wiskow, K. M. (2020). Evaluating adult interaction during the Step It UP! Game to increase physical activity in children. Journal of Applied Behavior Analysis, 53(3), 1354–1366. https://doi.org/10.1002/jaba.699.
Nigg, C. R., & Long, C. R. (2012). A systematic review of single health behavior change interventions vs. multiple health behavior change interventions among older adults. Translational Behavioral Medicine, 2(2), 163–179. https://doi.org/10.1007/s13142-012-0130-y.
Nolan, C. M., Maddocks, M., Canavan, J. L., Jones, S. E., Delogu, V., Kaliaraju, D., et al. (2017). Pedometer step count targets during pulmonary rehabilitation in chronic obstructive pulmonary disease. American Journal of Respiratory and Critical Care Medicine, 195(10), 1344–1352. https://doi.org/10.1164/rccm.201607-1372OC.
Normand, M. P. (2008). Increasing physical activity through self-monitoring, goal setting, and feedback. Behavioral Interventions, 23(4), 227–236. https://doi.org/10.1002/bin.267.
Normand, M. P., & Bober, J. (2020). Health coaching by behavior analysts in practice: How and why. Behavior Analysis: Research and Practice, 20(2), 108–119. https://doi.org/10.1037/bar0000171.



Practice and consultation in health, sport, and fitness

431

Normand, M. P., & Burji, C. (2020). Using the Step It UP! Game to increase physical activity during physical-education class. Journal of Applied Behavior Analysis, 53(2), 1071–1079. https://doi.org/10.1002/jaba.624. Normand, M. P., Dallery, J., & Ong, T. (2015). Applied behavior analysis for health and fitness. In H. S. Roane, J. L. Ringdahl, & T. S. Falcomata (Eds.), Clinical and organizational applications of applied behavior analysis (pp. 555–582). Academic Press/Elsevier. https://doi. org/10.1016/B978-0-12-420249-8.00022-8. Normand, M. P., Dallery, J., & Slanzi, C. M. (2021). Leveraging applied behavior analysis research and practice in the service of public health. Journal of Applied Behavior Analysis, 54(2), 457–483. https://doi.org/10.1002/jaba.832. Normand, M. P., & Donohue, H. E. (2022). Research ethics for behavior analysts in practice. Behavior Analysis in Practice. https://doi.org/10.1007/s40617-022-00698-5. Normand, M. P., & Gibson, J. L. (2020). Behavioral approaches to weight management for health and wellness. Pediatric Clinics of North America, 67(3), 537–546. https://doi. org/10.1016/j.pcl.2020.02.008. Normand, M. P., & Osborne, M. R. (2010). Promoting healthier food choices in college students using individualized dietary feedback. Behavioral Interventions, 25(3), 183–190. https://doi.org/10.1002/bin.311. O’Donnell, S. A., Philp, A. C., & Mahoney, A. (2021). Direct versus indirect competition in sports: A review of behavioral interventions. Behavior Analysis: Research and Practice, 21(3), 293–313. https://doi.org/10.1037/bar0000229. Olstad, D. L.,Vermeer, J., McCargar, L. J., Prowse, R. J., & Raine, K. D. (2015). Using traffic light labels to improve food selection in recreation and sport facility eating environments. Appetite, 91, 329–335. https://doi.org/10.1016/j.appet.2015.04.057. Orlick, T. (1986). Coach’s training manual to psyching for sport. Human Kinetics. Orlick, T. (1989). 
Reflections on sport psych consulting with individual and team sport athletes at summer and winter Olympic games. The Sport Psychologist, 3(4), 358–365. https://doi.org/10.1123/tsp.3.4.358. Page, E. J., Massey, A. S., Prado-Romero, P. N., & Albadawi, S. (2020). The use of self-monitoring and technology to increase physical activity: A review of the literature. Perspectives on Behavior Science, 43, 501–514. https://doi.org/10.1007/ s40614-020-00260-0. Papies, E. K., & Veling, H. (2013). Healthy dining: Subtle diet reminders at the point of purchase increase low-calorie food choices among both chronic and current dieters. Appetite, 61(1), 1–7. https://doi.org/10.1016/j.appet.2012.10.025. Partington, M., Cushion, C. J., Cope, E., & Harvey, S. (2015). The impact of video feedback on professional youth football coaches’ reflection and practice behaviour: A longitudinal investigation of behaviour change. Reflective Practice, 16(5), 700–716. https://doi.org/10 .1080/14623943.2015.1071707. Patel, R. R., Normand, M. P., & Kohn, C. S. (2019). Incentivizing physical activity using token reinforcement with preschool children. Journal of Applied Behavior Analysis, 52(2), 499–515. https://doi.org/10.1002/jaba.536. Pocock, T. L., Foster, T. M., & McEwan, J. S. (2010). Precision teaching and fluency: The effects of charting and goal setting on skaters’ performance. Journal of Behavioral Health and Medicine, 1(2), 93–118. https://doi.org/10.1037/h0100544. Pontone, M., Vause, T., & Zonneveld, K. L. M. (2021). Benefits of recreational dance and behavior analysis for individuals with neurodevelopmental disorders: A literature review. Behavioral Interventions, 36(1), 195–210. https://doi.org/10.1002/bin.1745. Quinn, M., Miltenberger, R., Abreu, A., & Narozanick, T. (2017). An intervention featuring public posting and graphical feedback to enhance the performance of competitive dancers. Behavior Analysis in Practice, 10(1), 1–11. https://doi.org/10.1007/ s40617-016-0164-6.

432

Applied behavior analysis advanced guidebook

Quinn, M. J., Miltenberger, R. G., & Fogel,V. A. (2015). Using TAGteach to improve the proficiency of dance movements. Journal of Applied Behavior Analysis, 48(1), 11–24. https:// doi.org/10.1002/jaba.191. Quintero, L. M., Moore, J. W.,Yeager, M. G., Rowsey, K., Olmi, D. J., Britton-Slater, J., et al. (2020). Reducing risk of head injury in youth soccer: An extension of behavioral skills training for heading. Journal of Applied Behavior Analysis, 52(1), 237–248. https://doi. org/10.1002/jaba.557. Rafacz, S. (2019). Healthy eating: Approaching the selection, preparation, and consumption of healthy food as a choice behavior. Perspectives on Behavior Science, 42, 647–674. https:// doi.org/10.1007/s40614-018-00190-y. Vorland, C. J., Bohan Brown, M. M., Cardel, M. I., & Brown, A. W. (2022). Traffic light diets for childhood obesity: Disambiguation of terms and critical review of application, food categorization, and strength of evidence. Current Developments in Nutrition, 6(3), nzac006. https://doi.org/10.1093/cdn/nzac006. Raiff, B. R., Burrows, C. A., Nastasi, J. A., Upton, C. R., & Dwyer, M. J. (2020). A behavioral approach to the treatment of chronic illnesses. In P. Sturmey (Ed.), Functional analysis in clinical treatment (2nd ed., pp. 501–532). Elsevier Academic Press. https://doi. org/10.1016/B978-0-12-805469-7.00021-8. Rasmussen, N. (2019). Downsizing obesity: On Ancel Keys, the origins of BMI, and the neglect of excess weight as a health hazard in the United States from the 1950s to 1970s. Journal of the History of the Behavioral Sciences, 55(4), 299–318. https://doi.org/10.1002/jhbs.21991. Rogers, L., Hemmeter, M. L., & Wolery, M. (2010). Using a constant time delay procedure to teach foundational swimming skills to children with autism. Topics in Early Childhood Special Education, 30(2), 102–111. https://doi.org/10.1177/0271121410369708. Rogerson, L. J., & Hrycaiko, D.W. (2002). 
Enhancing competitive performance of ice hockey goaltenders using centering and self-talk. Journal of Applied Sport Psychology, 14(1), 14–26. https://doi.org/10.1080/10413200209339008. Ronkainen, N. J., Ryba, T. V., & Selänne, H. (2019). “She is where I’d want to be in my career”: Youth athletes’ role models and their implications for career and identity construction. Psychology of Sport and Exercise, 45, 101562. https://doi.org/10.1016/j. psychsport.2019.101562. Rosado, C. M., Jansz Rieken, C., & Spear, J. (2021). The effects of heart rate feedback on physical activity during treadmill exercise. Behavior Analysis: Research and Practice, 21(3), 209–218. https://doi.org/10.1037/bar0000223. Rotta, K., Li, A., & Poling, A. (2020). Participants in behavior-analytic sports studies: Can anybody play? Behavior Analysis in Practice, 13(4), 820–825. https://doi.org/10.1007/ s40617-020-00477-0. Rozin, P., Scott, S., Dingley, M., Urbanek, J. K., Jiang, H., & Kaltenbach, M. (2011). Nudge to nobesity I: Minor changes in accessibility decrease food intake. Judgment & Decision Making, 6(4), 323–332. Rushall, B. S. (1979). Psyching in sport.The psychological preparation for serious competition in sport. Pelham. Saini, V., Jessel, J., Iannaccone, J. A., & Agnew, C. (2019). Efficacy of functional analysis for informing behavioral treatment of inappropriate mealtime behavior: A systematic review and meta‐analysis. Behavioral Interventions, 34(2), 231–247. https://doi.org/10.1002/bin.1664. Santos, F., Corte-Real, N., Regueiras, L., Dias, C., Martinek, T. J., & Fonseca, A. (2019). Coaching effectiveness within competitive youth football: Youth football coaches’ and athletes’ perceptions and practices. Sports Coaching Review, 8(2), 172–193. https://doi.or g/10.1080/21640629.2018.1459356. Schenk, M., & Miltenberger, R. (2019). A review of behavioral interventions to enhance sport performance. Behavioral Interventions, 34(2), 248–279. https://doi.org/10.1002/ bin.1659.



Practice and consultation in health, sport, and fitness

433

Scott, D., Scott, L. M., & Howe, B. L. (1998).Training anticipation for intermediate tennis players. Behavior Modification, 22(3), 243–261. https://doi.org/10.1177/01454455980223002. Shapiro, E. S., & Shapiro, S. (1985). Behavioral coaching in the development of skills in track. Behavior Modification, 9(2), 211–224. https://doi.org/10.1177/01454455850092005. Shortway, K. M., Wolanin, A., Block-Lerner, J., & Marks, D. (2018). Acceptance and commitment therapy for injured athletes: Development and preliminary feasibility of the return to ACTion protocol. Journal of Clinical Sport Psychology, 12(1), 4–26. https://doi. org/10.1123/jcsp.2017-0033. Sigurdsson, V., Larsen, N. M., & Gunnarsson, D. (2014). Healthy food products at the point of purchase: An in-store experimental analysis. Journal of Applied Behavior Analysis, 47(1), 151–154. https://doi.org/10.1002/jaba.91. Simek, T. C., O’Brien, R. M., & Figlerski, L. B. (1994). Contracting and chaining to improve the performance of a college golf team: Improvement and deterioration. Perceptual and Motor Skills, 78(3 Pt 2), 1099–1105. https://doi.org/10.2466/pms.1994.78.3c.1099. Sleiman, A. A., Sigurjonsdottir, S., Elnes, A., Gage, N. A., & Gravina, N. E. (2020). A quantitative review of performance feedback in organizational settings (1998–2018). Journal of Organizational Behavior Management, 40(3–4), 303–332. https://doi.org/10.1080/01608061.2020.1823300. Smith, R. E., Schultz, R.W., Smoll, F. L., & Ptacek, J.T. (1995). Development and validation of a muli-dimensional measure of sport-specific psychological skills: The Athletic Coping Skills Inventory-28. Journal of Sport and Exercise Psychology, 17(4), 379–398. https://doi. org/10.1123/jsep.17.4.379. Smith, S. L., & Ward, P. (2006). Behavioral interventions to improve performance in collegiate football. Journal of Applied Behavior Analysis, 39(3), 385–391. https://doi.org/10.1901/ jaba.2006.5-06. Snelling, A. M., & Kennard,T. 
(2009).The impact of nutrition standards on competitive food offerings and purchasing behaviors of high school students. The Journal of School Health, 79(11), 541–546. https://doi.org/10.1111/j.1746-1561.2009.00446.x. Solberg, K. M., Hanley, G. P., Layer, S. A., & Ingvarsson, E. T. (2007). The effects of reinforcer pairing and fading on preschoolers' snack selections. Journal of Applied Behavior Analysis, 40(4), 633–644. https://doi.org/10.1901/jaba.2007.633-644. Sonnenberg, L., Gelsomin, E., Levy, D. E., Riis, J., Barraclough, S., & Thorndike, A. N. (2013). A traffic light labeling intervention increases consumer awareness of health and healthy choices at the point-of-purchase. Preventive Medicine, 57(4), 253–257. https:// doi.org/10.1016/j.ypmed.2013.07.001. Stamos, A., Lange, F., & Dewitte, S. (2018). Promoting healthy drink choices at school by means of assortment changes and traffic light coding: A field study. Food Quality and Preference, 71, 415–421. https://doi.org/10.1016/j.foodqual.2018.08.016. Stedman-Falls, L. M., & Dallery, J. (2020). Technology-based versus in-person deposit contract treatments for promoting physical activity. Journal of Applied Behavior Analysis, 53(4), 1904–1921. https://doi.org/10.1002/jaba.776. Stokes, J. V., & Luiselli, J. K. (2010). Functional analysis and behavioral coaching intervention to improve tackling skills of a high school football athlete. Journal of Clinical Sport Psychology, 4, 150–157. Stokes, J.V., Luiselli, J. K., Reed, D. D., & Fleming, R. K. (2010). Behavioral coaching to improve offensive line pass-blocking skills of high school football athletes. Journal of Applied Behavior Analysis, 43(3), 463–472. https://doi.org/10.1901/jaba.2010.43-463. Sykes-Muskett, B. J., Prestwich, A., Lawton, R. J., & Armitage, C. J. (2015).The utility of monetary contingency contracts for weight loss: A systematic review and meta-analysis. Health Psychology Review, 9(4), 434–451. https://doi.org/10.1080/17437199.2015.1030685. 
Szabo, T. G., Willis, P. B., & Palinski, C. J. (2019). Watch me try: ACT for improving athletic performance of young adults with ASD. Advances in Neurodevelopmental Disorders, 3, 434–449. https://doi.org/10.1007/s41252-019-00129-7.

434

Applied behavior analysis advanced guidebook

Tai, S., & Miltenberger, R. (2017). Evaluating behavioral skills training to teach safe tackling skills to youth football players. Journal of Applied Behavior Analysis, 50(4), 849–855. https://doi.org/10.1002/jaba.412. Tiger, J. H., & Hanley, G. P. (2006). Using reinforcer pairing and fading to increase the milk consumption of a preschool child. Journal of Applied Behavior Analysis, 39(3), 399–403. https://doi.org/10.1901/jaba.2006.6-06. Tomiyama, A. J., Hunger, J. M., Nguyen-Cuu, J., & Wells, C. (2016). Misclassification of cardiometabolic health when using body mass index categories in NHANES 2005-2012. International Journal of Obesity, 40(5), 883–886. https://doi.org/10.1038/ijo.2016.17. Tremblay, M. S., Aubert, S., Barnes, J. D., Saunders, T. J., Carson, V., Latimer-Cheung, A. E., et  al. (2017). Sedentary behavior research network (SBRN) - terminology consensus project process and outcome. International Journal of Behavioral Nutrition and Physical Activity, 14(75), 1–17. https://doi.org/10.1186/s12966-017-0525-8. Trucil, L. M.,Vladescu, J. C., Reeve, K. F., DeBar, R. M., & Schnell, L. K. (2015). Improving portion-size estimation using equivalence-based instruction. The Psychological Record, 65, 761–770. https://doi.org/10.1007/s40732-015-0146-z. Tudor-Locke, C., Craig, C. L., Brown,W. J., Clemes, S. A., De Cocker, K., Giles-Corti, B., et al. (2011). How many steps/day are enough? For adults. International Journal of Behavioral Nutrition and Physical Activity, 8, 79. https://doi.org/10.1186/1479-5868-8-79. United States Department of Health and Human Services. (2018). Physical activity guidelines for Americans (2nd ed.).Washington, D.C.: U.S. Department of Health and Human Services. https://health.gov/our-work/nutrition-physical-activity/physical-activity-guidelines/ current-guidelines. Van Camp, C. M., & Berth, D. (2017). Further evaluation of observational and mechanical measures of physical activity. Behavioral Interventions, 33(3), 284–296. https://doi. 
org/10.1002/bin.1518. Van Camp, C. M., Blejewski, R. C., Ruby, A. D., & Gordon, L. E. (2021). Physical activity in children: An evaluation of an individualized heart rate assessment. Behavior Analysis: Research and Practice, 21(3), 195–208. https://doi.org/10.1037/bar0000212. Van Camp, C. M., & Hayes, L. B. (2012). Assessing and increasing physical activity. Journal of Applied Behavior Analysis, 45(4), 871–875. https://doi.org/10.1901/jaba.2012.45-871. van der Ploeg, H. P., & Bull, F. C. (2020). Invest in physical activity to protect and promote health: The 2020 WHO guidelines on physical activity and sedentary behaviour. International Journal of Behavioral Nutrition and Physical Activity, 17, 145. https://doi. org/10.1186/s12966-020-01051-1. VanWormer, J. J. (2004). Pedometers and brief e-counseling: Increasing physical activity for overweight adults. Journal of Applied Behavior Analysis, 37(3), 421–425. https://doi. org/10.1901/jaba.2004.37-421. Wadden, T. A., Webb, V. L., Moran, C. H., & Bailer, B. A. (2012). Lifestyle modification for obesity: New developments in diet, physical activity, and behavior therapy. Circulation, 125(9), 1157–1170. https://doi.org/10.1161/CIRCULATIONAHA.111.039453. Wang, Y., Beydoun, M. A., Liang, L., Caballero, B., & Kumanyika, S. K. (2008). Will all Americans become overweight or obese? Estimating the progression and cost of the US obesity epidemic. Obesity, 16(10), 2323–2330. https://doi.org/10.1038/oby.2008.351. Warburton, D. R., Jamnik, V. K., Bredin, S. D., & Gledhill, N. (2011). The physical activity readiness questionnaire for everyone (PAR-Q+) and electronic physical activity readiness medical examination (ePARmed-X+). The Health & Fitness Journal of Canada, 4(2), 3–17. https://doi.org/10.14288/hfjc.v4i2.103. Ward, P., & Carnes, M. (2002). Effects of posting self-set goals on collegiate football players’ skill execution during practice and games. Journal of Applied Behavior Analysis, 35(1), 1–12. 
https://doi.org/10.1901/jaba.2002.35-1.



Practice and consultation in health, sport, and fitness

435

Washington, W. D., Banna, K. M., & Gibson, A. L. (2014). Preliminary effects of prize-based contingency management to increase activity levels in healthy adults. Journal of Applied Behavior Analysis, 47(2), 231–245. https://doi.org/10.1002/jaba.119. Whelan, C. M., & Penrod, B. (2019). An evaluation of sequential meal presentation with picky eaters. Behavior Analysis in Practice, 12, 301–309. https://doi.org/10.1007/ s40617-018-00277-7. Wong, S. L., & Leatherdale, S. T. (2009). Association between sedentary behavior, physical activity, and obesity: Inactivity among active kids. Preventing Chronic Disease, 6(1), 26. http://www.cdc.gov/pcd/issues/2009/jan/07_0242.htm. World Health Organization. (2020). Guidelines on physical activity and sedentary behaviour.World Health Organization. https://www.who.int/publications/i/item/9789240015128. World Health Organization. (2021a). Noncommunicable diseases (Fact Sheet) https://www. who.int/news-room/fact-sheets/detail/noncommunicable-diseases. World Health Organization. (2021b). Obesity and overweight (Fact Sheet) https://www.who. int/news-room/fact-sheets/detail/obesity-and-overweight. Williamson, D. A. (2017). Fifty years of behavioral/lifestyle interventions for overweight and obesity: Where have we been and where are we going? Obesity, 25(11), 1867–1875. https://doi.org/10.1002/oby.21914. Worthen, D., & Luiselli, J. K. (2016). Attitudes and opinions of female high school athletes about sports-focused mindfulness training and practices. Journal of Clinical Sport Psychology, 10, 177–191. Wysocki, T., Hall, G., Iwata, B., & Riordan, M. (1979). Behavioral management of exercise: Contracting for aerobic points. Journal of Applied Behavior Analysis, 12(1), 55–64. https:// doi.org/10.1901/jaba.1979.12-55. Yong, J.Y., Tong, E. M., & Liu, J. C. (2020). When the camera eats first: Exploring how mealtime cell phone photography affects eating behaviours. Appetite, 154, 104787. https:// doi.org/10.1016/j.appet.2020.104787. 
Zarate, M., Miltenberger, R., & Valbuena, D. (2019). Evaluating the effectiveness of goal setting and textual feedback for increasing moderate-intensity physical activity in adults. Behavioral Interventions, 34(4), 553–563. https://doi.org/10.1002/bin.1679. Zerger, H. M., Miller, B. G., Valbuena, D., & Miltenberger, R. G. (2017). Effects of student pairing and public review on physical activity during school recess. Journal of Applied Behavior Analysis, 50(3), 529–537. https://doi.org/10.1002/jaba.389.

This page intentionally left blank

CHAPTER 17

Conducting and disseminating research

James K. Luiselli(a), Frank Bird(b), Helena Maguire(a), and Rita M. Gardner(b)

(a) Clinical Development and Research, Melmark New England, Andover, MA, United States
(b) Melmark, Berwyn, PA, United States

Research informs the practice and professional development of behavior analysts in many ways. Consider, for example, how much research you have sampled from reading journal articles and book chapters and from listening to presentations at conferences and continuing education (CE) events. These literature and didactic experiences communicated evidence-based methods that have guided your work with children, youth, and adults, as well as professional training, consultation, and systems evaluation within human services settings. Suffice it to say, research-to-practice translation is fundamental to the discipline of applied behavior analysis (ABA), to acquiring requisite skills, and to fulfilling ethical codes of conduct pertaining to scope of competence (Behavior Analysis Certification Board, 2020). However, most behavior analysts do not conduct and disseminate research, despite the many advantages it brings to our field. In this chapter, we discuss how research can be integrated within routine practice and reported to the professional community. The chapter outlines actual and perceived barriers to implementing research, solutions for overcoming those impediments, and recommendations that can be followed effectively in most situations. We focus on research dissemination through publication and public speaking, including procedures to promote success in both endeavors.

Applied Behavior Analysis Advanced Guidebook, https://doi.org/10.1016/B978-0-323-99594-8.00017-9. Copyright © 2023 Elsevier Inc. All rights reserved.

The behavior analyst as researcher

In an effort to encourage research among behavior analysts, Kelley et al. (2015) interviewed seven authors without academic affiliations who had published articles in the Journal of Applied Behavior Analysis, the Journal of Organizational Behavior Management, and Behavior Analysis in Practice. The interviews addressed the factors that promoted and interfered with research productivity in applied settings. Interviewees commented that research can be facilitated by turning clinical work into disseminated projects whenever possible. A key to this recommendation is enlisting practitioners in research design and implementation that is consistent with routine practice. As discussed later in the chapter, it is useful to formalize “research time” among the responsibilities of supervising professionals, particularly the work devoted to organizing, completing, and reporting research projects.

The authors interviewed in Kelley et al. (2015) identified limited resources, including space and equipment, as a barrier to conducting research. Lack of administrative support was another challenge, encountered when dedicated time for research is not recognized and the relevance of research is devalued among employees. Other limitations common to applied settings were lack of financial compensation for research, absence of a research review committee (RRC), and the demands imposed by dissemination, such as writing manuscripts and submitting conference presentation proposals.

The interviewees' first recommendation was to engage in “face-to-face” contact with peer researchers in order to choose and collaborate on reasonable projects that are likely to succeed. Second, for data recording, research practitioners should focus on practical methods that can be implemented reliably by natural care providers. Third, time has to be set aside to plan, conduct, and write up the results of research projects, ideally on a weekly schedule. This recommendation also suggests that, with regard to research dissemination via publication, manuscripts should be written collaboratively and persons in charge must “stay close to your project from the initial planning stage through journal submission” (p. 15). Three other conclusions from Kelley et al. (2015) deserve mention.
When considering publication, practitioners might aim for journals with submission categories devoted to practice-focused articles. For example, “brief practices” are offered in Behavior Analysis in Practice, “reports from the field” in the Journal of Organizational Behavior Management, “concise reviews” in the Journal of Applied Behavior Analysis, and “brief reports” in Behavioral Interventions. There should be motivation to write such articles because the criteria that define these categories are less rigid than those for methodologically sophisticated experimental studies, editorial reviews can be more flexible, and manuscript page-length and word-count requirements are reduced. Next, the contingencies that operate in academia to promote research productivity do not exist in applied settings. That is, promotions, employment security (tenure), and financial support from grants are absent.




Kelley et al. (2015) emphasized that applied research will be underrepresented until sources of reinforcement for practitioner behavior are identified. In a later section of the chapter, we present several recommendations to incentivize research activity and dissemination in service settings. Finally, practitioner-directed research should be included as a topic in behavior analysis graduate training programs. Guidance in how to conduct and report research is a valuable addition to coursework, field placement, and supervision. The same topic is relevant to programs of continuing education mandated for behavior analysis certification (Behavior Analysis Certification Board, 2022). Administrative service providers can also consider research training as a component of professional development in their settings (Love, Carr, LeBlanc, & Kisamore, 2013; Luiselli, Gardner, Bird, Maguire, & Harper, 2022; Maguire, Gardner, Bird, & Luiselli, 2022a). This tactic builds a reputation that further attracts practitioners interested in research.

Additional support for conducting research in applied settings comes from a report by Valentino and Juanico (2020). The participants were 834 respondents to an online survey of behavior analysts credentialed as a BCBA (79.8%), BCBA-D (14.4%), and BCaBA (5.5%). The majority of respondents had been certified between 4 and 5 years, and 90.7% served persons with autism spectrum disorder (ASD). With regard to research opportunities at their settings, 68.1% of respondents replied that they had not participated in projects while 31.8% had some involvement. Despite the relatively low participation percentage, 71.8% and 69.6% of respondents expressed interest in publishing and presenting research, respectively. Valentino and Juanico (2020) inquired about barriers to research participation, and 47.5% of respondents endorsed lack of time.
The next most common obstacle was competing contingencies (26.7%), followed by the absence of a research leader (13.2%), research mentorship (12.5%), research opportunity (11.9%), and a research community (5.3%). A follow-up question asked whether respondents would conduct research if no barriers existed. The results were telling in that 83.8% responded affirmatively. Concerning a survey item about the importance of research, respondents judged that it was important (36.3%), somewhat important (34.2%), very important (19.5%), and not at all important (9.7%). Thus, a combined 90% of behavior analysts completing the survey recognized the relevance of applied research within the scope of professional practice.

Valentino and Juanico (2020) concluded that the practitioner workforce they sampled expressed interest in contributing to research but that motivation was thwarted by several barriers. However, “if those barriers did not exist, an overwhelming majority of practitioners would conduct research in their clinical settings” (p. 897). We found it noteworthy that the barriers Valentino and Juanico (2020) identified paralleled many of the constraints Kelley et al. (2015) discussed. These similar findings lend validity to the perceived impediments and underscore the need to overcome them. Notably, protection of research participants is paramount, which human services settings can ensure by evaluating the strength of their research infrastructure (e.g., resources, expertise, management) and establishing research review and institutional review board committees. Constraints on research can be overcome if dedicated time is approved, the setting sets small achievable goals, and productivity is evaluated. Building a research culture demands leadership support, identifying capable mentors, consulting with experts, and assembling employees into project teams. A major focus is to find research opportunities within the multiple facets of clinical practice.

Implementation strategies

As it pertains to the recommendations and strategies proposed in this chapter, behavior analysts have two interrelated roles. The first is to advocate for research in applied settings by appealing to administrative and operations managers. The objectives of advocacy are to delineate the benefits of conducting and disseminating research while building support at the highest levels and sustaining productivity. The second is to contribute to the design and implementation of a research program and to evaluate its effectiveness. The following implementation strategies emphasize behavior analysts as advocates for, and architects of, applied research in service settings. We highlight building leadership direction and support, merging research with practice, establishing research teams, instituting formal research review, arranging incentives that motivate research activity, and planning dissemination activities with productive outcomes.

Leadership direction and support

Regardless of setting, leaders and administrators must be convinced of the value research brings to service-recipients, employees, stakeholders, and the professional community. For one, applied research is particularly valuable because it occurs under naturalistic conditions. The more that research approximates the “real world” of practice, the more likely findings
are generalizable and important to practitioners. Put succinctly, what better way to learn about and use the most effective behavioral methods than from the professionals who design, implement, supervise, and evaluate them?

Second, conducting research contributes to personnel training, a common activity of behavior analysts that benefits service provision (DiGennaro Reed, Hirst, & Howard, 2013; Lerman, LeBlanc, & Valentino, 2015; Reid, 2017). Imagine a research project carried out by teachers at a school for students with intellectual and developmental disabilities. The teachers would be trained to conceptualize a research question that pertains to performance concerns, such as improving the integrity of academic instruction or intervening to reduce challenging behavior of students in their classrooms. Other training foci would be selecting dependent measures, recording data, assessing interobserver agreement (IOA), and following a suitable research design (Kazdin, 2011). Further teacher training could be directed at functional behavior assessment (FBA), data analysis, intervention formulation, procedural fidelity, and social validity. Through research participation, the teachers will acquire skills that make them more effective educators and produce successful students.

Love et al. (2013) exemplifies one approach to research training with practitioners. The participants were 24 clinical supervisors and senior therapists at a service setting for children with autism. The setting arranged seminars with the participants that approximated a graduate-level course in single-case research methods. Training content was presented in eight modules with accompanying lectures, visual media, and reading assignments focused on (a) measurement, (b) IOA assessment, (c) intervention integrity, (d) data recording, (e) experimental methodology, (f) graphing, and (g) ethics. The training model approached each topic through didactic instruction combined with performance measures.
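As a concrete illustration of the measurement and IOA content such training typically covers, interval-by-interval IOA is conventionally computed as the number of intervals on which two observers agree, divided by the total number of intervals, multiplied by 100. Here is a minimal sketch in Python; the function name and the ten-interval records are invented for illustration:

```python
def interval_ioa(observer_a, observer_b):
    """Interval-by-interval IOA: intervals with agreement / total intervals * 100."""
    if len(observer_a) != len(observer_b):
        raise ValueError("Both observers must score the same number of intervals")
    agreements = sum(a == b for a, b in zip(observer_a, observer_b))
    return 100 * agreements / len(observer_a)

# Hypothetical records from two observers across ten intervals
# (1 = target behavior scored in the interval, 0 = not scored).
primary = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
secondary = [1, 0, 1, 0, 0, 0, 1, 0, 1, 1]
print(f"IOA: {interval_ioa(primary, secondary):.1f}%")  # prints "IOA: 90.0%"
```

Training of this kind would also introduce more conservative variants (e.g., scored-interval or exact-count-per-interval IOA), which follow the same agreement-over-intervals logic.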
Love et al. (2013) reported that, as a result of training, participants acquired research skills demonstrated on knowledge tests and competency evaluations. The participants also rated training positively, and the setting produced more research activities and conference presentations in posttraining years. To further accelerate the benefits of training, it was advised that “on the job training models in which a seasoned researcher is paired with an inexperienced one may offer the organization some flexibility in terms of the intensity of training” (p. 155).

Third, as elucidated by Kelley et al. (2015) and Valentino and Juanico (2020), many behavior analysts are interested in conducting and disseminating research. Providing these opportunities within service settings should be brought to the attention of leadership. In addition to the practice


implications emphasized previously, conducting and disseminating research addresses the professional aspirations and career development of many behavior analysts, which can be aligned with CE pursuits and future employment objectives. In a most pragmatic way, research strengthens professional competencies, improves practice and outcomes with service-recipients, and builds a strong CV! Yet another salient point is that behavior analysts conducting research adds visibility and prestige to service settings. A positive reputation based on research productivity can forge relationships with colleges and universities toward student field placements, affiliations with faculty, and joint funding. A setting known for high-quality research will attract capable behavior analysts and other professionals in search of advancement. Similarly, the opportunity to do research contributes to retention of valued employees committed to best practices. There are even business advantages to research when referral sources recognize settings known for meaningful research-to-practice translation.

Merging research and practice

Valentino and Juanico (2020) proposed that “Many of the problems that practitioners face in their daily clinical activities would make excellent research questions” (p. 899). The merging of behavior analysis practice and research is reasonable in that both activities require similar competencies. For example, when treating a child or adult with challenging behavior, you would be expected to conduct measurement, assess IOA, document baseline responding, derive intervention from functional assessment, evaluate outcome, and document procedural integrity. With preintervention planning, you could decide on a single-case design that is sensitive to clinical priorities as well as steps toward generalization and maintenance. These clinical practices are no different from research exigencies in most cases. Put another way, research can be viewed as a process for formally evaluating most facets of service delivery. In illustration, some human services research topics that have been studied and published include assessment-measurement (Gil & Carter, 2016; Graff & Karsten, 2012; Lipshultz, Vladescu, Reeve, Reeve, & Dipsey, 2015), attendance (Luiselli et al., 2009; Merritt, DiGennaro Reed, & Martinez, 2019), health and wellness (Lillie, Harman, Hurd, & Smalley, 2021; Shlesinger, Bird, Duhanyan, Harper, & Luiselli, 2018), instruction-teaching (Belisle, Rowsey, & Dixon, 2016; Bowe & Sellers, 2018), environmental care (Carr, Wilder, Majdalany, Mathisen, & Strain, 2013; Goings, Carr, Maguire, Harper, & Luiselli, 2019),




personnel training (Erath et al., 2020; Parsons, Rollyson, & Reid, 2013), and safety (Cruz et al., 2019; Ditzian, Wilder, King, & Tanz, 2015; Gianotti, Kahl, Harper, & Luiselli, 2021).

There is guidance to help distinguish research from practice. The Belmont Report (Department of Health, Education, and Welfare, 1979) clarified that practice generally refers to procedures that will enhance the wellbeing of individuals with reasonable expectation of success, whereas research is designed to test hypotheses and contribute to generalizable knowledge and theory. A departure from standard and accepted practice does not necessarily constitute research, and the two domains are often integrated when evaluating the safety and efficacy of procedures. Other regulatory requirements provide exemptions for research concerned with commonly accepted educational practices, comparative effectiveness of instructional techniques, curricula, tests, surveys, and publicly available and identity-protected data (Department of Health and Human Services Title 45-Part 46 Common Rule; https://www.hhs.gov).

More insight into merging research and practice can be found in LeBlanc, Nosik, and Petursdottir (2018), who contrasted treatment evaluation to treatment research. Treatment evaluation emphasizes the needs of clients, is not intended to fill a knowledge gap, reflects everyday clinical work, and entails retrospective review of previously collected data. Conversely, treatment research concentrates on generalizable knowledge, limitations found in the published literature, methods that exceed routine practice, and rigorous experimental methodologies. Like other research guidelines, LeBlanc et al. (2018) specified that commonly accepted practices and review of archival data are exempt from formalized research review, discussed later in the chapter.
Understanding the defining characteristics of research and practice should enable behavior analysts to design projects that produce useful data for the professional community. We suggest several strategies toward this end, with behavior analysts taking the lead as senior clinicians, supervisors, program managers, and consultants on research projects.

1. Because conducting research is such a collaborative effort, propose topics that resonate with your interests and those of your peers. For example, if you hold a supervisory position at a residential school for students with learning and behavior challenges, many direct care providers would want to study methods that improve instruction, increase social skills, and reduce interfering problems. Staff at an early childhood center may be attracted to assessment strategies and approaches for engaging more purposefully with families. Topics of high interest motivate individuals to contemplate research activity they might otherwise discount.


2. Research projects will develop when they are aligned with the mission of service settings: the advancement of children and adults, employees, families, and society-at-large. Behavior analysts should confer with leaders and administrators to explain how research in the best interests of service-delivery can be conceived, planned, and executed. As an illustration, we evaluated personnel training and performance management with human services care providers required to implement risk-mitigation protocols in response to the early stages of the COVID-19 pandemic (Maguire, Harper, Gardner, & Luiselli, 2022b). This research included social validity assessment, which measured satisfaction and approval of procedures by training and performance management supervisors to further evaluate organizational preparedness for and handling of this unanticipated health crisis.

3. Behavior analysts in practice can achieve research success when service-oriented projects have a reasonable chance of success. Even with administrative support, be sure your setting has the necessary resources, materials, personnel, and time for research. For example, is there data recording and graphing software, which staff positions enable participation in research, when can research meetings be scheduled without interrupting services, and is there access to the peer-review literature (Mattson, 2017)?

4. Another step toward a fruitful research agenda is targeting topics that appear in contemporary publications and are presented at conferences, training seminars, and CE events. Pursuing areas of current interest within the behavior analysis community increases the probability that presentation proposals and publication submissions will be approved by reviewers.

5. Your research can also begin with or concentrate on clinical cases that document meaningful skill acquisition and behavior reduction.
Personnel training is also commonplace among behavior analysts and can be shaped into research projects. Another strategy within applied settings is to conduct replication studies, which may be easier to formulate and complete compared to more elaborate and experimentally sophisticated endeavors. Again, start by considering research that matches closely with what you do in practice.

In summary, functioning as a scientist-practitioner enables you to merge research and practice in productive ways. Our recommended strategies capitalize on the motivations of behavior analysts who are trained to approach service with an empirical eye fixed on solutions to problems that




can be evaluated objectively and are socially valid. Demystifying research as distinct from practice is both a self-assessment task and a behavior you should model among professional colleagues.

Research teams

Research teams can be assembled at human services settings, composed of employees who have similar interests and a desire to pursue them collaboratively. One role for behavior analysts is to serve as a team chairperson with specific performance responsibilities and objectives. At the earliest stages, teams meet to discuss goals, operations, and direction. Team composition depends on individuals who can commit time to research projects and contribute to formulation, implementation, supervision, and dissemination. Leadership support, discussed previously, is essential for research teams to function productively and in line with service delivery priorities. The practical steps of research teams deciding about projects, enlisting participants, and managing performance enhance collaboration among members and are a source of motivation. Many persons with research interests, for example, may have little-to-no relevant experience but with the support of collaborative peers are able “to learn the ropes” comfortably. We recommend staffing research teams with permanent members so that projects can be managed by familiar contributors while also encouraging persons to join at will. This open enrollment attracts individuals who otherwise may not recognize the value of research. A team concept toward research further exposes members to training on many levels such as selecting topics, posing questions, completing proposals, designing protocols, analyzing data, preparing presentations, and writing manuscripts.

There are two strategies behavior analysts can adopt to focus research teams on projects, starting with clinical cases that have been completed and may qualify for conference presentation or publication submission. In illustration, team members might have implemented procedures to improve a student’s compliance with transition requests during the school day.
Perhaps they designed an innovative antecedent intervention subsequent to a baseline phase and included postintervention follow-up. If data in this case were strong, the team could consider preparing a report for one or more practitioner-focused journals. In this example, the “research” was not prospective but the case description could be a viable contribution to the literature. Many human services professionals have such cases that, with retrospective review, can be selected as research team projects worthy of dissemination.


A second strategy is focusing a research team on writing review papers that may lead to research projects. An example here is team members who are interested in designing a study that improves implementation integrity of care providers. The first step in such a project would be surveying the literature from journal articles and book chapters, summarizing findings, and rendering practice conclusions. Having this information in hand lends itself to a formalized review suitable for publication. Although a review does not constitute research, it is applicable to the research literature and another way to familiarize human services professionals with dissemination opportunities.

Directing research teams to survey the literature is consistent with journal clubs and reading groups that advance the professional knowledge of practitioners (DiGennaro Reed et al., 2021; Mattson, 2017; Parsons & Reid, 2011). Team, club, and group members select publications bearing on their work, complete reviews, and present findings. A standardized review should describe the objectives of a published study, methodology, experimental design, and outcomes as well as relative strengths and weaknesses of the research. The presenter is queried about the contribution of the study in relation to the current literature, how it could have been improved, and recommendations for future research on the topic. These forums are usually highly regarded by participants (Parsons & Reid, 2011) and a practical way to organize learning opportunities that promote formation of human services research programs (Luiselli et al., 2022).

Research teams and dissemination activity usually are most effective when meetings occur on a regular schedule. A monthly meeting lasting 60 to 90 minutes should be possible at most settings.
Behavior analysts who serve as research team chairpersons prepare a meeting agenda month-to-month with input from participants, distribute meeting invitations, record meeting notes, and summarize action plans. As shown in Fig. 1, a meeting agenda-summary form structures this content and can be delivered to participants as a concise report of ongoing projects and plans.

Research ethics

Behavior analysts must be cognizant of practice and research ethics within human services settings (Cox, Brodhead, & Quigley, 2022; Pollard, Quigley, & Woolf, 2021). Recall that Kelley et al. (2015) and Valentino and Juanico (2020) identified absence of committee review as a barrier to practitioner-conducted applied research. Accordingly, ethics education is another




Meeting Date:
Research Group:
Chairperson:
Attendees:

AGENDA ITEMS

Review-Update of Approved Research Projects
  Project:                    Supervisor:
  Discussion Points-Action Plans:
    1:
    2:
    3:
    4:

Review of New Research Proposals
  Proposed Project:           Supervisor:
  Status:  Reject / Revise-Resubmit / Accept w/Revision / Accept
  Discussion Points-Action Plans:

Presentations-Publications
  Title-Authorship:
  Targeted Presentation:
  Targeted Publication:
  Status-Action Plans:

Fig. 1  Research team meeting agenda-summary notes.

strategy behavior analysts should pursue at human services settings in order to make research a reality among the workforce. LeBlanc et al. (2018) advised behavior analysts about the process of obtaining research review and approval within clinical settings. Following the model of an Institutional Review Board (IRB), organizations can form internal research review committees (RRCs) to guarantee consumer protections amidst vulnerable populations. An RRC should have a minimum of five members with defined responsibilities, reflect diversity of group composition, and include internal


and external participants. Committee members are selected who demonstrate professional competence to review all aspects of proposed and approved organizational research. Having nonaffiliated and nonscientific members brings the unbiased perspective of average citizens to bear on the merits of research. The general functions of an RRC are to thoroughly examine the relative strengths and weaknesses of research proposals, weigh risks and benefits, guard against potential conflicts of interest, protect research participants, and ensure that projects do not deviate from human services objectives. Some areas of research appraised earlier are exempt from review, principally conventional educational practices, tests and surveys, and analysis of de-identified archival records. LeBlanc et al. (2018) designated expedited review for research that presents no-to-minimal risk and inquiry that is within the DHHS list of categories. The process of full review is required for all other research proposals, with committee members voting to approve, disapprove, or request revisions that receive further consideration.

The operation of an RRC must follow policies and procedures that define guidelines governing committee composition, establish written protocols, and engage in research knowledge and performance training activities. Further recommendations by LeBlanc et al. (2018) are for human services settings to seek consultation from established researchers and collaborate with similarly minded agencies, local colleges, and behavior analysis associations. An RRC requires a competent chairperson who is able to coordinate meetings, agendas, policy mandates, reporting, training, and records maintenance. Most critically, “the RRC should remain focused on oversight of ethics and human participant protections rather than specific context of the project” (p. 454).
Thus, behavior analysts interested in conducting and disseminating research should be knowledgeable about the ethical mandate of consumer protection. Research review establishes a process of procedural safeguards for evaluating the appropriateness of proposals that benefit service-recipients, organizations, and other consumers. As emphasized throughout this chapter, behavior analysts are able to design and contribute to systems which make it possible to promote research at the highest levels. Helping form and operate an RRC is one such example, consistent with efforts to gain leadership support for research and build research teams fully integrated with standards of exemplary practice.




Writing for publication and public speaking

Research dissemination through oral presentation and publication requires written products in the form of conference proposals, workshop outlines, and manuscripts. It is necessary to establish a writing repertoire in order for your research to reach professional audiences. Luiselli (2017) recommended that behavior analysts focus on writing not only to produce presentations and publications but as a means toward knowledge acquisition, an activity that improves expository skills, and continuing education. However, many practitioners consider writing an “add on” burden to their already busy practice schedules. They also cite inexperience with public speaking and publishing as reasons for inactivity. And yet, there are behavioral strategies to overcome resistance to writing and build fundamental skills (Johnson, Perrin, Salo, Deschaine, & Johnson, 2016; McDougall, 2006; Porritt, Burt, & Poling, 2006; Skinner, 1981). Several general audience and academic writing guides which offer practical advice are other helpful resources (King, 2010; Lamott, 2007; Silvia, 2018; Zinsser, 2012).

Designing a writing plan begins with conscientious reading of the professional literature. You should set aside dedicated time each month to read books, edited book chapters, and journal articles on topics of personal interest and topics aligned with the interests of research team members. The activity of reading enables you to sample the direction of contemporary research and areas represented in empirical studies and reviews. Through reading you also learn about writing basics such as narrative construction and order within manuscripts. As well, studying the writing style of authors is a fruitful exercise for self-development and there is nothing wrong with emulating the behavior of highly regarded models!

Carr and Briggs (2010) and Mattson (2017) suggested several tactics to access the peer-reviewed literature.
Personal subscriptions to journals, individually or through an organization, provide regular contact although cost is involved. Most journals provide “contents alerts” that list the articles in recent issues, and it is a relatively easy task to request copies from authors via email. Other free options are to search reputable journals that offer open-access articles and sites such as PubMed and ResearchGate. Depending on college and university status, you may retain library privileges which permit access to publications across multiple media. Journal clubs and reading groups are other vehicles for obtaining, sharing, and reading published research.

With reference to journals, use your reading to confirm the types of articles that are published. Letters-to-the-Editor, case reports, single-case


and group-design studies, surveys, reviews, program descriptions, and position papers are choices, but the selection varies from journal to journal. Being clear about the types of articles a journal accepts will avoid devoting time to manuscript preparation and submission that is unsuccessful due to a misguided choice of journal. This detail is readily available from the “Purpose and Scope” and “Information for Authors” sections that all peer-reviewed journals post on their websites.

Writing, like other behavior, is influenced by antecedent and consequence variables that evoke, shape, and maintain it. An essay by Skinner (1981) titled “How to Discover What You Have to Say” is particularly instructive about arranging contingencies to support productive writing. The steps below summarize briefly a writing plan grounded in learning principles.

1. Schedule “writing time” the same way you arrange mandatory activities such as program meetings with staff, observations, supervision contacts, parent conferences, and consultation visits. These activities form the backbone of your practice responsibilities and you would not overlook or postpone them cavalierly. With the same intent, plan writing strategically in your schedule.

2. Set aside one hour for writing each week that can be fitted to your activity schedule, be it morning, afternoon, or evening, as long as the time will not be compromised. If you are serious about writing, finding and allocating one hour in the week should not be difficult.

3. The setting for your one-hour weekly writing session is important: pick a location that is free from distractions, comfortable, and contains all the materials needed to be productive. Be sure the location allows you to get the work done without interruption, whether the office, home, library, meeting room, or favorite café-eatery.

4. Beyond scheduled writing time and writing location, be prepared to write “opportunistically.” That is, take advantage of unexpected writing time made available when meetings are canceled and missed appointments occur. Even a few extra minutes devoted to writing each week increases productivity. And yes, added writing is possible during evenings and on weekends at home!

5. Poor preparation stalls and interferes with performance. Facilitative prewriting tasks include setting achievable goals each writing session, working from an outline, assembling writing materials, and having copies of content-relevant publications (hardcopy or electronic) accessible.




6. Writing is a sequence of successive approximations in which sentences and paragraphs are shaped over time before a final product is achieved. Hence, presentation proposals and manuscripts should be written in small segments and gradually pieced together as drafts “for your eyes only”. Editing, re-writing, editing, and re-writing some more is the key to good writing formed from beginning steps.

7. Regarding goal-setting, you can choose words written per session, pages produced, or simply time spent writing per weekly schedule. Whatever the goal, recording and plotting writing data function as performance feedback and objective progress monitoring.

8. Reinforcement of writing, such as acceptance of a conference presentation proposal or a manuscript submitted for publication, is delayed, typically many months from inception. Treating yourself to a favorite snack, watching a movie, or having dinner with friends are more immediate self-managed consequences you can arrange contingent on completion of incremental writing tasks.

9. Most research writing is done collaboratively among contributing supervisors and colleagues. Assigning a lead author to coordinate tasks and deadlines is advisable. DiGennaro Reed et al. (2021) commented on establishing writing groups at human services settings to support collaboration among less experienced researchers. A writing group makes it possible to set mutually acceptable goals and hold members accountable for assignments. Writing groups can be an offshoot of research teams or integrated with them.

10. When the first draft of a presentation proposal or manuscript is completed, put it aside for a brief period and come back for a next round of review and revision. A strategic pause will freshen your perspective on what was written and sharpen subsequent editing.

11. Consider sending your final draft of a written product to objective third-party readers who can provide feedback and suggestions. This step is especially helpful when the people you seek out have presented and published previously.

12. The foundation of a writing plan is that it works for you through continuous monitoring, adjustments as needed, personal reflection, and acceptance. Writing guidelines and recommendations notwithstanding, they must be suited to personal preferences in order to be successful. Maintaining a writing schedule faithfully that puts words to paper is essential, but how you get there assumes individuality and flexibility.
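The recording-and-plotting recommendation in point 7 can be sketched as a simple self-monitoring routine. This is an illustrative example only; the session log, weekly goal value, and function name are hypothetical, not drawn from the chapter:

```python
from datetime import date

# Hypothetical weekly writing log: (session date, words written)
sessions = [
    (date(2023, 1, 6), 250),
    (date(2023, 1, 13), 180),
    (date(2023, 1, 20), 410),
    (date(2023, 1, 27), 320),
]

def progress_summary(log, weekly_goal=200):
    """Summarize a writing log for performance feedback:
    total words, sessions meeting the goal, and a cumulative record."""
    total = sum(words for _, words in log)
    goal_met = sum(words >= weekly_goal for _, words in log)
    cumulative, running = [], 0
    for _, words in log:
        running += words
        cumulative.append(running)
    return {"total": total, "goal_met": goal_met, "cumulative": cumulative}

summary = progress_summary(sessions)
print(summary["total"], summary["goal_met"])  # 1160 3
```

The cumulative record could then be graphed with any plotting package so that the rising line itself functions as visual performance feedback and objective progress monitoring.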


Research dissemination through public speaking is similar to writing for publication in two ways. First, public speaking is behavior that can be defined and learned. Second, acquiring this behavior is accomplished by following a plan with goals and desired outcomes. The usual public speaking venues where behavior analysts disseminate research are training groups, seminars, college classrooms, and conferences. Each one has respective context variables that should be considered. For example, conference presentations include topical symposia with individual papers, workshops, discussion panels, and poster sessions. It is necessary that conference proposals fulfill the criteria for each presentation format such as allotted time, learning criteria, and required media. Because conference presentations and other types of oral research dissemination are invariably collaborative, the same group contingencies described for writing apply to public speaking.

Friman (2017) suggested several strategies for behavior analysts engaging in public speaking. Preparation tasks are clarifying precisely the purpose and content of your presentation, outlining key points, creating a visual representation with slides and other media, and fitting everything within time parameters. Practice your presentation many times by speaking aloud, first alone, and later in front of one or more persons simulating an audience. Repeated practice and feedback from listeners allows you to refine the presentation to a close-to-finished version. As well, speakers should familiarize themselves with the physical space where a presentation will occur. Some locations, such as a conference room at your work site, will be easy to navigate, but in novel settings it is beneficial to preview room size, where the audience will be situated, and the position you will take when presenting (e.g., behind a podium). Lighting, acoustics, audio-visual technology, and available assistance also should be checked during preparation.
A well-prepared presentation must be followed by effective in-the-moment delivery, and Friman (2017) offered 15 distinct recommendations. “Presentation style” covers verbal and nonverbal behavior that influences what an audience understands, learns, and likes. One guide to fluent communication is knowing when to use and when to avoid technical language. Consider that a group of fellow behavior analysts can handle “concurrent schedules”, “multiple baseline design”, and “stimulus control”, but teachers unfamiliar with ABA will be confused by such terms. Even speaking to the staunchest behavior analysis audiences is aided when language moves “from technical jargon to plain English” (Lindsley, 1991). Attention to nonverbal behavior should emphasize posture, facial expression, attire, and movement, being aware of what is generally pleasing




to listeners (smiling, dressing for the occasion) and inhibiting actions that are distracting (fidgeting, averting eye-contact with the audience). Voice quality, volume, and tempo are integral for effective public speaking. Our perspective is that what you say should resemble a conversation in which you modulate your voice between low and high tones, refrain from shouting, speak at a pace that is neither excessively slow nor rapid, and enunciate clearly to facilitate listener comprehension. In particular, be aware of speech disfluencies categorized as filled pauses, notably (a) nonsense syllables that lack meaning (“eh, um, ah”), (b) tongue clicking, and (c) the words-phrases “like, you know, does that make sense.” Unfortunately, filled pauses during public speaking are ubiquitous in the current culture, and the insidious intrusion into our common language diminishes speaker credibility, disrupts the flow of speech, and makes audiences uncomfortable (Agarwal, 2007; Bell, 2011; Clark & Fox Tree, 2002; Henderson, 2007). You can eliminate filled pauses and other speech disfluencies through awareness training, response detection, and competing behavior shown to be effective in habit reversal research (Luiselli, 2022; Mancuso & Miltenberger, 2016; Ortiz, Deshais, Miltenberger, & Reeve, 2022; Perrin et al., 2021). Listening to and viewing your audio-video recorded practice and actual presentations also helps greatly.

An additional guide to public speaking comes from Heinicke, Juanico, Valentino, and Sellers (2021), who interviewed ten frequently invited presenters at ABA conferences and conventions. All interviewees agreed “that it was important for behavior analysts to be effective public speakers for reasons that centered on themes of public speaking as a general skill set, the power of dissemination and reach, and the history of behavior analysts as disseminators” (p. 206).
Preparation was endorsed as a necessary step that starts with reflection about your message, presentation objectives, and audience expectations. When constructing a presentation, keep visual stimuli simple and clear, maintain a consistent theme (e.g., color, font, background), minimize the words-graphics on slides, and coordinate what the audience sees with what you are saying. Consistent with Friman (2017), the interviewees in Heinicke et al. (2021) stressed practice that gradually approximates the final presentation. Ideally, limit notes and scripts when presenting in order to cultivate spontaneity and not alienate listeners, who generally disdain hearing content that is read to them. Pleasing speakers use a conversational tone, vary intonation, volume, and pitch, are authentic, and control distracting behavior. The interviewees were aware of eliminating speech disfluencies as well as simplifying


a presentation, speaking a common language, and not “talking down” to audiences. Seeking performance feedback from colleagues and mentors was another recommended strategy for improving public speaking skills.

Whether communicating orally or in writing, there are guidelines for ethically disseminating research (American Psychological Association, 2020; Behavior Analyst Certification Board, 2020; Oda, Luiselli, & Reed, 2022). Behavior analysts must not make public statements that are “false, deceptive, misleading, exaggerated, or fraudulent, either because of what they state, convey, or suggest or because of what they omit concerning their research, practice, or other work activities or those of persons or organizations with which they are affiliated” (Behavior Analyst Certification Board, 2020, p. 16). Second, authorship on presentations and publications should consist of persons who made a “substantial contribution” to the research (American Psychological Association, 2020), which typically applies to formulating objectives and experimental design, analyzing data, interpreting findings, organizing-supervising implementation, and completing writing assignments (Smith & Master, 2017). Authorship credit is best decided at the outset of a research project or nonempirical publication (e.g., systematic review), including the order and approval of contributors.

Oda et al. (2022) explained some other areas of ethical misconduct when disseminating research. Inaccurate reporting refers to presenting misleading findings, falsifying data, and omitting information that bears on interpretation of research results. Plagiarism, either deliberate or unintentional, is failing to cite the words, ideas, and work of other persons, or improperly taking credit for original material that belongs to or has been presented by someone else. Note that self-plagiarism must also be avoided (American Psychological Association, 2020).
A failure to protect the confidentiality of research participants during presentations and in publications also falls under ethical misconduct. Given these concerns, it is not surprising that conference committees reviewing presentation proposals and journals responsible for peer review routinely request that submissions include contributor approval, a declaration of informed consent from research participants, a conflict of interest disclosure, and a data availability statement.

Conducting and disseminating research

Incentives-positive reinforcement

Recall our recommendation to arrange reinforcement contingent on completion of writing tasks and meeting planned objectives. We also discussed how academic incentives for conducting and disseminating research do not apply to human services settings. Some often encountered negative reactions from human services professionals, including behavior analysts, are that doing research is not in their job description, they are not paid to be researchers, and time is not available for research activity. Previous sections of the chapter countered these arguments by clarifying the many positive outcomes of research and methods to merge research and practice, enlist leadership support, and teach sustainable research skills to practitioners.

Incentives and positive reinforcement are used as performance improvement interventions within human services settings (Luiselli, 2021) and can be applied to research as well. For most practitioners, delivering a presentation, seeing their name appear in conference proceedings, and being a publication author could be labeled "generalized research reinforcers." However, other sources of reinforcement should be considered and identified through preference assessment extended to service practitioners on a larger scale (Wine, 2017). The central question posed in such assessment is "What would motivate you to participate in research?" Assessment can be completed during face-to-face interviews with practitioners and through distribution of written surveys and questionnaires. One component of assessment is listing several preselected incentives that are rated numerically (e.g., 1 = not at all an incentive ... 4 = very much an incentive) and rank ordered from highest to lowest preference. Qualitative assessment would have respondents nominate incentives that were not presented during interviews and in written format.

To our knowledge, preference assessment specific to research participation and productivity within human services settings has not been conducted, notwithstanding several possible motivators. Monetary "rewards" for accepted presentations and publications can be given based on authorship, for example, $200 to the lead author and $100 to each listed contributor thereafter.
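The rating and rank-ordering step described above can be sketched in a few lines of code. This is a minimal illustration, not a published assessment protocol; the incentive names and ratings below are hypothetical.

```python
# Hypothetical incentive survey: each respondent rates preselected
# incentives from 1 (not at all an incentive) to 4 (very much an incentive).
from statistics import mean

responses = [
    {"conference travel": 4, "monetary reward": 3, "public recognition": 2},
    {"conference travel": 3, "monetary reward": 4, "public recognition": 2},
    {"conference travel": 4, "monetary reward": 2, "public recognition": 1},
]

def rank_incentives(responses):
    """Average each incentive's ratings, then rank highest to lowest."""
    items = responses[0].keys()
    averages = {item: mean(r[item] for r in responses) for item in items}
    return sorted(averages.items(), key=lambda kv: kv[1], reverse=True)

for item, avg in rank_incentives(responses):
    print(f"{item}: {avg:.2f}")
```

The same tabulation works for surveys and interviews alike; respondent-nominated incentives from the qualitative component would simply be added as new keys before averaging.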
Public recognition also functions as reinforcement in several ways. First, presentations and publications can be recognized on an organization's website and announced on social media. Second, other website alternatives are links to brief descriptions such as "Research Highlights" and presentation-publication lists of downloadable PDF documents. Third, some human services settings recognize research contributions of employees with public posting in the form of visual displays. Examples are display boards of conference poster presentations and synopses of journal publications with author highlights.

Like other elements involved in building and supporting a program of applied research, behavior analysts must confirm that their setting has the necessary financial and personnel resources to manage incentives and positive reinforcement. The contingencies specific to research have to be defined explicitly so that all employees understand operations without complaints of inequality and favoritism. For example, administrators may decide that some or all of the expenses for attending a conference are reimbursed to employees scheduled to make a research presentation but not to nonpresenting employees seeking attendance. Finally, tracking presentations and publications and assessing social validity among participants are measures to evaluate the effectiveness of incentives and positive reinforcement within a research program. A simple bar graph representing the number of publications and presentations each year is easily interpretable. Social validity indicators would be the value respondents place on research "rewards" and the perceived success of such motivators.
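As a minimal sketch of the tracking step, yearly totals for the bar graph can be tallied from a simple dissemination log. The record format and entries here are hypothetical, assumed only for illustration.

```python
# Hypothetical dissemination log: (year, type) records tallied per year,
# supporting the simple bar graph of presentations and publications.
from collections import Counter

log = [
    (2020, "presentation"), (2020, "publication"),
    (2021, "presentation"), (2021, "presentation"), (2021, "publication"),
    (2022, "publication"),
]

def yearly_totals(log):
    """Count presentations plus publications per year, in year order."""
    counts = Counter(year for year, _ in log)
    return dict(sorted(counts.items()))

for year, n in yearly_totals(log).items():
    print(f"{year}: {'#' * n}  ({n})")   # crude text 'bar graph'
```

A spreadsheet column chart serves the same purpose; the point is that the underlying data need only be a running list of dated dissemination events.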

Summary and conclusions

Behavior analysts are interested in and have the capability of conducting and disseminating research that contributes to practice standards, the welfare of service-recipients, and professional development (DiGennaro Reed et al., 2021; Kelley et al., 2015; Valentino & Juanico, 2020). This chapter presented strategies, summarized below, for advancing research at applied settings in which behavior analysts function as program advocates and architects. We recommend that you adopt these strategies, evaluate efficacy, and strive to fulfill the role of behavior analyst as researcher.
• Leadership direction and support are necessary to establish a viable and productive program of applied research in human services settings.
• Appealing to leadership means articulating the benefits of research for service delivery, training, employee retention, forging academic affiliations, acquiring funding, and enhancing reputation.
• Select research topics that address primary service obligations, can be integrated with practice, are approved by administrators, supervisors, and colleagues, and reflect contemporary foci within behavior analysis and other disciplines.
• Assemble teams composed of employees who are interested in research and can commit to planning, implementing, evaluating, and disseminating projects.
• Take the role of team chairperson devoted to research coordination, oversight, and training.
• Form a research review committee that follows strict protocols for the protection of research participants and approval of projects.




• Ensure that completed research projects are disseminated through presentations and publications by constructing a writing plan and honing public speaking skills.
• Work collaboratively with colleagues to facilitate group effort and cooperation in conducting and disseminating research.
• Consider incentives and positive reinforcement, approved by employees, that motivate participation in research.

References

Agarwal, V. (2007). Calibrating competence for professional excellence. Retrieved from: www.icsi.edu/WebModules/Programmes/35NC/35thSOUVENIR.pdf.
American Psychological Association. (2020). Publication manual of the American Psychological Association (7th ed.). https://doi.org/10.1037/0000165-000.
Behavior Analyst Certification Board. (2020). Ethics code for behavior analysts. https://bacb.com/wp-content/ethics-code-for-behavior-analysts/.
Behavior Analyst Certification Board. (2022). BACB March 2022 newsletter. Retrieved from https://www.bacb.com/wp-content/uploads/2022/01/BACB_March2022_Newsletter-220330-4.pdf.
Belisle, J., Rowsey, K. E., & Dixon, M. R. (2016). The use of in situ behavioral skills training to improve staff implementation of the PEAK relational training system. Journal of Organizational Behavior Management, 36, 71–79. https://doi.org/10.1007/s40614-017-0119-4.
Bell, R. L. (2011). Is your speech filled with um? 3 tips to eliminate filled pauses from your professional presentation. Supervision, 72, 10–13.
Bowe, M., & Sellers, T. P. (2018). Evaluating the performance diagnostic checklist-human services to assess incorrect error-correction procedures by preschool paraprofessionals. Journal of Applied Behavior Analysis, 51, 166–176. https://doi.org/10.1002/jaba.428.
Carr, J. E., & Briggs, A. M. (2010). Strategies for making regular contact with the scholarly literature. Behavior Analysis in Practice, 3(2), 12–18. https://doi.org/10.1007/BF03391760.
Carr, J. E., Wilder, D. A., Majdalany, L., Mathisen, D., & Strain, L. A. (2013). An assessment-based solution to a human-service employee performance problem: An initial evaluation of the performance diagnostic checklist—Human services. Behavior Analysis in Practice, 6, 16–32. https://doi.org/10.1007/bf03391789.
Clark, H. H., & Fox Tree, J. E. (2002). Using uh and um in spontaneous speaking. Cognition, 84, 73–111. https://doi.org/10.1016/S0010-0277(02)00017-3.
Cox, D. J., Brodhead, M. T., & Quigley, S. P. (Eds.). (2022). Research ethics in behavior analysis: From laboratory to clinic and classroom. Elsevier/Academic Press.
Cruz, N. J., Wilder, D. A., Phillabaum, C., Thomas, R., Cusick, M., & Gravina, N. (2019). Further evaluation of the performance diagnostic checklist-safety (PDC-safety). Journal of Organizational Behavior Management, 39, 266–279. https://doi.org/10.1080/01608061.2019.1666777.
Department of Health, Education, and Welfare. (1979). Belmont report: Ethical principles and guidelines for the protection of human subjects of biomedical and behavioral research. Federal Register, 44(76).
DiGennaro Reed, F. D., Hirst, J. M., & Howard, V. J. (2013). Empirically supported staff selection, training, and management strategies. In D. D. Reed, F. D. DiGennaro Reed, & J. K. Luiselli (Eds.), Handbook of crisis intervention and developmental disabilities (pp. 71–85). Springer.


DiGennaro Reed, F. D., Pellegrino, A. J., Blackman, A. L., Erath, T. G., Ruby, S., & Harbison, M. J. (2021). Advancing OBM practice and research within IDD service settings. In J. K. Luiselli, R. M. Gardner, F. L. Bird, & H. Maguire (Eds.), Organizational behavior management approaches for intellectual and developmental disabilities (pp. 291–314). Routledge.
Ditzian, K., Wilder, D. A., King, A., & Tanz, J. (2015). An evaluation of the performance diagnostic checklist—Human services to assess an employee performance problem in a center-based autism treatment facility. Journal of Applied Behavior Analysis, 48, 199–203. https://doi.org/10.1007/s40617-018-0243-y.
Erath, T. G., DiGennaro Reed, F. D., Sundermeyer, H. W., Brand, D., Novak, M. D., Harbison, M. L., et al. (2020). Enhancing the training integrity of human service staff using pyramidal behavioral skills training. Journal of Applied Behavior Analysis, 53, 449–464. https://doi.org/10.1002/jaba.608.
Friman, P. C. (2017). Practice dissemination: Public speaking. In J. K. Luiselli (Ed.), Applied behavior analysis advanced guidebook: A manual for professional practice (pp. 349–365). Elsevier/Academic Press.
Gianotti, J., Kahl, T., Harper, J. M., & Luiselli, J. K. (2021). Behavioral safety assessment and intervention among residential care providers of students with intellectual and developmental disabilities. Journal of Developmental and Physical Disabilities, 33, 789–798. https://doi.org/10.1007/s10882-020-09773-7.
Gil, P. J., & Carter, S. L. (2016). Graphic feedback, performance feedback, and goal setting increased staff compliance with a data collection task at a large residential facility. Journal of Organizational Behavior Management, 36, 56–70. https://doi.org/10.1080/01608061.2016.1152207.
Goings, K., Carr, L., Maguire, H., Harper, J. M., & Luiselli, J. K. (2019). Improving classroom appearance and organization through a supervisory performance improvement intervention. Behavior Analysis in Practice, 12, 430–434. https://doi.org/10.1007/s40617-018-00304-7.
Graff, R. B., & Karsten, A. M. (2012). Evaluation of a self-instruction package for conducting stimulus preference assessments. Journal of Applied Behavior Analysis, 45, 69–82. https://doi.org/10.1901/jaba.2012.45-69.
Heinicke, M. R., Juanico, J. F., Valentino, A. L., & Sellers, T. P. (2021). Improving behavior analysts' public speaking: Recommendations from expert interviews. Behavior Analysis in Practice. https://doi.org/10.1007/s40617-020-00538-4.
Henderson, J. (2007). There's no such thing as public speaking: Making any presentation or speech as persuasive as a one-on-one conversation. Prentice Hall Press.
Johnson, P. E., Perrin, C. J., Salo, A., Deschaine, E., & Johnson, B. (2016). Use of an explicit rule decreases procrastination of university students. Journal of Applied Behavior Analysis, 49, 1–13.
Kazdin, A. E. (2011). Single case research design in clinical and applied settings. New York: Oxford University Press.
Kelley, D. P., Wilder, D. A., Carr, J. E., Rey, C., Green, N., & Lipschultz, J. (2015). Research productivity among practitioners in behavior analysis: Recommendations from the prolific. Behavior Analysis in Practice, 8, 201–206. https://doi.org/10.1007/s40617-015-0064-1.
King, S. (2010). On writing: A memoir of the craft. Scribner.
Lamott, A. (2007). Bird by bird: Some instructions on writing and life. Anchor.
LeBlanc, L. A., Nosik, M. R., & Petursdottir, A. (2018). Establishing consumer protections for research in human service agencies. Behavior Analysis in Practice, 11, 445–455. https://doi.org/10.1007/s40617-018-0206-3.
Lerman, D. C., LeBlanc, L. A., & Valentino, A. L. (2015). Evidence-based application of staff and caregiver training procedures. In H. Roane, J. E. Ringdahl, & T. Falcomata (Eds.), Clinical and organizational applications of applied behavior analysis (pp. 321–351). New York: Elsevier Inc. https://doi.org/10.1016/B978-0-12-420249-8.00014-9.




Lillie, M. A., Harman, M. J., Hurd, M., & Smalley, M. R. (2021). Increasing passive compliance to wearing a facemask in children with autism spectrum disorder. Journal of Applied Behavior Analysis, 54, 582–599. https://doi.org/10.1002/jaba.829.
Lindsley, O. R. (1991). From technical jargon to plain English for application. Journal of Applied Behavior Analysis, 24, 449–458. https://doi.org/10.1901/jaba.1991.24-449.
Lipshultz, J. L., Vladescu, J. C., Reeve, K. F., Reeve, S. A., & Dipsey, C. R. (2015). Using video modeling with voiceover instruction to train staff to conduct stimulus preference assessments. Journal of Developmental and Physical Disabilities, 27, 505–523. https://doi.org/10.1007/s10882-015-9434-4.
Love, J. R., Carr, J. E., LeBlanc, L. A., & Kisamore, A. N. (2013). Training behavioral research methods to staff in an early and intensive behavioral intervention setting: A program description and preliminary evaluation. Education and Treatment of Children, 3(1), 139–160. https://doi.org/10.1353/etc.2013.0003.
Luiselli, J. K. (2017). Practice dissemination: Writing for publication. In J. K. Luiselli (Ed.), Applied behavior analysis advanced guidebook: A manual for professional practice (pp. 325–347). https://doi.org/10.1016/B978-0-12-811122-2.00014-0.
Luiselli, J. K. (2021). Performance management interventions. In J. K. Luiselli, R. M. Gardner, F. L. Bird, & H. Maguire (Eds.), Organizational behavior management approaches for intellectual and developmental disabilities. Routledge.
Luiselli, J. K. (2022). Public speaking disfluencies: A review of habit training and research. Journal of Applied Behavior Analysis. https://doi.org/10.1002/jaba.948.
Luiselli, J. K., DiGennaro Reed, F. D., Christian, W. P., Markowski, A., Rue, H. C., St. Amand, C., & Ryan, C. J. (2009). Effects of an informational brochure, lottery-based financial incentive, and public posting on absenteeism of direct-care human services employees. Behavior Modification, 33, 175–181. https://doi.org/10.1177/0145445508320624.
Luiselli, J. K., Gardner, R. M., Bird, F., Maguire, H., & Harper, J. M. (2022). Organizational behavior management in human services settings: Conducting and disseminating research that improves client outcomes, employee performance, and systems development. Journal of Organizational Behavior Management. https://doi.org/10.1080/01608061.2022.2027319.
Maguire, H., Gardner, R. M., Bird, F., & Luiselli, J. K. (2022a). Training, supervision, and professional development in human services organizations: EnvisionSMART: A Melmark model of administration and operation. Elsevier.
Maguire, H., Harper, J. M., Gardner, R. M., & Luiselli, J. K. (2022b). Behavioral training and performance management of human services organization care providers during the COVID-19 pandemic. Advances in Neurodevelopmental Disorders, 6, 340–348. https://doi.org/10.1007/s41252-021-00234-6.
Mancuso, M., & Miltenberger, R. G. (2016). Using habit reversal to decrease filled pauses in public speaking. Journal of Applied Behavior Analysis, 49, 188–192. https://doi.org/10.1002/jaba.267.
Mattson, J. G. (2017). Continuing education: Accessing the peer-reviewed literature. In J. K. Luiselli (Ed.), Applied behavior analysis advanced guidebook: A manual for professional practice (pp. 309–324). New York: Elsevier/Academic Press.
McDougall, D. (2006). The distributed changing criterion design. Journal of Behavioral Education, 15, 237–247.
Merritt, T. A., DiGennaro Reed, F. D., & Martinez, C. E. (2019). Using the performance diagnostic checklist-human services to identify an indicated intervention to decrease employee tardiness. Journal of Applied Behavior Analysis, 52(4), 1034–1048. https://doi.org/10.1002/jaba.643.
Oda, F. S., Luiselli, J. K., & Reed, D. D. (2022). Ethically communicating research findings. In D. J. Cox, M. T. Brodhead, & S. P. Quigley (Eds.), Research ethics in behavior analysis: From laboratory to clinic and classroom. Elsevier/Academic Press.


Ortiz, S. M., Deshais, M. A., Miltenberger, R. G., & Reeve, K. F. (2022). Decreasing nervous habits during public speaking: A component analysis of awareness training. Journal of Applied Behavior Analysis, 55, 230–248. https://doi.org/10.1002/jaba.882.
Parsons, M. B., & Reid, D. H. (2011). Reading groups: A practical means of enhancing professional knowledge among human service practitioners. Behavior Analysis in Practice, 4(2), 53–60. https://doi.org/10.1007/BF03391784.
Parsons, M. B., Rollyson, J. H., & Reid, D. H. (2013). Teaching practitioners to conduct behavioral skills training: A pyramidal approach for training multiple human service staff. Behavior Analysis in Practice, 6, 4–16. https://doi.org/10.1007/BF03391798.
Perrin, C. J., Hensel, S. A., Lynch, D. L., Gallegos, L. R., Bell, K., & Carpenter, K. (2021). Using brief habit reversal and an independent group contingency to reduce public speaking speech disfluencies. Journal of Applied Behavior Analysis, 54, 1–13. https://doi.org/10.1002/jaba.867.
Pollard, J. S., Quigley, S. P., & Woolf, S. (2021). Organizational ethics. In J. K. Luiselli, R. M. Gardner, F. L. Bird, & H. Maguire (Eds.), Organizational behavior management approaches for intellectual and developmental disabilities. Routledge.
Porritt, M., Burt, A., & Poling, A. (2006). Increasing fiction writers' productivity through an internet-based intervention. Journal of Applied Behavior Analysis, 39(3), 393–397. https://doi.org/10.1901/jaba.2006.134-05.
Reid, D. H. (2017). Competency-based staff training. In J. K. Luiselli (Ed.), Applied behavior analysis advanced guidebook: A manual for professional practice (pp. 21–40). Elsevier/Academic Press.
Shlesinger, A., Bird, F., Duhanyan, K., Harper, J. M., & Luiselli, J. K. (2018). Evaluation of a comprehensive health-wellness intervention on weight and BMI of residential students with neurodevelopmental disorders. Advances in Neurodevelopmental Disorders, 2, 425–432. https://doi.org/10.1007/s41252-018-0081-5.
Silvia, P. J. (2018). How to write a lot: A practical guide to productive academic writing. American Psychological Association.
Skinner, B. F. (1981). How to discover what you have to say: Talk to students. Behavior Analyst, 4, 1–7. https://doi.org/10.1007/BF03391847.
Smith, E., & Master, Z. (2017). Best practice to order authors in multi/interdisciplinary health sciences research publications. Accountability in Research, 24(4), 243–267.
Valentino, A. L., & Juanico, J. F. (2020). Overcoming barriers to applied research: A guide for practitioners. Behavior Analysis in Practice, 13, 894–904. https://doi.org/10.1007/s40617-020-00479-y.
Wine, B. (2017). Incentive-based performance improvement. In J. K. Luiselli (Ed.), Applied behavior analysis advanced guidebook (pp. 117–134). Elsevier.
Zinsser, W. (2012). On writing well: An informal guide to writing nonfiction. Harper Perennial.

Index

Note: Page numbers followed by f indicate figures and t indicate tables.

A

ABA. See Applied behavior analysis (ABA)
ABAI. See Association for Behavior Analysis International (ABAI)
Acceptance and commitment therapy (ACT), 173
Active student responding (ASR), 165–166, 168
Adaptive sports, 417–418
Adverse events and effects, 354, 354t
  BACB Code Principle, 354–355
  behavioral system, reducing, 355–356, 356f
  behavioral treatment, 353
  coercive intervention, 356–357
  consumer protection, 355
  intended behavioral treatment, 354
  observable and measurable, 355
  skin shock, 352
American Psychological Association (APA), 194–195
American Telemedicine Association (ATA), 194–195
APA. See American Psychological Association (APA)
Applied behavior analysis (ABA), 3, 133
  agencies and organizations, 326
  applications, 155–156
  BACB Code, 341
  behavioral analytic training systems, 164–166
  behavioral community psychologists, 179–180
  and behavioral interventions, 342
  behavior change projects, 174–175
  context, 176–177
  contingencies, 177–179
  COVID-19 pandemic, 179–180
  equivalence-based instruction (EBI), 170
  higher education, 159–160
  history of, 155
  innovations, 160–166
  instructional design, 177–179
  instructional systems, 160–166, 179
  instructional technology labs, 164–166
  interteaching, 163–164
  laboratory-based experiences, 174
  learning environments, 179–180
  macrosystems, 176–177, 179
  metacontingencies, 176–177
  participation, engagement and responding, 168–169
  practical skills training, 173–174
  precision teaching, 171
  procrastination and motivation, 167–168
  service delivery model, 321
  syllabi, 177–179
  systems-level contingencies, 169–170
  teaching
    behavior frequencies, 158
    changing behavior, 156–157
    college instructors, 158
    direct instruction (DI), 157
    educators and psychologists, 158–159
    learning, 156
    practices, 157–158
  total performance system, 163
  treatments, 155–156
ASD. See Autism spectrum disorder (ASD)
ASIB. See Automatically reinforced self-injurious behavior (ASIB)
ASR. See Active student responding (ASR); Automatic speech recognition (ASR)
Association for Behavior Analysis International (ABAI), 177–178
ATA. See American Telemedicine Association (ATA)
Autism spectrum disorder (ASD), 36
Automatically reinforced self-injurious behavior (ASIB), 75–76
Automatic speech recognition (ASR), 236


B

BACB. See Behavior Analyst Certification Board (BACB)
BATS. See Behavior Analytic Training System (BATS)
BCaBAs. See Board Certified Assistant Behavior Analysts (BCaBAs)
BCBA. See Board certified behavior analyst (BCBA)
Behavioral assessment process, 411–412
Behavioral consultation, 372–373
  idiographic methods, 374
  and OBM, 374
  plan implementation, 373–374
  problem analysis, 373
  relationship, consultees, 372
Behavioral skills training (BST), 24, 85–86, 125–128, 137, 143–144, 175, 255
  component, 257, 261
  conventional, 349
  literature, 348–349
  training workshop, 351
Behavioral systems analysis (BSA), 342, 364
  ABA values, 343–344
  benefit, 344
  definition, 342
  ethical behavior employee support, 345
  functional assessment, 343
  rewards and incentives
    consumer preference, 346, 347f
    monitoring, 346–347
  steps, 344, 344f
  terms and concepts, 342–343
Behavior analysis in practice (BAP), 134–135
Behavior Analyst Certification Board (BACB), 138
Behavior analysts, 437
  administrative support, 438
  caregiver characteristics and preferences, 298–299
  caregiver preference, 305
  challenges, 306
  challenging behaviors, 297–298
  clients, connectivity and hardware, 299–300
  competence and practice setting, 300–301
  contingencies, 439
  cultural awareness and humility, 311
  development, 437
  FA and FCT procedures, 291–297
  mitigation strategies, 297
  neurodevelopmental disabilities, 297
  practice and research ethics, 446–447
  practice cultural responsiveness, 310–311
  practitioner workforce, 439–440
  research, 437–438
  service delivery models, 301–302
  telehealth-based services, 303–307
Behavior analytic supervision. See also Board certified behavior analyst (BCBA)
  content/ensuring competency
    BDT, 256
    guided notes, 257
    in remote supervision format, 256–257
  relationship
    culture, 254–255
    meeting, 253
    stressors, 253–254
    supervisor and trainee, 252
    supervisory, 254
  remote supervision, 247
  supervision effectiveness
    component, 259
    social validity, 259
  systems, promote practices, 248–259
  technology, 248–251, 251f
Behavior analytic training system (BATS), 164–165
Behavior-based safety (BBS)
  human services, 348
  procedural refinements, 346–347
  training, 347–348
Behavior-change techniques, 407
Behaviorists for Social Responsibility Special Interest Group (BFSR SIG), 177–178
Belmont Report, 443
BFSR SIG. See Behaviorists for Social Responsibility Special Interest Group (BFSR SIG)
BMI. See Body mass index (BMI)
Board Certified Assistant Behavior Analysts (BCaBAs), 133
Board certified behavior analyst (BCBA), 66–67, 133, 164–165, 273
  remote supervision, 247
Body mass index (BMI), 394
BSA. See Behavioral systems analysis (BSA)
BST. See Behavioral skills training (BST)

C

CABAS. See Comprehensive Application of Behavior Analysis to Schooling (CABAS)
Coalition for Technology in Behavioral Science (CTiBS), 194–195
Coercion, 351–352
College teaching. See Applied behavior analysis (ABA)
Comprehensive Application of Behavior Analysis to Schooling (CABAS), 157
Computer-based instruction (CBI), 83–84
Conflict of interest
  BACB Code Standard, 341
  BSA, 342
  ethics, 341
CTiBS. See Coalition for Technology in Behavioral Science (CTiBS)

D

Data-based decision-making, 359, 359f
Data recording
  ABC recording, 222–223
  additional data types, 229–230
  analysis, 217–218
  artificial intelligence, 238–239
  behavior analysts, 238–239
  behavior-environment relations, 238–239
  conflicts of interest, 239
  contextualizing data, 218–220
  different types of aggregates, 232, 233f
  duration, 221
  financial support, 239
  frequency and rate, 220–221
  latency, 221
  machine learning, 238–239
  nonnumeric datum, 230
  numeric event recording, 222
  percentage, 221
  permanent product, 223
  quantitative analyses of behavior, 228
  technology
    behavior, 235–237
    environment, 237–238
  time sampling methods, 223–224
  trials to criterion, 222
  visual analysis and baseline logic, 224, 225–227f
  visual displays, 218–220
DEI. See Diversity, equity, and inclusion (DEI)
DI. See Direct instruction (DI)
Differential reinforcement of alternative behavior (DRA), 35
DiGennaro-Reed and Henley study, 134–135
Direct assessment
  checklists, 44
  event recording, 46, 47f
  occurrence and nonoccurrence data collection, 39–42, 41f
  occurrence and nonoccurrence of components, 42
  outcomes of treatment integrity measures, 39–42, 43t
  performance feedback, 39–42
  potential limitations, 49
  task analysis, 39–42, 40t
  time sampling, 49
  trial/opportunity, 47–48, 48f
Direct instruction (DI), 157
Discrete-trial instruction (DTI), 42
Discrete trial teaching (DTT), 84, 255–256
Diversity, equity, and inclusion (DEI), 322
  to ABA curriculum, 329
  checklist, 330, 331–332t
  components, 326
  curriculum and coursework, 329–330
  education units, 327
  and ethics, 325
  standards for higher education programs, 327, 328t
  topics and training, 329
  training and supervision, 325
  University training and preparation, 327
DRA. See Differential reinforcement of alternative behavior (DRA)
DTT. See Discrete trial teaching (DTT)

E

EBI. See Equivalence based instruction (EBI)
Enhanced written instructions (EWI), 100–101
Equivalence based instruction (EBI), 170
EWI. See Enhanced written instructions (EWI)

F

FA. See Functional analysis (FA)
Facilitated communication (FC), 348–349
Family Educational Rights and Privacy Act of 1974 (FERPA), 123, 248–249
  and HIPAA, 250
FCT. See Functional communication training (FCT)
Feedback loops, 358–361
  collaborative, 361–362
FERPA. See Family Educational Rights and Privacy Act of 1974 (FERPA)
Fitness tracking technology, 405–406
Functional analysis (FA)
  abbreviated, 69–70
  behavioral assessment, 33, 66
  behavior analysis, 76
  behavior modification, 76
  caregiver-child interactions, 64–65
  child's problem behavior, 65–66
  control conditions, 65–66
  environmental conditions, 65–66
  function of behavior, 76
  healthcare provider, 67–68
  high-risk circumstances, 67–68
  inconclusive outcomes, 71
  interview-informed, synthesized contingency analysis (IISCA), 70
  latency-based, 68–69
  methodology, 65–66, 76–77
  multidimensional assessment, 64–65
  problem behavior, 33, 63–64
  socially-mediated reactions, 63–64
  strategies
    assessment of automatically maintained self-injurious behavior, 74
    automatic and social function, 73
    conducting preference assessments, 71–72
    data analysis, 73
    descriptive assessment, 71–72
    high-rate behavior, 74
    indirect assessments, 71–72
    modifications, 72
  treatment services, 33
  trial-based, 69
Functional communication training (FCT), 19–20, 348

G

General learning outcomes (GLOs), 177
Global Positioning Systems (GPS), 235
GLOs. See General learning outcomes (GLOs)
Good behavior game (GBG), 408–409
Google Sheets, 122–123
GPS. See Global Positioning Systems (GPS)
Graduated electronic decelerator (GED), consumers, 352
Graphing
  behavior analysts, 108
  behavior-analytic community, 128–129
  electronic graph, 107
  features, 128–129
  interventions, 128
GraphPad Prism, 123–124

H

Health Insurance Portability and Accountability Act of 1996 (HIPAA), 123, 248–249, 274
Health, sport, and fitness (HSF)
  ABA and ASD, 393
  ethical practice and consultation, 418–419
  healthcare system, 394
  healthy eating, 398–404
  interventions
    ABA methods, 413–414
    antecedent, 415
    components, 413
    consequences, 416
    feedback, 416–417
    mindfulness and acceptance-based behavioral, 414–415
    package, 414
    precision teaching, 414
  sport
    behavioral assessment, 411
    definition, 410
    practitioners, 411
    purposeful/aesthetic, 410
  training programs, 419–420
  weight management, 394–398
Healthy eating
  assessment
    and interventions, 399
    and measurement, 404–405
  fitness
    heart rate, 406–407
    physical activity, 404
    practitioners, 404–405
  fruits and vegetables, 398–399
  information and data, 399–400
  interventions
    consumption, 401, 403
    preparation, 402–403
    selection, 401
    TLL, 401–402
  nutrition app, 400
  nutrition assessment, 399
  rigid and picky, 403–404
HIPAA. See Health Insurance Portability and Accountability Act of 1996 (HIPAA)
HSF. See Health, sport, and fitness (HSF)
Human service, 84, 102
Hybrid service model
  behavior analytic, 308–309
  cultural and national boundaries, 309
  FA + FCT model, 307–308
  interpreters, 310
  telehealth and in-vivo services, 307

I

IISCA. See Interview-informed, synthesized contingency analysis (IISCA)
Incentives-positive reinforcement
  practitioners, 455
  preference assessment, 455
  recommendation, 454–455
Indirect assessment
  permanent products, 52, 53f
  rating scales, 50, 51f
Intervention-level decision-making, 359
Interview-informed, synthesized contingency analysis (IISCA), 70
Iowa telehealth model
  behavior analyst, 291
  development, 289–290
  FA and FCT procedures, 290
  preservice and clinical service, 291, 292–296t

L

Licensed clinical psychologist with appropriate training, 66
Living arrangements, 4–5, 20, 22–23

M

Measuring treatment integrity. See also Direct assessment; Indirect assessment
  evaluation, 54
  intervention, 54
  treatment-integrity errors, 54–55
Microsoft Excel, 122
Morningside Model of Generative Instruction (MMGI), 157
Multiculturalism
  assessment, 323
  in behavior analysis, 335
  culturally and linguistically diverse learners, 322–323
  DEI, 322
  and diversity, 321
  families and caregiver training, 324
  language, 323
  leadership development, 333
  mentoring, 333–335
  supervisory practices, 330–333
Multiple-behavior FA (MBFA), 73
Multiple stimulus without replacement preference assessment (MSWO), 7–10, 9f

N
Noncontingent reinforcement (NCR), 19–21


O
Organizational behavior management (OBM), 164–165
  and ABA, 369, 371–372
  and BACB, 369–370
  behavior analyst certification, 383–384
  and BSA, 369
  consultation model
    behavioral, 372
    behavior analysis, 370
  human services
    domains, 370
    BBS, 376
    performance diagnostic assessment, 374
    safety, 376
    social validity, 379–380
    training and performance management, 377–378
    turnover and incentives, 382
  literature, 384
  performance incentives, 382–383
  practitioners and researchers, 371
  review, 371
  roles, 384
  technology-assisted and telehealth modalities, 384–385

P
Performance analysis (PA). See Performance diagnostic assessment
Performance diagnostic assessment, 374. See also Performance diagnostic checklist-human services (PDC-HS)
  conducting BSA, 374
  PDC, 375
Performance diagnostic checklist (PDC), 375
Performance diagnostic checklist-human services (PDC-HS)
  procedural refinements, 346
  research, 346
Performance feedback, 57–58
Personalized System of Instruction (PSI), 157, 162–163
PI. See Programmed Instruction (PI)
Point of view (POV), 93–95
Practitioner-directed research, 439
Precision teaching (PT), 157

Preference assessments
  accounting for cultural differences, 17–19
  applications, 23–24
  assets and potential barriers, 10–12, 12f
  attention-maintained problem behavior, 10
  behavior analysts, 24
  conditions, 15
  conduct, 16
  decision-making model, 10–12, 13–14f
  free operant preference assessment (FO), 10, 11f
  graph of, 3–4, 5f
  indices of happiness, 20–21
  interventions, 19–20
  item selection and motivational variables, 12
  nontangible items, 16–17
  paired stimulus, 7, 8f
  positive reinforcement, 3
  preference hierarchy, 3–4
  research-based methods, 25
  single stimulus, 5–7
  social validity, 4–5, 19–23
  strategies, 7
  therapist, 3
  training people to conduct, 24
  transition services, 21–23
  types, 3, 4f, 5
Problem-solving model
  behavior analytic procedures, 270
  caregivers, 269–270
  consultants, 269
  multiple-baseline design, 268–269
  school-based teleconsultation literature, 271
  smartphones and tablet-laptop computers, 270–271
  teleconsultation, 269–270
Programmed Instruction (PI), 160–161
PSI. See Personalized System of Instruction (PSI)
PT. See Precision teaching (PT)

R
Radio Frequency Identification (RFID) tags, 236
Registered behavior technician (RBT), 118, 133

Reinforcer Assessment for Individuals with Severe Disabilities (RAISD), 5, 6f
Remote supervision
  advantages, 247–248
  challenges and requirements, 248
  component, 248–249
  practice, 260
  service delivery, 248
Research ethics
  behavior analysts, 446–447
  RRC, 448
Research review committee (RRC), 438
  functions, 448
  operation, 448
Research teams
  behavior analysts, 445
  and dissemination activity, 446
  at human services settings, 445
  literature, 446
  practical steps, 445
  on writing review, 446
Response interruption and redirection (RIRD), 36
RFID tags. See Radio Frequency Identification (RFID) tags

S
Say All Fast Minute Every Day Shuffled (SAFMEDS), 171–173
SCEDs. See Single-case experimental designs (SCEDs)
Self-injurious behavior (SIB), 67–68
Self-management interventions, 409
Self-Paced, Personalized, Interactive and Networked (SPIN) system, 165
Service delivery models
  legal and professional boundaries, 302–303
  telehealth, 301–302
Single-case experimental designs (SCEDs), 108
  accurate data entry and sourcing, 113–115
  axes and labels, 115–116
  bar graphs, 109–111
  behavior analysts, 108–109
  behavior-analytic researchers, 112
  checklist, 112, 114–115t


  cumulative records, 111
  data representation, 116–117
  essential and quality features, 112, 113f
  figure-caption text, 118–119
  graphing software
    behavior-analytic researchers, 124
    Formative Grapher, 124
    Google Sheets, 122–123
    GraphPad Prism, 123–124
    Microsoft Excel, 122
    Systat SigmaPlot, 123–124
    TherapyScience, 124
  graphing training
    accurate data entry and sourcing, 124–125
    behavioral skills training (BST), 127–128
    formative graphing templates, 126
    task analyses (TAs), 125–126
    video modeling, 126–127
  graph types, 108–109
  legend, 117–118
  line graphs, 109, 110f
  phase-change lines and labels, 118
  quality features
    aspect ratio, 121
    behavior analysts, 119
    chartjunk, 119–120
    data-ink ratio, 119–120
    formatting considerations, 120–121
  side-by-side comparison, 112
  types, 111–112
Social interaction preference assessment (SIPA), 16–17
Social validity data
  ABA services, 350–351
  collaborative, 361–362
  within human services, 351
  practical, 360
  questionnaire, 349t, 351–352
Staff training, 84–86, 89–94, 96, 100–102
Stereotypic Movement Disorder with Self-Injurious Behavior, 74
Stimulus preference assessments (SPAs), 87
Supervision
  amount of, 139–140
  basic requirements, 139
  behavior analysis, 147


Supervision (Continued)
  behavior analytic supervisor, 134
  clinical content, 141–142
  credential options, 138–139
  critical tracking elements, 150
  effective, 133–134
  ethics, 136
  high quality, 135–136
  influence, 133–134
  literature, 134–135
  location of services, 140
  modality, 141
  requirement, 134–135
  research inquiry, 133–134, 146–147
  role and expectations, 137
  sample supervision roadmap, 148
  sample transition checklist, 151
  service delivery, 133–134, 142–146
  structure, 142
  supervisee’s status, 139
  supervisor checklist, 148
  training and requirements, 141
  training/teaching, 137–138
  type, 140–141
  virtual supervision, 136–137
  visual of system, 150
  volume and capacity, 141
Systat SigmaPlot, 123–124

T
TA/TF system. See Teaching Assistant/Teaching Fellow (TA/TF) system
TBH. See Telebehavioral health (TBH)
TBH C-CASS. See Trumpet Behavioral Health’s Clinical Competency Assessment and Support System (TBH C-CASS)
Teaching Assistant/Teaching Fellow (TA/TF) system, 165–166
Teaching fellows (TFs), 165–166
Teaching Science Lab (TSL), 166
Telebehavioral health (TBH)
  barriers, 192
  benefits, 193
  COVID-19 pandemic, 191, 210
  equipment and software, 195
  implementation, 191
  legal and regulatory factors, 196–197

  mental and behavioral health services, 191
  organizations, 194–195
  provider expectations
    clients, consultees and supervisees, 202, 203f
    emergency management plan, 205
    information technology (IT) support staff, 206
    informed consent, 201–202
    preparation, 203–204
    safety/mental health emergency, 205
    translating services, 204–205
    troubleshooting documents, 206, 207f
  recommendations, 210
  services, 200–201
  skills training, 209–210
  system specifications, 196
  telepresence, 198–199
  training, 199
  videoconferencing, 209
Teleconsultation
  BACB Code, 273
  behavior analysts, 273
  benefits, 266
  caregivers and service providers, 266
  communicating with consultees, 278–279
  confidentiality and privacy, 272–273
  consultants, 265
  consultative model, 265
  electronic communication, 274–275
  literature review
    consultant’s interaction, 266–267
    consultative arrangement, 267
    consultees, 268
    evidence-based model, 267–268
    problem-solving consultation, 267–268
  in rapport building, 271–272
    consultant and consultees, 272
    cultural factors, 272
    process, 272
    therapeutic alliance, 271–272
  recommendations
    consultant and consultee, 275–276
    equipment and software, 276
    hardware, 277

    HIPAA-compliant, 277
    technology, 277
    videoconferencing software, 276
  roles and responsibilities, 279
  scheduling and planning sessions, 278
  and telehealth, 265–266
Telehealth. See also Iowa telehealth model
  in behavior analysis, 286
  behavior analysts, 291–303
  COVID-19 pandemic, 288–289
  remote healthcare services, 286
  technology, 285–286
  use, 287–288
Telesupervision, 209
Token economy systems, lottery-based, 407–408
Traffic-light labeling (TLL), interventions, 401–402
Trainee skill recording schedule, 258f
Treatment integrity. See also Measuring treatment integrity
  behavior-analysis services, 59
  dependent variable (DV), 33
  errors of commission, 36
  independent variable (IV), 33
  multidimensional construction, 33–34
  practice, 56–58
  programmed-error trials, 36
  reporting, 55–56
  researchers, 34–35
  skill-acquisition interventions, 35
  treatment-integrity data, 59
  treatment-integrity errors, 34–35, 37–39
Trumpet Behavioral Health’s Clinical Competency Assessment and Support System (TBH C-CASS), 149
TSL. See Teaching Science Lab (TSL)

U
University, 155–156, 162–165, 169, 172, 174, 176–177

V
Video-based instruction (VBI), 83–84
Video-based training (VBT), 350
Videoconferencing, 191–193, 196, 198–201, 205, 209


Video modeling (VM)
  active responding, 101–102
  applications in staff training, 84–86
  benefits, 87–89, 102
  instructional materials, 100–101
  interventions, 83
  number of exemplars, 95–96
  number of video viewings, 97–98
  on-screen text, 98–99
  package of procedures, 84
  point-of-view/perspective, 93–95
  selection of performer, 89–93
  staff training programs, 102
  standalone training/packaged intervention, 86–89
  use of nonexamples, 96–97
  variations, 89–102, 90–92t
  voiceover instruction, 99–100
Visual displays. See Graphing
VM. See Video modeling (VM)
VMVO. See VM with voice over instruction (VMVO)
VM with voice over instruction (VMVO), 84

W
Wearable devices, 236
Weight management
  acceptance-based interventions, 398
  behavioral, 395
  BMI, 394
  optimal weight, 396
  traditional behavioral interventions, 396–397
Western, Educated, Industrialized, Rich, and Democratic (WEIRD), 159–160
Writing, publication and public speaking
  antecedent and consequence, 450
  guide, 453
  journals, 449–450
  learning principles, 450
  literature, 449
  oral presentation, 449
  preparation tasks, 452
  presentation style, 452–453
  research dissemination, 452
  substantial contribution, 454
  voice quality, 453
