ACADEMIC SKILLS PROBLEMS
ACADEMIC SKILLS PROBLEMS
Direct Assessment and Intervention
FIFTH EDITION

Edward S. Shapiro
Nathan H. Clemens

Foreword by Jay Shapiro, with Dan Shapiro and Sally Shapiro

THE GUILFORD PRESS
New York
London
Copyright © 2023 The Guilford Press
A Division of Guilford Publications, Inc.
370 Seventh Avenue, Suite 1200, New York, NY 10001
www.guilford.com

All rights reserved

No part of this book may be reproduced, translated, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, microfilming, recording, or otherwise, without written permission from the publisher.

Printed in the United States of America

This book is printed on acid-free paper.

Last digit is print number: 9 8 7 6 5 4 3 2 1

The authors have checked with sources believed to be reliable in their efforts to provide information that is complete and generally in accord with the standards of practice that are accepted at the time of publication. However, in view of the possibility of human error or changes in behavioral, mental health, or medical sciences, neither the authors, nor the editor and publisher, nor any other party who has been involved in the preparation or publication of this work warrants that the information contained herein is in every respect accurate or complete, and they are not responsible for any errors or omissions or the results obtained from the use of such information. Readers are encouraged to confirm the information contained in this book with other sources.

Library of Congress Cataloging-in-Publication Data

Names: Shapiro, Edward S. (Edward Steven), 1951–2016, author. | Clemens, Nathan H., author.
Title: Academic skills problems : direct assessment and intervention / Edward S. Shapiro, Nathan H. Clemens ; foreword by Jay Shapiro, with Dan Shapiro and Sally Shapiro.
Description: Fifth edition. | New York, NY : The Guilford Press, [2023] | Includes bibliographical references and index.
Identifiers: LCCN 2023011231 | ISBN 9781462551194 (hardcover)
Subjects: LCSH: Remedial teaching. | Basic education. | Educational tests and measurements. | BISAC: PSYCHOLOGY / Assessment, Testing & Measurement | EDUCATION / Special Education / Learning Disabilities
Classification: LCC LB1029.R4 S5 2023 | DDC 372.4/3—dc23/eng/20230331
LC record available at https://lccn.loc.gov/2023011231

Guilford Press is a registered trademark of Guilford Publications, Inc.
About the Authors
Edward S. Shapiro, PhD, until his death in 2016, was Director of the Center for Promoting Research to Practice and Professor in the School Psychology Program at Lehigh University. Best known for his work in curriculum-based assessment and nonstandardized methods of assessing academic skills problems, Dr. Shapiro was the author or coauthor of numerous books and other publications, and presented papers, chaired symposia, and delivered invited addresses at conferences around the world. Dr. Shapiro’s contributions to the field of school psychology have been recognized with the Outstanding Contributions to Training Award from Trainers of School Psychologists, the Distinguished Contribution to School Psychology Award from the Pennsylvania Psychological Association, the Eleanor and Joseph Lipsch Research Award from Lehigh University, and the Senior Scientist Award from the Division of School Psychology of the American Psychological Association, among other honors. Nathan H. Clemens, PhD, is Professor in the Department of Special Education and Dean’s Distinguished Faculty Fellow at The University of Texas at Austin. He is a member of the Board of Directors of the Meadows Center for Preventing Educational Risk. Dr. Clemens’s research is focused on intervention and assessment for students with learning difficulties, with an emphasis on word reading difficulties among students in early elementary grades and reading comprehension difficulties for students in secondary grades. He has published over 60 peer-reviewed journal articles and chapters in edited books and is leading several federally funded research projects investigating interventions for academic skills. Dr. Clemens is a recipient of the Outstanding Dissertation Award and the Lightner Witmer Award for Early Career Scholarship from the Division of School Psychology of the American Psychological Association.
Foreword
Jay Shapiro, with Dan Shapiro and Sally Shapiro
I am not an academic. I am not a scholar in the field of Psychology. Nor am I a student of School Psychology. I am not an expert on any of the aspects of this book save for one detail, the first name listed on the author line: Edward S. Shapiro. Professor Shapiro was my father. He died in 2016, at the age of 64. I wish you could have known him in person, but in many ways, by reading this book, you will get to spend time with a version of him that few of us outside of his academic family got to witness.

To say that he was devoted to his students would be a drastic understatement. Growing up around adult conversations I could hardly understand, I found that the full names of his incoming class or his most promising students grew as familiar to me as the name of the family dog. Those student names would come in waves stretched out a few years apart, from the moment of their acceptance into his academic program to the day of their graduation—a day my father would never miss. Even in his final year of sickness, while his immune system was severely compromised, he was sure to participate in this tradition and perform a small additional ritual. From the confines of his office, he donned his doctoral chevron-lined robe and gifted each of his graduates with a personalized desk nameplate engraved with those three hard-earned letters, "PhD."

As a kid, I never understood the specifics of what my father was up to while he worked away all hours of the day and night in the office, or if I happened to see him being interviewed on local television. I just knew that whatever the words meant, they must have been about helping kids. That's the general part that I did understand, because he was also my baseball coach. He always found his reward in his capacity as a teacher. He loved all kinds of teaching, but there was a specific kind of feedback that really excited him. We had some good players on our baseball teams, and my dad certainly liked helping them improve and advance, but what he really cherished was finding the struggling, less talented players and giving them the confidence to succeed. He would constantly trust the team's weakest players at big moments in the game. The rest of us had no choice but to cheer them on and pull for them to come through. More often than not, they would.
My favorite memories of my dad coaching me and my friends are these kinds of moments—when the smallest kid would poke a ground ball through the infield to bring home the winning run. My dad would be waiting at first base to deliver a serious high five to the team's latest hero.

That same person wrote this book. My dad didn't just pull for the underdog; he obsessed over all the strategies and ways to give the underdog a boost, on the field, in the classroom, and even in our family. When my brother's son was born after a difficult pregnancy and presented early signs of potential learning differences, my dad sprang into action and served as a sounding board and semi-impartial consultant as my brother prepared to navigate the waters of special education. I would be remiss not to mention that my brother's son is doing incredibly well and that my dad would be thrilled to witness this regardless of his minor contribution to the outcome.

I don't mean to romanticize the details of my father too much. My dad's coaching was no laid-back affair. He expected a lot out of us on that baseball field. And from the stories I heard, he could be a very demanding professor, too. I only impart that knowledge to you in case you find any of the work in front of you frustrating or difficult. Know that he would be there if he could, urging you toward excellence in a way that you might resist. There are times when, as his son, I would rebel against his persistence. I'd be tempted to give up the task and walk away. I'd have to tell him I just couldn't do it. I still sometimes feel those pangs in my adult life without him, but I keep a rubber stamp with his name on it on my desk as a little reminder that he has my back and knows I'm capable of much more than I think. He was always sure that there was a little something more in all of us, and we usually proved him right.

In his final months of illness, he only occasionally allowed himself to ponder the legacy of his life and work. When he did, his message to his students was succinct: "Take the work, build on it, and go forth." One of the students who did exactly that is the other author listed for this edition, Nathan Clemens. At the risk of embarrassing Nathan, my father would use the word protégé without irony to describe what he saw in him. My mom remarked that if Dad could have picked anyone to write the next edition of this book, it would have been Nathan. As a family, we were thrilled to learn that this book would be updated by Nathan and that it was in good hands. My father published several books throughout his career, but this one was a source of particular pride. In fact, he requested a copy of the previous edition to be laid to rest with him, and, of course, a baseball.

So, let this foreword serve as a message to Nathan and to all of the readers of this book: The pressure is still on from Edward S. Shapiro to really discover what we are capable of. And he's still rooting for all of us.
Preface
The invitation to revise this text for a fifth edition was an incredible honor, but something I considered with significant trepidation. The book has meant so much to so many people. It changed the way that school psychologists thought about academic difficulties, how they approached assessment, and how they recommended (and implemented) intervention. It changed the way that people thought about school psychology and the roles that psychologists play in supporting students, teachers, schools, and families. My consideration of revisions to something so revered was not taken lightly.

Dr. Edward Shapiro was my doctoral advisor and mentor while I was a graduate student in the School Psychology program at Lehigh University. To say he had a profound effect on my career is an understatement. I first read this book in his course on academic assessment (the second edition, at that time). Early on, Ed taught me principles that guide me to this day: the critical connection between assessment and intervention—one is incomplete, and often of lower quality, without the other; the fact that the most effective forms of assessment and intervention focus explicitly on skills relevant to the academic skill of interest; the critical importance of data and research evidence for decision making; and the idea that the assessment and intervention process is a form of empirical hypothesis testing. Ed taught me to think like a scientist and to be skeptical of claims without evidence. Assessments that require a high level of inference are of limited utility.

No student who took Ed's course in behavior assessment will ever forget his annual lecture on projective tests. The only way he was able to discuss the merits of such measures was to arrive in costume, complete with an ink-blot tie, and go by the name of "Dr. Rorschach" for one evening. But he was able to argue so effectively that many of us, as behaviorally oriented first-year graduate students, began to wonder if he actually believed what he was saying—until he took off the costume. I didn't have a name for it at the time, but this was my first experience with the "ideological Turing test": the ability to present an opponent's position so accurately and fairly that someone unfamiliar would not know on what side of the debate you fell.
Ed was passionate about improving systems and supports for students. Perhaps nothing better exemplified his interest in instructionally driven assessment, and targeted intervention taken to scale, than multi-tiered systems of support (MTSS). It truly is a natural extension of his work. Beginning in 2006, I had the opportunity to work on a federally funded project, in which Ed was a principal investigator, that examined the implementation of MTSS. These were still the early days of such an approach. Under Ed's leadership, we helped schools implement universal screening procedures and trained teachers to make better use of data. Working in partnership with teachers and principals, we helped to identify areas of the daily schedule in which targeted, intensifying intervention could be delivered on a daily basis. We also helped to support a systematic process for collecting progress monitoring data, along with monthly meetings where teachers met and made instructional decisions based on those data. Although the project was not an experiment, several years of implementation revealed that the percentage of students considered to be at risk for reading failure declined over time. For students with the lowest levels of reading skills, average rates of reading skill growth tended to increase the longer the model was implemented, indicating that targeted intervention was successful in improving skill acquisition for students most in need. This project demonstrated how Ed's perspective of direct assessment and intervention applies not only to individual students, but also as a way to improve entire systems. Working with him across this period, I could see that this was some of the work of which he was most proud. Being a part of that project had a profound impact on my career and subsequent work.

I took on the task of revising this text because Ed's work must live on. I belabored a number of decisions about what and how to revise, but updates were needed. In the 30+ years since the first edition of this text appeared, much has changed in our understanding of how academic skills develop, why academic difficulties occur and their common characteristics, the availability of assessment measures, and perspectives on intervention. Ultimately, I came to the realization that the most respectful way to honor Ed, and the influence of his work and this text, was to situate his vision of assessment and intervention within a contemporary context of theory and evidence. The result is a significant revision, but one that I believe will revitalize Ed's model and spark new interest among future generations of school professionals. The core aspects of his perspective are just as relevant today as they were in 1989 when the first edition was published. I hope that you will see that Ed's vision and perspective of direct assessment and intervention, the importance of considering the academic environment, testing only what is relevant and meaningful for intervention, and the non-negotiable links between assessment and evidence-based intervention—all remain firmly in place. I have added significant discussion of theory and evidence on how skills in reading, mathematics, and writing develop, and the ways in which they break down. Central to this discussion are "keystone" models of reading, mathematics, and writing designed to guide assessment and intervention efforts in the context of a developmental sequence.
Too often, school psychologists learn how to administer tests, but are not taught why certain assessments are important, or what they really mean. As a graduate student, I was mentored to seek understanding of the "why" behind assessment and intervention of academic skills, and it forever benefited my career. I hope this new content benefits others in similar ways. Curriculum-based measures and procedures are still prominent across the text, but readers familiar with previous editions will note a more inclusive view of the role that standardized tests of relevant academic skills can play. As understanding of reading,
mathematics, and writing has evolved and become more comprehensive, so too has the availability of tests that directly assess relevant skills and provide information useful for designing intervention. I believe what matters most is whether a test provides clear information on a relevant academic skill that informs intervention design, not whether the test is standardized or curriculum-based. Ed’s model of academic assessment is unique in its emphasis on the academic environment for understanding the skill difficulties of individual students. Now, more than ever, we are reminded of the importance of this perspective. Children enter school with a profound range of individual differences in language, experiences, and opportunities. Many children must interact with multiple languages across home and school environments. Often, the language or dialect spoken by the teachers is not the one with which the student is most familiar. Readers will note that, in this revision, language plays a central role in understanding skill development, proficiency, and difficulties in reading, mathematics, and writing. Children also come to school with different learning experiences and opportunities to build knowledge. In many cases, these differences are due to systemic inequality and the marginalization of children and families in culturally and linguistically diverse communities. Such individual differences in knowledge can result in a mismatch between the knowledge a student brings to the classroom and the kind of background knowledge needed to benefit fully from instruction. I hope that readers will see how Ed’s model is inherently set up to understand an individual student’s strengths relative to their difficulties, understand the interaction between the classroom environment and the knowledge and skills the student brings to the situation, and how a direct approach to assessment and intervention offers the best chances to bolster a student’s strengths, mitigate or eliminate skill difficulties, and ensure that all students can learn. I would like to thank several people who helped make this edition possible. Sally Shapiro, Ed’s wife, reinforced my confidence in taking on the revision, and helped assure me that this is what Ed would have wanted. Three outstanding scholars provided feedback on an earlier draft of the revision: Dr. Milena Keller-Margulis (also one of Ed’s former doctoral students), Dr. David Klingbeil, and Dr. Matthew Burns. Their feedback was exceedingly helpful. Dr. Sarah Powell and Dr. Christian Doabler, leading scholars in mathematics intervention and my colleagues at The University of Texas at Austin, endured my numerous questions about mathematics. Natalie Graham, my editor at The Guilford Press, initially reached out to me about the possibility of the revision and was incredibly supportive and helpful along the way. Senior Production Editor Anna Nelson tirelessly worked through my revisions and provided outstanding editorial support. In addition to Ed, I have had the honor of being mentored by several amazing scholars and scientists, including Dr. Lee Kern while I was a doctoral student at Lehigh, Dr. Deborah Simmons while I was an assistant professor at Texas A&M University, and Dr. Sharon Vaughn through the present at The University of Texas at Austin. 
In previous editions of this text, Ed always acknowledged the people who influenced his work, and they deserve recognition here as well: Stanley Deno, Phyllis Mirkin, Jim Ysseldyke, Lynn Fuchs, Doug Fuchs, Dan Reschly, Sylvia Rosenfield, Mark Shinn, Jeff Grimes, Greg Robinson, Jim Stumme, Randy Allison, Jim Tucker, and Joe Kovaleski. In 2017, in preparing for a talk at a conference held in Ed’s honor at Lehigh University, I reached out to principals and administrators in the school district where Ed led our implementation of the early MTSS model more than a decade earlier. I was interested to see what aspects of the model, if any, were still in place. I spoke with Erica Willis, who has been a principal in the participating schools from the start. A decade is a long time
in education; we are lucky to get something to stick for 3 years, let alone 10, so I was not optimistic that many of the practices would still be in place. But I was curious to know what was still there, long after the researchers had left. Her response was, "Everything." The three tiers, the process, the intervention groups, the progress monitoring, the data teaming, the grade-level meetings, even some of the forms we had created to support decision making and data analysis (discussed in this text) were still in use. Not only that, the model was better; the school took it and made it their own. It had become part of the fabric and everyday language of the school. The model has now been implemented districtwide across all 13 elementary schools, and in some schools has been expanded to include tiered support for behavior. Principal Willis indicated that what mattered most in sustaining the practice was what Ed had fostered: a university partnership, a culture of collaboration and consistent dialogue, acknowledging shared expertise, a thoughtful and controlled rollout, and partnerships with school and district administrators who believed in the model and actively supported its implementation and expansion.

Our work can make a difference. Ed was a role model in showing us how to do that. He showed us how to build collaborations with schools and to take care of those critical relationships. He was true to his convictions, he spoke his mind, and he was never afraid to question the status quo. We have been, and will always be, inspired by his passion for doing what's right: for making decisions that are based on data, for evidence-based practice, for encouraging us to think differently about assessment and intervention, and, most importantly, for always seeking new ways to improve the lives of children and youth. He left this world a far better place than he found it. I am honored to have been Ed's advisee and deeply grateful for the opportunity to offer this new edition. I hope this text will have at least a fraction of the impact on new school professionals that its earlier editions had on me. I end here with a quote from Ed, one that appeared in the last paragraph of the preface to the fourth edition. It is perfectly fitting of his passion and reflects something shared by so many of us who have been shaped by his work. "My final thought is to express thanks that I have been able to offer a piece of myself to improve the lives of children who struggle in school. They remain my passion, my reason for the work I do, and the reason why all of us are so dedicated to this field" (Shapiro, 2011, p. xii).

Nathan H. Clemens
Contents
1. Introduction  1
   The Focus of This Text 5
   Background, History, and Rationale for Academic Assessment and Intervention 5
   Assessment and Decision Making for Academic Problems 7
   Types of Individual Assessment Methods 8
   Direct Assessment of Academic Skills: Curriculum-Based Assessment 18
   Intervention Methods for Academic Skills 24

2. Choosing Targets for Academic Assessment and Intervention  33
   Assumptions Underlying the Selection of Skills and Behaviors for Assessment 34
   Sources of Influence for the Model 37
   Identifying Targets for Assessment and Intervention in the Academic Environment 41
   Identifying Academic Skills Targets for Assessment and Intervention 48
   Skills and Processes That Are Rarely Good Targets for Academic Assessment and Intervention 87
   Summary and Conclusions: Identifying Targets for Assessment and Intervention 87

3. Step 1: Assessing the Academic Environment  90
   Assessing the Academic Environment: Overview 92
   Teacher Interviews 92
   Direct Observation 109
   Student Interview 125
   Permanent Product Review 126
   Hypothesis Formation and Refinement 130

4. Step 2: Direct Assessment of Academic Skills and Assessing Instructional Placement  139
   Major Objectives in Directly Assessing Academic Skills 141
   Direct Assessment of Reading Skills 141
   Direct Assessment of Mathematics Skills 179
   Direct Assessment of Writing Skills 192
   Considering Skill versus Performance Deficits ("Can't Do" vs. "Won't Do") in Academic Skills 202
   Summarizing the Data Collection Process and Revisiting the Hypothesis 203

5. Step 3: Instructional Modification I: General Strategies and Enhancing Instruction  220
   Background 220
   General and Simple Strategies for Academic Problems: Less Intensive Approaches That Do Not Require Changing Instruction 223
   Moderately Intensive Strategies and Approaches: Enhancements to Instruction to Make It More Effective 235
   Summary and Conclusions 247

6. Step 3: Instructional Modification II: Specific Skills and More Intensive Interventions  249
   Reading Interventions 250
   I. Interventions for Basic (Beginning) Reading: Integrating Phonemic Awareness, Alphabetic Knowledge, Decoding, and Spelling 252
   II. Intervention and Strategies for Advanced Word Reading Skills 264
   Practice Strategies for Improving Word and Text Reading Efficiency 269
   III. Interventions for Reading Comprehension and Content-Area Reading 282
   Interventions for Mathematics 294
   Interventions for Writing Skills 317
   Summary and Conclusions: Specific and Intensive Interventions 325

7. Step 4: Progress Monitoring  328
   Curriculum-Based Measurement 328
   Characteristics of CBM and Other Forms of Progress Monitoring 330
   Step 1: Selecting a Progress Monitoring Measure 333
   Step 2: Setting a Progress Monitoring Goal 366
   Step 3: Determining How Often to Monitor Progress 375
   Step 4: Graphing Data 375
   Step 5: Making Data-Based Instructional Decisions 377
   Summary and Conclusions: Ongoing Progress Monitoring and Data-Based Decision Making 389

8. Academic Assessment within Response-to-Intervention and Multi-Tiered Systems of Support Frameworks  390
   Assessment for Universal Screening 394
   Progress Monitoring within MTSS 409
   Comprehensive MTSS Programs 418
   Summary and Conclusions 418

9. Case Illustrations  420
   Case Examples for Academic Assessment 420
   Case Examples of Intervention and Progress Monitoring 443
   Case Examples of the Four-Step Model of Direct Academic Assessment 453
   Case Example of Direct Assessment within an MTSS Model 469
   Summary and Conclusions 473

References  475

Index  537
CHAPTER 1
Introduction
Brian, a second-grade student at Salter Elementary School, was referred to the school psychologist for evaluation. The request for evaluation from the multidisciplinary team noted that he was easily distracted and was having difficulty in most academic subjects. Background information reported on the referral request indicated that he was retained in kindergarten and was on the list this year for possible retention. As a result of his difficulties sitting still during class, his desk has been removed from the area near his peers and placed adjacent to the teacher's desk. Brian currently receives supplemental (i.e., Tier 2) intervention in mathematics.

Susan was in the fifth grade at Carnell Elementary School. She had been in a self-contained classroom for students with learning disabilities since second grade and was currently doing very well. Her teacher referred her to determine her current academic status and potential for increased inclusion.

Jorgé was in the third grade at Moore Elementary School. He was referred by his teacher because he was struggling in reading and had been receiving support through the school's emergent bilingual program for the past year since arriving from Puerto Rico. Jorgé's teacher was concerned that he was not achieving the expected level of progress compared to other students with similar backgrounds.

Dell was a third-grade student at Post Oak Elementary. He was referred for an evaluation for special education eligibility by his teacher because of persistent reading difficulties and her observation that "he doesn't understand anything he reads."

All of these cases are samples of the many types of referrals for academic problems faced by school personnel. How should the team proceed to conduct the evaluations? The answer to this question lies in how the problems are conceptualized. Years ago, a school psychologist would administer an individual intelligence test (usually, the Wechsler Intelligence Scale for Children—Fourth Edition [WISC-IV; Wechsler, 2003]), an individual achievement test (such as the Peabody Individual Achievement Test—Revised/Normative Update [PIAT-R/NU; Markwardt, 1997], or the Wechsler Individual Achievement
Test—Second Edition [WIAT-II; Wechsler, 2001]), and a test of visual–motor integration (usually, the Bender–Gestalt). In the past, and still today in some situations, the psychologist might add some measure of personality, such as projective drawings. Currently, it is common to see school psychologists administer a large battery of subtests that measure cognitive processes assumed to underlie academic skills, and based on the magnitude of the differences in the student's scores across subtests (i.e., their "pattern of strengths and weaknesses"), determine eligibility for special education. Other professionals, such as educational consultants or educational diagnosticians, might assess the child's specific academic skills by administering norm-referenced achievement tests, such as the Woodcock–Johnson Psycho-Educational Battery—Revised (Woodcock et al., 2001), the KeyMath—3 Diagnostic Assessment (KeyMath-3; Connolley, 2007), or other instruments. Based on these test results, a determination of eligibility (in the cases of Brian, Jorgé, and Dell) or evaluation of academic performance (in the case of Susan) would be made.

When Brian was evaluated in this traditional way, the results revealed that his scores on tests of academic achievement were not discrepant enough from his scores on tests of intellectual and cognitive functioning, and therefore he was not eligible for special education. Not surprisingly, Brian's teacher requested that the multidisciplinary team make some recommendations for remediating his skills. From this type of assessment, the team found it very difficult to make specific recommendations. The team suggested that because Brian was not eligible for special education, he was probably doing the best he could in his current classroom. They did note that his decoding skills appeared weak and recommended that some consideration be given to switching him out of a phonics-based approach to reading.

When Susan was assessed, the data showed that she was still performing substantially below grade levels in all academic areas. Despite having spent the last 3 years in a self-contained classroom for students with learning disabilities, Susan had made minimal progress when compared to peers of similar age and grade. As a result, the team decided not to increase the amount of time that she was included in general education for academic subjects.

When Jorgé was assessed, the team also administered measures to evaluate his overall language development in addition to a standard psychoeducational test battery. Specifically, the Woodcock–Muñoz Language Survey—Third Edition (WMLS-III; Woodcock et al., 2017) was given to assess his English and Spanish language proficiency in academically related tasks such as listening, speaking, reading, and writing. The data showed that his poor reading skills were a function of less-than-expected development in English, rather than a generally underdeveloped language ability. Jorgé was therefore not considered eligible for special education services other than programs for second language learners.

Dell, the third grader evaluated for a learning disability in reading, was assessed using a large battery of cognitive skills tests and standardized tests of achievement. Despite significant reading difficulties evident in the assessment, he did not demonstrate a specific pattern of strengths and weaknesses in cognitive skills that, under the school district's model of identification, would make him eligible for special education.
Based on the test results, it was determined that he did not have a learning disability and therefore did not qualify for special education services. Dell's school did not offer supplemental interventions for students who were not formally identified with a disability; therefore, Dell remained in general education, where his teacher did the best she could to support him.

In contrast to viewing the referral problems of Brian, Susan, Jorgé, and Dell only from the question of diagnosis or identification, one could also conceptualize their referrals as questions of "which intervention strategies would be likely to improve their
academic skills?" Seen in this way, assessment becomes a problem-solving process and involves a very different set of methodologies. First, to identify intervention strategies, one must have a clear understanding of the child's accuracy and fluency with skills that are foundational to the achievement domain deemed to be a "problem," the student's mastery of skills that have already been taught, the rate at which learning occurs when the child is taught at a level appropriate to their achievement, and a thorough understanding of the instructional environment in which instruction has occurred. To do this, one must be able to select measures of academic achievement that are relevant to (1) the student's areas of difficulty and (2) skills targeted in instruction. The assessment process must be dynamic and evaluate how the child progresses across time when effective instruction and sufficient practice opportunities are provided. Such an assessment requires an understanding of the developmental nature of academic skills, the precursor skills that are foundational to the development of proficiency within and across academic skill domains, and how to identify specific skill deficits that may explain broader academic difficulties. An understanding is also required of the behaviors that facilitate learning, such as task engagement, and the instructional environments that best occasion these behaviors. The evaluator must understand how to identify measures that are sensitive to the impact of instructional interventions and inform subsequent decision making. A clear understanding of the student's academic difficulties and instructional ecology (i.e., the classroom environment and the variables within it that promote or impede learning) is best achieved through methods of direct assessment, direct observation, teacher and student interviews, and examination of student-generated products. When the assessment is conducted from this perspective, the results are more directly linked to developing effective intervention strategies that are customized to the individual needs of the student.

When Brian was assessed in this way, it was found that he was appropriately placed in the curriculum materials in both reading and math. A lack of fluency in basic addition and subtraction facts was identified, which indicated the need for practice strategies to build his automaticity. Specific problems in spelling and written expression were noted, and specific recommendations for instruction in capitalization and punctuation were made. The assessment also revealed that Brian was not receiving enough opportunities to practice reading new word types with feedback from the teacher as they were taught, and not enough time to practice reading text. Thus, his difficulties with decoding were not a reason to stop a phonics-based reading program, but to increase explicit instruction in reading longer and more complex words and provide more opportunities to practice reading with feedback from a teacher. Moreover, the assessment team members suggested that Brian's seat be moved from next to the teacher's desk back among his peers, because the data showed that he really was not as distractible as the teacher had indicated. When these interventions were implemented, Brian showed gains in performance in reading, writing, and mathematics that equaled those of his general education classmates.

The results of Susan's direct assessment were more surprising and in direct contrast to the traditional evaluation.
Although it was found that Susan was appropriately placed in the areas of reading, mathematics, and spelling, an examination of her skills in the curriculum showed that she probably could be successful in the general education classroom and that she would benefit from a reading intervention that her teacher provided to three other students in her classroom with similar reading skill needs. In particular, it was found that she had attained fifth-grade mathematics skills within the curriculum, despite scoring below grade level on the standardized test. When her reading group was adjusted, Susan’s data over a 13-week period showed that she was making the same level of progress as her fifth-grade classmates without disabilities.
Jorgé's assessment was also a surprise. Although his lack of familiarity with English was evident, Jorgé showed that he was successful in learning, comprehending, and reading when the identical material was presented in Spanish. In fact, Jorgé's comprehension of spoken English was comparable to his comprehension of written English, and it was determined that Jorgé was much more successful in learning to read in English once he was able to read the same material in Spanish. In Jorgé's case, monitoring his reading performance in both English and Spanish showed that he was making slow but consistent progress. Although he was still reading English at a level equal to that of first-grade students, goals were set for Jorgé to achieve a level of reading performance similar to middle second grade by the end of the school year. Data collected over that time showed that at the end of the third grade, he was reading at a level similar to students at a beginning second-grade level, having made almost 1 year of academic progress over the past 5 months.

For Dell, the third grader with significant reading difficulties, consider an alternative history in which Dell had attended Rushing Springs Elementary School since kindergarten. This school implemented a schoolwide, multi-tiered system of support (MTSS). In this model, assessment is part of a dynamic, ongoing effort at providing high-quality instruction and intervention intensification. All students in the school are assessed periodically during the school year to identify those who may not be achieving at levels consistent with grade-level expectations. Students who are below grade-level benchmarks are provided with supplemental intervention designed to address their skill needs. The intervention is provided beyond the core instruction offered to all students, and Dell's progress is carefully monitored in response to the interventions. Students who still are struggling despite this added intervention are provided with a more intensive level of intervention. Student eligibility for special education services is determined through a decision-making process called response to intervention (RTI), embedded within the MTSS framework. RTI considers the progress students make in evidence-based interventions as a component of an evaluation for special education eligibility.

For Dell in this alternative history, the multidisciplinary team was alerted to his reading difficulties much earlier than third grade. Data from universal screening measures at the beginning of first grade indicated that he was below expected levels in early reading skills. From September through December of first grade, Dell received an additional 30 minutes of intervention (Tier 2) that specifically targeted skills that were low at the fall assessment. Dell's progress during the intervention was monitored, which revealed that despite this additional level of intervention, he was still not making progress at a rate that would close the achievement gap between him and his same-grade classmates. From January through March, Dell received an even more intensive level of intervention more focused on his specific needs (Tier 3). The ongoing progress monitoring revealed that Dell was still not making enough progress, despite the increased intensity of instruction. As a result, the team decided that Dell should be evaluated to determine if he was eligible for special education services.
The school psychologist conducting the comprehensive evaluation used Dell’s data from (1) universal screening, (2) progress monitoring during the intervention intensification process, (3) other measures of specific academic skills relevant to understanding Dell’s reading difficulties, and (4) evaluation of the classroom environment. All of these data would contribute to decisions regarding Dell’s eligibility for special education services. In Dell’s case, unlike the previous cases, the focus of the evaluation was not on a diagnostic process but on a problem-solving process. Dell presented with an achievement
profile and rate of learning that, if allowed to continue, would have led to significant reading deficits. These difficulties would have negative implications across other academic domains, including mathematics (e.g., word-problem solving), writing, and achievement in science and social studies. The MTSS framework allowed for intervention to be implemented as soon as academic difficulties were observed, the data collected through the process drove the selection of interventions, and the outcomes of the intervention process informed the next steps in the evaluation process. One possibility is that when Dell was found to not have responded to instructional intervention offered through general education and supplemental support as part of the MTSS model, he might then be considered as potentially eligible for the most intensive level of service: that of a student identified as having a specific learning disability.
THE FOCUS OF THIS TEXT

This text provides a direct assessment and intervention methodology for the evaluation and remediation of academic problems with students like Dell. Specifically, the text includes detailed descriptions of conducting a behavioral assessment of academic skills (as developed by Shapiro, 1989, 1990, 2004, 2011; Shapiro & Lentz, 1985, 1986), assessment frameworks that carefully consider relevant skills and alignment of measures to the curriculum of instruction (Hosp et al., 2014; Howell et al., 1993), evidence-based interventions focused on teaching skills critical for academic success, and interpretation and integration of data collected as part of the intervention process for determining the need for intervention intensification.
BACKGROUND, HISTORY, AND RATIONALE FOR ACADEMIC ASSESSMENT AND INTERVENTION

The percentage of children who consistently experience academic problems has been a continuing concern to school personnel. Over the past two decades, the national percentage of students receiving special education services has increased. According to the National Center for Education Statistics (NCES, 2022), in the 2000–2001 school year, 6.3 million children in the United States between 3 and 21 years of age (13% of total school enrollment) were identified as eligible for special education services. By the 2020–2021 school year, the number had increased to 7.2 million (15% of school enrollment). Students identified as having learning disabilities make up the largest category of those receiving special education, approximately 33% of students served in 2020–2021, followed by speech and language impairment at 19% (which also plays a role in academic difficulties and learning disability).

The past 30 years have been marked by considerable debate regarding the methods used to identify students as having a learning disability. In particular, the early 2000s saw significant challenges to the validity and practical utility of discrepancy formulae (i.e., making eligibility decisions based on the discrepancies between attained scores on intelligence and achievement tests; Fletcher et al., 2003; Francis et al., 1996, 2005; Peterson & Shinn, 2002; Sternberg & Grigorenko, 2002; Stuebing et al., 2002). Debate persists regarding the role and relevance of tests of cognitive processing in learning disability identification (e.g., Fletcher & Miciak, 2017; Schneider & Kaufman, 2017).
Among alternative approaches to identification, RTI is legally permitted through the 2004 passage of the Individuals with Disabilities Education Act (IDEA). This method requires identifying the degree to which a student responds to academic interventions that are known to be highly effective. Considered in the context of other assessment data and information, students who do not respond positively to such interventions would be considered as potentially eligible for special education services related to learning disability (Fletcher et al., 2002; Fletcher et al., 2019; Fletcher & Vaughn, 2009; Gresham, 2002). Evidence indicates that, when implemented appropriately, RTI-based decision making can improve the rates at which students who are genuinely in need of special education services are identified (Burns & VanDerHeyden, 2006; O'Connor et al., 2013; VanDerHeyden et al., 2007). However, concerns persist regarding schools' ability to implement the core instruction, intervention, and assessment practices that make RTI decisions possible (Balu et al., 2015; D. Fuchs & L. Fuchs, 2017; Ruffini et al., 2016), as well as inconsistency across school districts and state-level policies in procedures and implementation (Hudson & McKenzie, 2016). Others are not convinced that the RTI approach will ultimately improve the identification of students with learning disabilities (O'Connor et al., 2017; Scruggs & Mastropieri, 2002; Kavale & Spaulding, 2008; Reynolds & Shaywitz, 2009). Nevertheless, directly measuring difficulty learning in situations where most children are successful comes closer to the concept of a learning disability than models that consider only a discrepancy between cognitive ability and achievement at a single point in time (Fletcher et al., 2019).
Most recently, the survey data published by Benson and colleagues (2019) indicated that five of the ten most administered assessment instruments by school psychologists on a monthly basis were measures of academic skills (three of which were curriculum-based measures). These recent data also noted an increase in the use of behavior rating scales and a marked decrease in the use of projective personality tests in contrast to previous survey studies. Despite the historically strong concern about assessing and remediating academic problems, controversy remains regarding the most effective methods for conducting
useful assessments and choosing the most effective intervention strategies. Historically, educational professionals have expressed dissatisfaction with commercially available, norm-referenced tests (e.g., Donovan & Cross, 2002; Heller et al., 1982; Hively & Reynolds, 1975; Wiggins, 1989). Likewise, strategies that attempt to remediate deficient learning processes identified by these measures have historically not been useful in effecting change in academic performance (e.g., Arter & Jenkins, 1979; Good et al., 1993). However, the landscape of academic assessment has changed a great deal in the time since the first edition of this text was published in 1989. A greater understanding of how academic skills like reading, mathematics, and written expression develop (and how they break down) has led to the development of commercially available, norm-referenced tests that more directly measure relevant skills (e.g., the Comprehensive Test of Phonological Processing; Wagner et al., 2013), as well as new subtests added to longstanding broadband measures of academic skills. Curriculum-based measures are now readily available and are routinely used by many school psychologists (Benson et al., 2019). The question is no longer about whether school psychologists should or should not use a certain class of assessments, but rather how to select an ideal set of measures that will (1) provide the best assessment of a student’s academic strengths and difficulties, (2) help identify the reason(s) for their skill difficulties, and (3) point to how to best intervene.
ASSESSMENT AND DECISION MAKING FOR ACADEMIC PROBLEMS

Salvia et al. (2007) defined assessment as "the process of collecting data for the purpose of (1) specifying and verifying problems, and (2) making decisions about students" (p. 5). They identified five types of decisions that can be made from assessment data: referral, screening, classification, instructional planning, and monitoring of pupils' progress. They also added that decisions about the effectiveness of programs (program evaluation) can be made from assessment data. Not all assessment methodologies for evaluating academic behavior equally address each type of decision needed. For example, some norm-referenced instruments may be useful for understanding a student's level of achievement compared to peers of the same grade or age, but may not be valuable for decisions regarding instructional programming. Likewise, criterion-referenced tests that offer intrasubject comparisons may be useful in identifying a student's relative strengths and weaknesses but may not be sensitive enough for monitoring student progress within a curriculum. Methods that use frequent, repeated assessments may be valuable tools for monitoring progress but may not offer sufficient data for identifying the cause of a student's academic problem. Clearly, the use of a particular assessment strategy should be linked to the type of decision one wishes to make, and no specific measure type is likely to be sufficient. A framework that can be used across measures and types of decisions would be extremely valuable.

The various types of decisions described by Salvia et al. (2007) require the collection of different kinds of data. Interesting trends have been observed over the years in the types of assessment data that are commonly collected. In an early study, Goh et al. (1981) reported data suggesting that regardless of the reason for referral, most school psychologists administer an individual intelligence test, a general test of achievement, a test of perceptual–motor performance, and a projective personality measure. A replication of the Goh et al. study 10 years later (Hutton et al., 1992) found that little had changed. Psychologists still spent more than 50% of their time engaged in assessment.
Hutton et al. (1992) reported that the emphasis on intelligence tests noted by Goh et al. (1981) had lessened, whereas the use of achievement tests had increased. Hutton et al. (1992) also found that the use of behavior rating scales and adaptive behavior measures had increased somewhat. Stinnett et al. (1994), as well as Wilson and Reschly (1996), again replicated the basic findings of Goh et al. (1981). In a survey of assessment practice, Shapiro and Heick (2004) observed some shifting of assessment practices in relation to students referred for behavior disorders toward the use of measures such as behavior rating scales and systematic direct observation. Shapiro et al. (2004) also found some self-reported movement of school psychologists toward the use of curriculum-based assessment (CBA) measures when the referral was for academic skills problems. Even so, almost 47% of those surveyed reported not using CBA in their practice. Similarly, Lewis et al. (2008), as part of a national telephone survey of practicing school psychologists, examined the self-reported frequency of functional behavioral assessment (FBA) and CBA in their practice over the past year. Their survey found that between 58 and 74% of respondents reported conducting fewer than 10 FBAs in the year of the survey, and between 28 and 47% reported using CBA.

Trends toward greater use of measures that more directly assess a skill of interest have continued. Benson et al. (2019), in their survey of a nationally representative sample of practicing school psychologists, observed a significant increase compared to previous surveys in practitioners' routine use of curriculum-based measures, behavior observations, behavior rating scales, problem-solving interviews, and functional assessment interviews. Concurrently, increased use of these methods was associated with decreased use of projective tests and tests of psychomotor functioning. However, Benson et al. found that tests of intellectual and cognitive functioning remain common. Thus, across nearly 40 years, assessment practices have changed dramatically since the findings of Goh et al. (1981), moving away from high-inference projective tests toward methods that more closely evaluate students' behavior and achievement as a function of their environment and instruction. Nevertheless, comprehensive tests of cognitive and intellectual functioning remain common in evaluations of learning difficulties, despite evidence that their use is not necessary (and can lead to problems) in identifying learning disabilities (Fletcher & Miciak, 2017).

This chapter provides an overview of the conceptual issues of academic assessment and remediation. The framework on which behavioral assessment and intervention for academic problems are based is described. First, however, the current state of academic assessment and intervention is examined.
TYPES OF INDIVIDUAL ASSESSMENT METHODS

Norm-Referenced Tests

One of the most common methods for evaluating individual academic skills involves administering commercially available, norm-referenced, standardized tests of achievement. For simplicity, we refer to this class of tests as "standardized" tests of achievement.1

1 Throughout the text, we use the term standardized tests to refer to published, commercially available, norm-referenced, standardized tests. We acknowledge that this term is imprecise; standardization simply refers to establishing a set of administration and scoring procedures; therefore, any test can be standardized. Additionally, measures not commercially available can be norm-referenced. Nevertheless, standardized tends to be associated most often with the class of commercially available measures described in this section, and thus we use the term for simplicity.
Examples of commonly used standardized tests of achievement include batteries that assess multiple skill areas, such as the Woodcock–Johnson IV Tests of Achievement (WJ IV; Schrank et al., 2014) or the Wechsler Individual Achievement Test—Third Edition (WIAT-III; Wechsler, 2009), which include subtests that assess reading, mathematics, spelling, and writing, and provide overall scores for each skill domain. Other standardized norm-referenced tests are designed to assess more specific skills, such as the Test of Word Reading Efficiency—Second Edition (TOWRE-2; Torgesen et al., 2012), the KeyMath—3 Diagnostic Assessment (KeyMath-3; Connolley, 2007), and the Test of Written Language—Fourth Edition (TOWL-4; Hammill & Larsen, 2009).

One of the primary purposes of standardized tests is to determine an individual's "relative standing"; in other words, where a student's level of performance stands in relation to other individuals of the same age or grade. Scores on the test are derived by comparing the scores of the child being tested to scores obtained across a large sample of children representing a full range of achievement. Standard scores and percentile ranks are used to describe the relative standing of the target child in relation to other students of the same age or grade in the normative sample (e.g., above average, average, below average, well below average). On a test appropriate for a given age or grade level, the performance of individuals in the normative sample is expected to be "normally distributed." This means that the scores of most individuals will cluster in the middle of the range around the mean (i.e., average), with increasingly smaller numbers of individuals scoring at the upper end (higher achievers) and lower end (lower achievers) of the distribution. This distribution is assumed to approximate the overall population of individuals from which the normative sample was drawn. Thus, normative data give the assessor a reference point for identifying the degree to which the responses of a specific student differ significantly from those of the average same-age/same-grade peer. In other words, a comparison of an individual's achievement to a normative sample indicates the size of the difference between a student's performance on the skill and how most other students perform.

Despite the popular and widespread use of standardized tests for assessing individual academic skills, some issues limit their usefulness. If a test purports to evaluate a student's acquisition of knowledge, then it should assess the knowledge a student would be expected to demonstrate given the content and curriculum used for instruction. If there is little overlap between what the student is expected to know on the test and what the student was taught, a child's low score on the measure may not necessarily reflect a failure to learn. Instead, the child's score may be due to the test's poor relation to the curriculum the child was provided, or to content that is not representative of skills important for the academic domain the test was designed to measure. In the 1980s and early 1990s, Shapiro and several other scholars in school psychology and special education identified problems with the lack of alignment between what students were taught and what they were tested on in commercial, norm-referenced tests of achievement.
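To make the idea of relative standing concrete, the following minimal sketch (an illustration added here, not part of the original text) converts a few hypothetical standard scores (M = 100, SD = 15) into approximate percentile ranks under the normal-distribution assumption just described. The specific scores are invented for the example.

```python
from statistics import NormalDist

# Normative comparison under the normal-curve assumption described above:
# standard scores with mean 100 and standard deviation 15.
norms = NormalDist(mu=100, sigma=15)

# Hypothetical standard scores chosen only for illustration.
for standard_score in (70, 85, 100, 115, 130):
    percentile = norms.cdf(standard_score) * 100
    print(f"Standard score {standard_score:>3} ~ {percentile:4.1f}th percentile")
```

Run as is, the sketch reproduces the familiar anchors (a score of 85 falls near the 16th percentile, 115 near the 84th). Published tests derive percentile ranks from their own norm tables, which may depart slightly from this idealized curve.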
In a replication and extension of the work of Jenkins and Pany (1978), Shapiro and Derr (1987) examined the degree of overlap between five commonly used basal reading curricula and four commercial, standardized achievement tests. At each grade level (first through fifth), the number of words on each subtest that also appeared in the reading curriculum was counted. The resulting score was converted to a standard score (M = 100, SD = 15), percentile, and grade equivalent, using the standardization data provided for each subtest. The results of this analysis are reported in Table 1.1. Across subtests and reading series, there appeared to be little overlap between the words appearing in the series and on the tests.
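To make the tallying procedure concrete, the sketch below illustrates the word-overlap analysis just described. The word lists, norm mean, and norm SD are hypothetical, and the normal-curve conversion is a simplification; Shapiro and Derr worked from the publishers' complete word lists and each test's published norm tables.

```python
from math import erf, sqrt

def overlap_raw_score(test_words, curriculum_words):
    """Count test items that also appear in the curriculum's word list.
    The count is treated as the raw score a student would earn if they
    could read only the words taught in that curriculum."""
    taught = {w.lower() for w in curriculum_words}
    return sum(1 for w in test_words if w.lower() in taught)

def standard_score(raw, norm_mean, norm_sd):
    """Convert a raw score to a standard score (M = 100, SD = 15)."""
    return 100 + 15 * (raw - norm_mean) / norm_sd

def percentile_rank(ss):
    """Percentile rank implied by a standard score, assuming normality."""
    return 100 * 0.5 * (1 + erf((ss - 100) / (15 * sqrt(2))))

# Hypothetical first-grade word lists (illustration only)
test_items = ["run", "play", "stop", "jump", "light", "farm"]
curriculum = ["run", "play", "jump", "dog", "cat", "farm"]

rs = overlap_raw_score(test_items, curriculum)      # 4 of 6 items overlap
ss = standard_score(rs, norm_mean=5, norm_sd=1.5)   # hypothetical norms
print(rs, round(ss), round(percentile_rank(ss)))    # 4, 90, 25
```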
TABLE 1.1. Overlap between Basal Reader Curricula and Tests

                               PIAT                        WRAT-R                       K-TEA                        WRM
                       RS   GE   %tile   SS        RS   GE   %tile   SS        RS   GE   %tile   SS        RS   GE   %tile   SS
Ginn-720
  Grade 1              23   1.8    58   103        40   1M     47    99        14   1.6    37    95        38   1.8    38    96
  Grade 2              28   2.8    50   100        52   2M     39    96        23   2.6    42    97        69   2.5    33    94
  Grade 3              37   4.0    52   101        58   2E     27    91        27   3.2    32    93        83   3.0    24    90
  Grade 4              40   4.4    40    96        58   2E     16    85        27   3.2    16    85        83   3.0    10    81
  Grade 5              40   4.4    25    90        61   3B     12    82        28   3.4     9    80        83   3.0     4    74
Scott, Foresman
  Grade 1              20   1.4    27    91        39   1M     41    97        12   1.5    27    91        33   1.8    30    92
  Grade 2              23   1.8    13    83        44   1E     16    85        17   1.9    18    86        63   2.3    27    91
  Grade 3              23   1.8     7    78        46   2B      4    73        17   1.9     5    70        63   2.3     9    80
  Grade 4              23   1.8     3    72        46   2B      1    67        17   1.9     2    70        63   2.3     2    70
  Grade 5              23   1.8     1    65        46   2B     .7    59        17   1.9     1    66        63   2.3    .4    56
Macmillan-R
  Grade 1              23   1.8    58   103        35   1B     30    92        13   1.6    32    93        42   1.9    44    98
  Grade 2              24   2.0    20    87        41   1M     10    81        18   2.0    19    87        58   2.2    22    89
  Grade 3              24   2.0     9    80        48   2B      5    76        21   2.3    12    82        66   2.4    10    81
  Grade 4              24   2.0     4    74        48   2B      2    70        21   2.3     5    75        67   2.5     3    72
  Grade 5              24   2.0     2    69        50   2M      1    65        21   2.3     2    70        67   2.5     2    69
Keys to Reading
  Grade 1              24   2.0    68   107        41   1M     50   100        15   1.7    42    97        42   1.9    44    98
  Grade 2              28   2.8    50   100        51   2M     37    95        20   2.2    27    91        68   2.5    33    94
  Grade 3              35   3.8    47    99        59   3B     30    92        24   2.7    19    87        84   3.1    26    91
  Grade 4              35   3.8    26    91        59   3B     18    86        24   2.7     9    80        84   3.1    11    82
  Grade 5              35   3.8    14    84        59   3B      8    79        25   2.8     5    76        84   3.1     4    74
Scott, Foresman—Focus
  Grade 1              23   1.8    58   103        35   1B     30    92        13   1.6    32    93        37   1.8    35    94
  Grade 2              25   2.2    28    91        46   2B     21    88        17   1.9    18    86        56   2.1    20    89
  Grade 3              27   2.6    21    88        49   2B      6    77        20   2.2    10    81        68   2.5    11    82
  Grade 4              28   2.8    11    82        54   2M      8    79        22   2.4     6    77        76   2.8     7    78
  Grade 5              28   2.8     6    77        55   2E      4    73        24   2.7     4    64        81   2.9     3    72

Note. The grade-equivalent scores “B, M, E” for the WRAT-R refer to the assignment of the score to the beginning, middle, or end of the grade level. RS, raw score; GE, grade equivalent; SS, standard score (M = 100; SD = 15); PIAT, Peabody Individual Achievement Test; WRAT-R, Wide Range Achievement Test—Revised; K-TEA, Kaufman Test of Educational Achievement; WRM, Woodcock Reading Mastery Test. From Shapiro and Derr (1987, pp. 60–61). Copyright © 1987 PRO-ED, Inc. Reprinted by permission.
Although these results suggest that the overlap between what is taught and what is tested on reading subtests is questionable, the data examined by Shapiro and Derr (1987) and by Jenkins and Pany (1978) were hypothetical. Good and Salvia (1988) and Bell and colleagues (1992) provided evidence with actual students evaluated on standardized norm-referenced achievement measures that the overlap between the basal reading series employed in their studies and the different measures of reading achievement was inconsistent. In the Good and Salvia (1988) study, 65 third- and fourth-grade students who were all being instructed in the same basal reading series (Allyn & Bacon’s Pathfinder Program, 1978) were administered four standardized norm-referenced reading subtests popular at the time. Their analysis showed significant differences in test performance for the same students on different reading tests, differences that were predicted by each test’s content. Using a similar methodology, Bell et al. (1992) examined the content validity of three standardized tests of word reading: the Reading Decoding subtest of the Kaufman Test of Educational Achievement (KTEA), the Reading subtest of the Wide Range Achievement Test—Revised (WRAT-R; Jastak & Wilkinson, 1984), and the Word Identification subtest of the Woodcock Reading Mastery Tests—Revised (WRMT-R; Woodcock, 1987). All students (n = 181) in the first and second grades of two school districts were administered these tests. Both districts used the Macmillan-R reading series (Smith & Arnold, 1986). A word-by-word content analysis (as conducted by Jenkins & Pany, 1978) showed dramatic differences across tests. Perhaps more importantly, significant differences were evident across tests for students within each grade level. For example, students in one district obtained an average standard score of 117.19 (M = 100, SD = 15) on the WRMT-R and a score of 102.44 on the WRAT-R, a difference of a full standard deviation.
Problems of overlap between test and text content are not limited to reading. For example, Shriner and Salvia (1988) examined the curriculum overlap between two elementary mathematics curricula and two commonly used individual norm-referenced standardized tests (KeyMath and Iowa Tests of Basic Skills) across grades 1–3. Hultquist and Metzke (1993) examined the overlap across grades 1–6 between standardized measures of spelling performance (subtests from the KTEA, Woodcock–Johnson—Revised [WJ-R], PIAT-R, Diagnostic Achievement Battery—2, Test of Written Spelling—2) and three basal spelling series, as well as the presence of high-frequency words. In both studies, an assessment of content correspondence and the type of learning required revealed a lack of correspondence at all levels.
Caveats and Cautions to Interpreting the Studies on Standardized Test and Curriculum Overlap The studies described above have appeared in this text since its early editions, published during a time when criticism of standardized norm-referenced tests of achievement was strongest. This critique was most often based on a lack of direct correspondence between the items on a test and whether those specific items appeared in the curriculum used for instruction. There is certainly a rationale for the criticism; one should always try to test what one teaches. The previous work in this area was valuable in helping educators and test developers focus on the most relevant test content. However, theory and evidence that emerged in the 30 years since the first edition of this text have indicated that, in some skill areas such as reading, a lack of extensive and specific overlap between a test and the curriculum is not as problematic as once suggested.
Studies that focused on the overlap between tests of word reading skills and reading curricula (e.g., Bell et al., 1992; Good & Salvia, 1988; Jenkins & Pany, 1978; Shapiro & Derr, 1987) were based on a problematic notion that a test was more valid for assessing reading skills if the specific words in the test also appeared in the reading curriculum the student received. The problem with this perspective is the faulty assumption that reading development is solely the product of instruction, and that one’s ability to read a word depends on having been taught that specific word. This is clearly not the case. No reading curriculum could ever hope to target every single word that students will encounter in print, nor is that even necessary. Share’s self-teaching hypothesis and research on it (Share, 1995, 2008), as well as connectionist models of reading acquisition (Harm & Seidenberg, 1999; Seidenberg, 2005, 2017), illuminated how children develop the ability to read thousands of words without having been exposed to instruction in every single word.
Word reading is about learning a code. As discussed in Chapters 2 and 6, effective reading instruction involves directly teaching how sounds connect to letters and letter combinations, using that information to decode words, and providing opportunities for students to read many types of words and texts with affirmative and corrective feedback. Equipped with those skills and provided extensive opportunities to read, typically developing readers become better readers the more they read. Their strong foundational skills allow them to read words they have never seen in print based on their knowledge of spelling–sound correspondence and pronunciation of letter combinations (and help on some occasions from vocabulary and reading comprehension). They have “cracked the code”; their development of reading proficiency was due more to how they were taught to read words, not the specific words that appeared in a curriculum. Struggling readers and students with learning disabilities commonly experience difficulties with the basic skills that facilitate word reading acquisition, and they often have problems generalizing basic decoding skills to words or similar spelling patterns that were not targeted in instruction. These are hallmarks of reading disability. Therefore, a typically developing reader will perform better on a standardized test of word reading than a student with a reading problem, regardless of the proportion of words that overlap between the test and the curriculum materials they received. To be useful, a test should measure the types of words a student should reasonably be expected to read; a lack of overlap between the specific words on the test and those that appeared in the curriculum of instruction is not a reason to say a test lacks validity or has no utility for understanding a student’s reading difficulties.
The issue applies to other academic areas as well. Spelling is similar to word reading; performance on a spelling test is influenced more by a student’s knowledge of letter–sound correspondence, knowledge of letter combinations, word-specific spellings, and reading and writing experience, and less by having been directly taught how to spell the specific words on the test. Reading comprehension depends on a host of factors related to word reading, language comprehension, and background knowledge, not only on specific comprehension strategies taught in a curriculum or on the types of texts provided.
Therefore, performing well on a test of reading comprehension can be influenced by previous instruction in the subject matter of the passages, but it is certainly not dependent on high test-instruction overlap. Mathematics, vocabulary, and some aspects of early literacy are areas in which overlap between what was taught and what is tested is more relevant. Unlike word reading, mathematics allows much less “self-teaching”; correctly completing a specific mathematics operation often depends on having been taught that operation. In vocabulary, unless context is provided that allows a test-taker to infer word meaning or meaning
can be inferred by understanding the morphemes within the word (i.e., affixes, roots), demonstrating knowledge of vocabulary terms depends on having been exposed to them in language or reading, or on having been directly taught them. In early literacy, associating a letter or letter combination with a sound is the basis of the alphabetic code and must be taught. Therefore, the relevance of test-instruction overlap varies across different academic skills.
Judicious Consideration of Standardized Norm‑Referenced Tests Previous editions of this text conveyed little optimism regarding the utility of standardized norm-referenced tests of achievement in individual student evaluations. But much has changed since the late 1980s in terms of our understanding of academic skills development and the quality and availability of academic achievement tests. Readers familiar with that history will note that the perspective taken in this edition is more open to the possibility that some standardized tests and subtests of relevant academic skills can contribute meaningfully to an academic assessment and intervention plan. The preceding and subsequent discussions are not meant to fully absolve standardized tests of achievement, but no test (standardized or not) should be above critique. Rather, any test should be scrutinized regarding its relevance and usefulness for understanding a student’s academic difficulty and what to do about it. As noted earlier, individual standardized tests are helpful in indicating the relative standing of an individual within a peer group. Provided the test assesses skills relevant to a student’s grade level or stage of development, and the academic domain of interest, understanding the extent to which a student’s current level of performance differs from that of their peers can help establish the need for supplemental intervention support. Across academic domains, current performance is one of the strongest predictors of subsequent achievement unless instruction intercedes. Moreover, one of the defining features of learning disabilities is performance significantly below that of peers (Fletcher et al., 2019). Understanding the extent of a student’s skill deficit informs both the need for and the intensity of an intervention. When I (Clemens) work with a student with reading difficulties, one of the first things I want to see is their performance relative to grade-level peers on the Test of Word Reading Efficiency (Second Edition), given how important accuracy and efficiency in reading isolated words are for reading proficiency. To be clear, many curriculum-based measures have normative information available; therefore, this need is not satisfied exclusively by standardized tests. However, in some cases, standardized tests provide high-quality information on specific skills that is not available with other forms of assessment. Although normative comparisons and relative standing information can be helpful for determining the need for supplemental or intensified intervention supports, standardized achievement tests may have limited use in other types of assessment decisions. An important consideration in assessing academic skills is determining how much progress students have made across time. This determination requires that periodic assessments be conducted. Not only would the costs of frequently administering standardized norm- referenced tests be prohibitive, but these measures were never designed to be repeated at frequent intervals. Doing so would create practice effects that would compromise the integrity of the test. In addition to the problem of bias from the frequent repetition of the tests, the content and stimuli assessed on standardized tests may result in a very poor sensitivity to small changes in student behavior. Typically, these tests contain items that sample a large
array of skills. As instruction or intervention occurs, gains evident on a daily or weekly basis may not appear on the standardized norm-referenced test since these skills may not be reflected on test items. Vocabulary knowledge provides a good example because learning vocabulary terms depends on being exposed to them in language, text, or instruction. Unless an intervention targeted skills in inferring word meaning from surrounding text and the test was set up to allow students to use context (e.g., vocabulary terms situated within a sentence), a test of vocabulary knowledge will not be sensitive to effects of an intervention if none of the vocabulary terms on the test were targeted in instruction. Indeed, studies of interventions designed to improve students’ vocabulary knowledge have consistently demonstrated meaningful effect sizes on researcher-developed vocabulary tests aligned with the words targeted in the intervention, but minimal effects on standardized tests of vocabulary (Marulis & Neuman, 2010). The effects observed on the researcher-developed tests indicate that interventions were effective in improving vocabulary knowledge, but a standardized vocabulary test is not a fair index of intervention effects if students were never exposed to the tested words in the first place. In summary, standardized tests of specific and relevant academic skills can provide information for understanding the extent and severity of a student’s academic difficulties at a given time. This can help establish the need for supplementary intervention support and how intensive those supports should be. Additionally, many contemporary standardized tests have often undergone an extensive development process using advanced methodologies such as item response theory, have a broad normative base, and therefore demonstrate strong psychometric properties. On the other hand, standardized norm-referenced tests are not sensitive to small changes in student functioning, are not designed for frequent administration, and may not test content covered in instruction. These advantages and limitations must be considered in academic evaluations. The judicious use of standardized norm-referenced tests in academic assessment is discussed in more detail in Chapter 4.
Criterion‑Referenced Tests Another method for assessing individual academic skills is to examine a student’s mastery of specific skills. “Mastery” in this context refers to skills learned to the point of very high accuracy and automaticity (i.e., fluency). This procedure requires a comparison of student performance against an absolute standard that reflects proficiency in a skill (i.e., a “criterion” of performance), rather than the normative comparison made to peers of the same age or grade. Many of the statewide, high-stakes tests administered by schools use criterion-referenced scoring procedures, which identify students as scoring in categories such as “below basic,” “basic,” “proficient,” or “advanced” based on their scores relative to predetermined criteria. Scores on criterion-referenced measures are interpreted by determining whether a student’s score meets a criterion that reflects mastery or proficiency of that skill. By looking across the different skills assessed, one can determine the particular components of the content area (e.g., reading, mathematics, writing, learning-related behaviors) that represent strengths or weaknesses in a student’s academic profile. Examples of individual criterion-referenced instruments include the series of inventories developed by Brigance. Each of these measures is designed for a different age group, with the Brigance Inventory of Early Development—III (Brigance, 2013a) containing subtests geared for children from birth through age 7, the Comprehensive Inventory of Basic Skills—III (CIBS-III; Brigance, 2013b) providing inventories for skills development
between PreK and grade 9, and the Transition Skills Inventory (TSI; Brigance, 2013c) that contains inventories for informing plans for students in middle and high school to transition to postsecondary school life. Each measure includes skills in academic areas such as readiness, speech, listening, reading, spelling, writing, mathematics, and study skills. The inventories cover a wide range of subskills, and each inventory is linked to specific behavioral objectives. Another example of an individual criterion- referenced test is the Phonological Awareness Literacy Screening (PALS; Invernizzi et al., 2015) developed at the University of Virginia. The PreK, kindergarten, and grades 1–3 versions were designed to measure young children’s early literacy development, such as phonological awareness, alphabetic knowledge, letter sounds, spelling, word concepts, word reading in isolation, and passage reading. As with most criterion-referenced measures, the PALS is given to identify those students who have not developed these skills to levels that are predictive of future success in learning to read. The PALS measures were originally designed as broad-based screening measures, although they can also be used for diagnostic and progress monitoring decisions (Invernizzi et al., 2015). There are situations in which criterion-referenced tests have utility. For example, criterion-referenced measures may be useful for screening decisions. If the interest is in identifying children who may be at risk for academic failure, the use of a criterion- referenced measure should provide a direct comparison of the skills a student possesses against the range of skills expected of same-age or same-grade peers, skills important for a particular period in development, or skills that are key for an upcoming transition. The type of decision in which criterion-referenced tests may contribute most is the identification of target areas for the development of educational interventions. Given that these measures contain assessments of subskills within a domain, they may be useful in identifying specific strengths and weaknesses in a student’s academic profile. In this way, students who have substantially fewer or weaker skills can be targeted for more in-depth evaluation and possible intervention. The measures do not, however, offer direct assistance in identifying intervention strategies. Instead, by indicating a student’s strengths, they may aid in the development of interventions capitalizing on these subskills to remediate weaker areas of academic functioning. It is up to the evaluator and educators to (1) determine which skills identified by the test as “weaknesses” are the most important targets for additional assessment and intervention, and (2) align relevant, evidence-based interventions to address the most important areas of weakness. Criterion-referenced tests can also be helpful for program evaluation. These types of decisions involve the examination of the progress of a large number of students. As such, any problem of limited curriculum–test overlap or sensitivity to short-term growth of students would be unlikely to affect the outcome. For example, one could use the measure to determine the percentage of students in each grade meeting the preset criteria for different subskills, and the extent to which that percentage changes across a school year. This information may be of use in evaluating the effectiveness of instruction or a curriculum. 
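As a concrete illustration of this kind of program-level summary, here is a minimal sketch that classifies each student against a preset criterion and reports the percentage of students meeting it at two points in the year. The cut score, skill, and student data are hypothetical and are not drawn from any Brigance or PALS instrument.

```python
# Hypothetical screening data: correct letter-sound items (out of 26) per student
fall   = {"Ana": 9,  "Ben": 21, "Cal": 14, "Dee": 24, "Eli": 11}
spring = {"Ana": 18, "Ben": 26, "Cal": 22, "Dee": 26, "Eli": 15}

CRITERION = 20  # hypothetical mastery criterion for letter-sound correspondence

def meets_criterion(score, criterion=CRITERION):
    """Criterion-referenced interpretation: compare a score to an absolute
    standard rather than to a normative sample."""
    return score >= criterion

def percent_meeting(scores, criterion=CRITERION):
    """Percentage of students at or above the criterion (program-evaluation use)."""
    return 100 * sum(meets_criterion(s, criterion) for s in scores.values()) / len(scores)

print(percent_meeting(fall), percent_meeting(spring))   # 40.0 -> 60.0 across the year
# Students still below the criterion in spring (here, Ana and Eli) could be
# flagged for more in-depth assessment and possible intervention.
```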
When statewide assessment data are reported, they are often used in this way to identify districts or schools that are meeting or exceeding expected standards. Criterion-referenced tests are less helpful in other situations, such as determining eligibility for special education. If criterion-referenced measures are involved in such decisions, it is critical that skills expected to be present in students without disabilities be identified. Because these measures do not typically have a normative base, it becomes difficult to make statements about a student’s relative standing to peers. For example, to use a criterion-referenced test in kindergarten screening, it is necessary to know the type and
level of subskills children should possess as they enter kindergarten. If this information were known, the obtained score of a specific student could be compared to the expected score, and a decision regarding their probability of success could be derived. Of course, the empirical verification of this score would be necessary since identifying subskills needed for kindergarten entrance would most likely be obtained through a teacher interview. Although criterion-referenced tests could be involved in classification decisions, they should not be employed in isolation in this way. A challenge to interpreting some criterion-referenced tests is that it is not clear how the criterion representing mastery was derived. Although one method for establishing this criterion may be a normative comparison (e.g., criterion = number of items passed by 80% of same-age/same-grade peers), most criterion-referenced tests establish the acceptable criterion score based on logical, rather than empirical, analysis. Another area in which individual criterion-referenced tests require careful consideration is in monitoring student progress. It would seem that since these measures only make intrasubject comparisons, they would be valuable for monitoring student progress across time. Indeed, some measures like the PALS can be used for monitoring students’ progress toward important milestones and benchmarks in early literacy. However, some criterion-referenced tests are not set up for repeated administration. Most criterion- referenced measures are developed by examining standards or published curricula and pulling a subset of items together to assess a subskill. As such, student gains in a specific curriculum may or may not be related directly to performance on the criterion-referenced test. Our earlier discussion regarding test-curriculum overlap is also relevant here. Another consideration related to monitoring student progress is the limited range of subskills included in a criterion-referenced test. Typically, most criterion-referenced measures contain a limited sample of subskills and a limited number of items assessing any particular subskill. This may require frequently changing the progress monitoring measure as skills are mastered. Nevertheless, as we discuss further in Chapter 7, there are times when monitoring a student’s progress toward mastery of a specific skill (which may be defined as meeting a particular accuracy or fluency criterion) that is directly targeted by instruction can be a valuable component of a progress monitoring plan and may benefit the instructional decision-making processes (VanDerHeyden & Burns, 2018). In summary, like standardized norm-referenced tests, criterion-referenced tests have utility in some situations. Criterion-referenced tests have strong relationships to intrasubject comparison methods and ties to behavioral assessment strategies (Cancelli & Kratochwill, 1981; Elliott & Fuchs, 1997). Furthermore, because the measures offer assessments of subskills within broader areas, they may provide useful mechanisms for the identification of remediation targets in the development of intervention strategies. Criterion-referenced tests may also be useful in a screening process and can help monitor progress in some situations. Conversely, criterion-referenced tests have limited utility for educational classification, the source of their criteria may not be known, and the development of intervention strategies may not be addressed adequately with these measures alone. 
In individual assessment situations, like any type of measure, criterion-referenced tests should be used judiciously and viewed as contributors to an overall assessment; they are not necessary in all situations and should not be used in isolation. Both norm- and criterion-referenced measures provide an indirect evaluation of skills by assessing students on a sample of items taken from expected grade-level performance. This may be helpful when one wishes to gauge a student’s overall achievement or evaluate the extent to which a student can generalize skills learned in instruction to content
not directly taught. In other cases, however, the items selected on a norm- or criterion-referenced test may not have strong relationships to what students were expected to learn. Additionally, because the measures provide samples of behavior, they may not be sensitive to small gains in student performance across time. As such, they cannot directly tell when an intervention method is successful. The use of norm- or criterion-referenced tests on their own does not reveal the influence of the instructional environment on student academic performance. Both norm- and criterion-referenced instruments may tell us certain things about a student’s individual skills, but they do not assess variables that affect academic performance, such as instructional methods for presenting material, feedback mechanisms, classroom structure, competing contingencies, and so forth (Lentz & Shapiro, 1986). Clearly, what is needed in the evaluation of academic skills is a method that more directly assesses student performance within the academic curriculum. This methodology should directly assess both the student’s skills and the academic environment. It should be able to incorporate a variety of assessment methods and address a wide range of educational decisions. This type of methodology is the basis for this text.
DIRECT ASSESSMENT OF ACADEMIC SKILLS: CURRICULUM‑BASED ASSESSMENT Many assessment models have been derived for the direct evaluation of academic skills (e.g., Blankenship, 1985; Deno, 1985; Gickling & Havertape, 1981; Howell & Nolet, 1999; Salvia & Hughes, 1990; Shapiro, 2004; Shapiro & Lentz, 1986). These models are driven by an underlying assumption that one should test what one teaches. As such, the content or skills evaluated by the assessments employed for each model are based on the instructional curriculum and skills critical for academic success. In contrast to the potential problem of limited correspondence between the curriculum and the skills measured by the test in other forms of academic assessment, evaluation methods based on the curriculum offer direct evaluations of student performance on skills that students are expected to acquire. Thus, direct assessment refers to methods that directly assess skills relevant to the student’s academic achievement. The close connection of these methods to the curriculum of instruction is why these models are generally referred to as methods of curriculum-based assessment (CBA). Various direct assessment and CBA models have been conceptualized over the years, and each model provided a somewhat different emphasis to the evaluation process. In addition, various terms have been used by these investigators to describe their respective models.
Gickling’s CBA Model Gickling and colleagues (Gickling & Havertape, 1981; Gickling & Rosenfield, 1995; Gickling & Thompson, 1985; Gravois & Gickling, 2002; Rosenfield & Kuralt, 1990) described a subskill mastery model of CBA that seeks to determine the instructional needs of struggling students based on their performance within the curriculum and content used for instruction. The primary goal is to identify and eliminate a mismatch between the skill level of a target student and the demands of the curriculum in which the student is taught (Gickling & Thompson, 1985). Gickling’s CBA model concentrates on the selection of instructional objectives and content based on assessment. This model tries to control the level of instructional delivery carefully so that student success is maximized. To
accomplish this task, a student’s academic skills are evaluated to identify items that are “known” (i.e., immediate and correct response), “hesitant” (i.e., response was correct, but the student struggled or was unsure), and “unknown” (i.e., incorrect response). The proportion of known to hesitant and unknown items is then used to inform adjustments to the content used for instruction so it is at an “instructional” level suitable for use during teaching, as compared to “independent” (very high accuracy or number of “knowns”) or “frustrational” levels (very low accuracy or high number of “unknowns”; Betts, 1946; Gickling & Thompson, 1985). Shapiro (1992) illustrated how this CBA model can be applied to a student with problems in reading fluency. Burns and colleagues (Burns, 2001, 2007; Burns et al., 2000; MacQuarrie et al., 2002; Szadokierski & Burns, 2008) have published studies offering empirical support for various components of Gickling’s model of CBA.
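The ratio logic of this model can be sketched in a few lines. This is a minimal illustration: the 93–97% "instructional" range shown for reading material is a commonly cited guideline from this literature, and the item responses are hypothetical, so treat the thresholds as assumptions rather than fixed rules.

```python
def classify_material(item_responses, instructional_range=(0.93, 0.97)):
    """Gickling-style check of whether material is at a student's
    independent, instructional, or frustrational level, based on the
    proportion of 'known' items (immediate, correct responses)."""
    known = sum(1 for r in item_responses if r == "known")
    proportion_known = known / len(item_responses)
    low, high = instructional_range
    if proportion_known > high:
        level = "independent"    # too easy; little new learning
    elif proportion_known >= low:
        level = "instructional"  # enough challenge with a high rate of success
    else:
        level = "frustrational"  # too many unknowns; adjust the material
    return proportion_known, level

# Hypothetical word-reading responses from a passage preview
responses = ["known"] * 27 + ["hesitant"] * 2 + ["unknown"] * 1
print(classify_material(responses))   # (0.9, 'frustrational')
```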
Howell’s Curriculum‑Based Evaluation Model Howell and colleagues (Hosp et al., 2014; Howell et al., 2002; Howell & Nolet, 1999) established a subskill-mastery model and problem-solving approach called curriculum-based evaluation (CBE) that is wider in scope and application than the Gickling model. Howell’s CBE model is a process of inquiry that uses a series of if–then precepts explicitly aimed at understanding a student’s problem and what to do about it. It uses strategies such as task analysis, skill probes, direct observation, and other evaluation tools to gather information. Extensive suggestions for intervention programs based on CBE are offered for various subskills such as reading comprehension, decoding, mathematics, written communication, and social skills. In addition, details are provided for changing intervention strategies. CBE is a problem-solving process that involves five steps: (1) fact-finding/problem identification, (2) hypothesizing assumed causes of the problem, (3) confirming or revising hypotheses through problem validation, (4) summative decision-making and intervention identification, and (5) formative decision-making during intervention (Howell et al., 2002; Kelley et al., 2008). The initial step of the process involves identifying the extent of a difference between the student’s current and expected performance, thereby defining the problem the student is experiencing. For example, in mathematics, one would conduct what are called “survey-level assessments.” These assessments examine student performance on various mathematics skill domains that are considered related to the area of math in which the student is having difficulty, and evaluate the size of the discrepancy between the student’s skills and what is expected at that point in time. In addition to the administration of specific mathematics problems, data would be obtained through interviews with teachers and students, observations of the student completing the problems, and an error analysis of the student’s performance. The data from the initial step of the process lead the examiner to develop hypotheses related to the root causes of the student’s problems in math. This step involves pinpointing skill or knowledge gaps and specific areas of weakness that are causing the student’s difficulties in the broader academic domain. For example, a CBE might lead the examiner to hypothesize that, based on assessments collected in Step 1, a student’s reading difficulties are caused in large part by insufficient knowledge of letter–sound correspondence, which is critical for the acquisition of decoding and fluent word recognition skills. This “best guess” serves as the guide in the next part of the evaluation. In the next step of the process, validation of the problem is conducted by developing specific-level assessments that focus on the hypothesized skill problems that were
identified. If insufficient letter–sound knowledge is hypothesized as the root cause of the student’s reading difficulties, the assessment would specifically determine the number of letters and letter combinations the student can correctly associate with their common sounds. Very low accuracy with the task would be viewed as support for the hypothesis. If the hypothesis is not validated, for example, the student demonstrates 95% accuracy in letter–sound correspondence, it suggests that the hypothesized cause was not the root of the problem. The process would cycle back to Step 2 to revise the hypothesis and gather additional assessment data. Once the hypothesis is validated, the data collected are summarized and a plan for remediation is developed. Validation of the root cause of the problem facilitates the identification of relevant evidence-based interventions for targeting the need. To provide baseline data, a statement indicating the student’s present level of achievement and functional performance, and a list of goals and objectives for instruction, are developed. Once the instructional plan is developed and implemented, formative assessment (i.e., progress monitoring) data are collected to determine whether the intervention is having its desired effect. Lack of progress signals the need to back up to Step 4 to reconsider the assessment data and devise a new intervention that will better meet the student’s needs. A key to CBE is a recognition that the remediation of academic skills problems requires a systematic problem-solving process. CBE involves a thorough examination of the subskills required to reach mastery and pinpoints interventions that are designed specifically to remediate those skills. The if–then nature of the process provides an empirical, inquiry-based framework that utilizes assessments relevant to and aligned with the referral concern. This process can result in an intervention aimed specifically at the root of the student’s academic difficulty.
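A minimal sketch of the specific-level check described above follows. The accuracy cutoff and probe size are hypothetical illustrations of the if–then logic, not values prescribed by the CBE literature.

```python
def letter_sound_accuracy(correct, attempted):
    """Accuracy on a specific-level probe of letter-sound correspondence."""
    return correct / attempted

def hypothesis_supported(accuracy, support_below=0.80):
    """If accuracy is very low, the hypothesized root cause (insufficient
    letter-sound knowledge) is supported; if accuracy is high, cycle back
    to Step 2 and revise the hypothesis."""
    return accuracy < support_below

acc = letter_sound_accuracy(correct=19, attempted=40)   # 0.475
print(acc, hypothesis_supported(acc))                   # 0.475 True -> plan intervention
```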
Deno’s Curriculum‑Based Measurement Framework Among models of CBA, the approach with the most substantial research base is that developed by Deno and colleagues at the University of Minnesota (e.g., Deno, 1985). Derived from earlier work on “data-based program modification” (Deno & Mirkin, 1977), Deno’s model, called curriculum-based measurement (CBM), was primarily designed as a progress monitoring framework to guide instructional decisions (i.e., signaling decisions to continue effective programs or make adjustments to ineffective ones), rather than as a system designed to develop intervention strategies. However, as will be discussed later in this text, information useful for understanding a student’s skill difficulties and what kinds of adjustments may be warranted may still be gleaned from the ongoing collection of CBM data. The model involves repeated and frequent administration of a standard set of skills probes that are known to be important indices of student proficiency within the skill domain in which the child is being taught (i.e., reading, mathematics, writing). Research has shown that the model is effective in monitoring student progress even when the measures used for purposes of monitoring are not derived directly from the material used for instruction (L. S. Fuchs & Deno, 1994). The skill measured by the probes (e.g., oral reading fluency) may not necessarily be the skill being directly taught or practiced, but is viewed as an academic “vital sign” that reflects improvement and overall skill acquisition in the academic domain. Although CBM was originally designed for monitoring students’ progress within interventions, subsequent work by Deno and colleagues supported the use of CBM tools and decision-making frameworks for this and other related purposes, such as screening decisions, eligibility decisions, progress monitoring, and program evaluation
(e.g., Deno, L. S. Fuchs, et al., 2001; Deno, Marston, & Mirkin, 1982; Deno, Marston, & Tindal, 1985–1986; Deno, Mirkin, & Chiang, 1982; Filderman et al., 2018; Foegen et al., 2007; L. S. Fuchs, Deno, et al., 1984; L. S. Fuchs & D. Fuchs, 1986a; L. S. Fuchs, D. Fuchs, et al., 1994; Jung et al., 2018; Shinn et al., 1993; Stecker & L. S. Fuchs, 2000; Stecker et al., 2005; Wayman et al., 2007).
General Outcomes versus Specific Subskill Mastery Models L. S. Fuchs and Deno (1991) differentiated between general outcomes measurement and specific subskill-mastery measurement models used for monitoring progress. General outcomes measures are central to Deno’s CBM framework and focus on skills reflective of overall achievement (e.g., the measurement of oral reading rate as an indicator of overall reading achievement). As such, they provide measurement of students’ growth toward year-end “general” outcomes in the overall academic domain. General outcomes measures are presented in a standardized format. Material for assessment is controlled for difficulty by grade levels and may or may not come directly from the curriculum of instruction (L. S. Fuchs & Deno, 1994). Typically, measures are presented as brief, timed samples of performance, using rate as the primary metric. Although outcomes derived from this model may suggest when and if instructional modifications are needed, and the information gained by monitoring a student with a general outcomes approach can inform the need for additional assessment and help identify the kinds of instructional decisions that should be made, the model was designed to signal the need to either continue an effective program or adjust an ineffective one, not to suggest what those instructional modifications should be.
In contrast, a specific subskill-mastery measurement approach (L. S. Fuchs & Deno, 1991) uses measures that include items contained in the instructional program, such as vocabulary terms included in a curriculum. Measures are often administered to evaluate a student’s progress toward meeting a specific mastery criterion, for example, 100% accuracy completing a set of mathematics computation problems targeted in instruction. Specific subskill-mastery measures are generally not considered part of the CBM framework (L. S. Fuchs & Deno, 1991), but still can provide highly useful data for making instructional decisions (VanDerHeyden et al., 2018). The primary objective of the subskill-mastery model is to determine whether students are meeting the short-term instructional objectives of the curriculum, in other words, whether the student is acquiring the skill or content that is being targeted in instruction. Specific subskill-mastery measures may be teacher-made or embedded within the curriculum of instruction, and the metric used to determine student performance can include accuracy, rate, or analysis of error patterns. This type of approach usually requires a shift in measurement with the teaching of each new objective. However, the measures are often highly sensitive to student progress with the skills being targeted, and can help identify areas in which instruction should be adjusted.
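To illustrate the rate-based, repeated-measurement logic of general outcomes monitoring, here is a minimal sketch that converts each timed oral reading sample to words correct per minute and fits a simple least-squares slope to gauge weekly growth. The probe scores, weeks, and goal value are hypothetical.

```python
def words_correct_per_minute(words_read, errors, seconds=60):
    """Score a timed oral reading probe as words correct per minute (WCPM)."""
    return (words_read - errors) * 60 / seconds

def weekly_slope(weeks, scores):
    """Ordinary least-squares slope: average WCPM gained per week."""
    n = len(weeks)
    mean_x = sum(weeks) / n
    mean_y = sum(scores) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(weeks, scores))
    den = sum((x - mean_x) ** 2 for x in weeks)
    return num / den

# Hypothetical weekly probes for one student: (words read, errors)
weeks = [1, 2, 3, 4, 5, 6]
wcpm  = [words_correct_per_minute(w, e) for w, e in
         [(38, 4), (41, 3), (43, 4), (47, 3), (49, 2), (53, 3)]]

slope = weekly_slope(weeks, wcpm)
print(wcpm, round(slope, 1))   # [34, 38, 39, 44, 47, 50], 3.2 WCPM gained per week
# If the slope fell well below the aim line (e.g., a hypothetical goal of 2 WCPM
# per week), that would signal the need to adjust the program; the slope alone
# does not say what to change.
```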
Shapiro’s Model of CBA Although the previously described models each differ in various ways, they all focus on the direct assessment of students’ academic performance to evaluate academic problems. Certainly, the importance of assessing individual academic skills cannot be denied. However, it seems equally important to examine the instructional environment in which the student is being taught. Lentz and Shapiro (Shapiro, 1987a, 1990, 1996a, 1996b, 2004;
Shapiro & Lentz, 1985, 1986) provided a model for academic assessment that incorporated the evaluation of the academic environment and student performance. Calling the model “behavioral assessment of academic skills,” they (Shapiro, 1989, 1996a; Shapiro & Lentz, 1985, 1986) drew on the principles of behavioral assessment employed for assessing social– emotional problems (Mash & Terdal, 1997; Ollendick & Hersen, 1984; Shapiro & Kratochwill, 2000) but applied them to the evaluation of academic problems. Teacher interviews, student interviews, systematic direct observation, and examinations of student-produced academic products played a significant part in the evaluation process. Specific variables examined for the assessment process were selected from the research on effective teaching (e.g., Denham & Lieberman, 1980) and applied behavior analysis (e.g., Sulzer-Azaroff & Mayer, 1986). In addition, the CBM methodology developed by Deno and colleagues was used to evaluate individual student performance on an ongoing basis but was combined with the assessment of the instructional environment in making recommendations for intervention. Indeed, this assessment of the instructional ecology differentiates the model from other models of CBA. In a refinement of this model, Shapiro (1990) described a four-step process for assessing academic skills that integrates several of the existing models of CBA into a systematic methodology for conducting direct academic assessment. As illustrated in Figure 1.1, the process begins with an evaluation of the instructional environment through the use of systematic observation, teacher interviewing, student interviewing, and a review
FIGURE 1.1. Integrated model of CBA. Adapted from Shapiro (1990, p. 334). Copyright © National Association of School Psychologists, Inc. Reprinted by permission of Taylor & Francis Ltd, http://www.tandfonline.com on behalf of National Association of School Psychologists.
of student-produced academic products. The assessment continues by determining the student’s current instructional level in curriculum materials and directly assessing relevant academic skills. Next, instructional modifications, designed to maximize student success, are implemented with ongoing assessment of the acquisition of instructional objectives (subskill and short-term goals). The final step of the model involves monitoring student progress toward general outcomes (year-end) curriculum goals.
Shapiro’s model remains essentially unchanged in this edition of the text. As described in more detail in Chapter 3, readers will find the core aims, stages of assessment, and methodologies the same. Within the model, however, readers will discover some expansions to the assessment and intervention methods conducted within the steps based on research and theory that have emerged in the 30 years since the model was first established.
One of the important considerations in adopting a methodology for conducting academic assessment is the degree to which the proposed change is acceptable to consumers who will use the method. Substantial attention in the literature has been given to the importance of the acceptability of intervention strategies recommended for school- and home-based behavior management (e.g., Clark & Elliott, 1988; Miltenberger, 1990; Reimers, Wacker, Cooper, et al., 1992; Reimers, Wacker, Derby, et al., 1995; Witt & Elliott, 1985). Eckert and Shapiro (Eckert, Hintze, et al., 1999; Eckert & Shapiro, 1999; Eckert, Shapiro, et al., 1995; Shapiro & Eckert, 1994) extended the concept of treatment acceptability to assessment acceptability. Using a measure derived from the Intervention Rating Profile (Witt & Martens, 1983), the studies demonstrated that both teachers and school psychologists found CBA, compared to the use of standardized norm-referenced tests, to be relatively more acceptable in conducting academic skills assessments. In a nationally derived sample, Shapiro and Eckert (1993) also showed that 46% of school psychologists surveyed indicated that they had used some form of CBA in their work. In a replication and extension of the original survey 10 years later, 53% of those psychologists surveyed indicated that they had used CBA in conducting academic assessments (Shapiro, Angello, et al., 2004). However, surveys in both 1990 and 2000 also revealed that school psychologists still had limited knowledge of the actual methods used in conducting a CBA. Although there was a statistically significant (p < .05) increase in the percentage reporting use of CBA in 2000 compared to 1990, a large proportion of psychologists still reported not using CBA. At the same time, over 90% of those surveyed who had graduated in the past 5 years had been trained in the use of CBA through their graduate programs. Chafouleas et al. (2003) similarly found stronger acceptability of CBA methods among 188 members of the National Association of School Psychologists for developing intervention strategies compared to norm-referenced tests or brief experimental analysis (a methodology discussed further in Chapter 4). O’Bryon and Rogers (2010) determined that 53% of school psychologists surveyed reported using CBA as part of their assessment practices with English learners. Filter et al.
(2013) found that practicing school psychologists reported CBA was an activity and assessment model they wished they could use more often, but were unable to do so due to other responsibilities such as intelligence testing and report writing. In their comprehensive survey of 1,317 school psychologists, Benson et al. (2019) observed that CBM measures represented 3 of the top 10 forms of assessment that respondents reported using on a monthly basis. Overall, these studies suggest that CBA and related methods are highly acceptable to practitioners (teachers and school psychologists), that school psychologists usually prefer to use CBA, and that it is increasingly being taught as part of the curriculum in
training school psychologists. As of the most recent survey by Benson et al. (2019), it appears that elements of CBA are commonly used (mainly CBM tools). However, systematic models of CBA that take a comprehensive approach to assessment have yet to assume a prominent role in the assessment methods of practicing psychologists.
Summary of Assessment Approaches Strategies for the assessment of academic skills range from the more indirect norm- and criterion-referenced methods through direct assessment, which is based on evaluating a student’s performance on skills critical for achievement and consistent with the curriculum of instruction. The type of decision to be made must be tied to the assessment strategy employed. Although some norm-referenced standardized assessment methods may be helpful in understanding the severity of a student’s academic difficulties and may contribute to making eligibility decisions, these strategies are lacking in their ability to assist evaluators in recommending appropriate intervention strategies or in their sensitivity to improvements in academic skills over short periods of time. Many alternative assessment strategies, designed to provide closer links between the assessment data and intervention methods, are available. Although these strategies may also be limited to certain types of decision making, their shared characteristic of using the curriculum as the basis for assessment allows these methods to be employed more effectively for several different types of assessment and evaluation decisions. Shapiro’s model of CBA is the basis of this text, and it is described in detail across the remaining chapters.
INTERVENTION METHODS FOR ACADEMIC SKILLS Intervention strategies developed for academic problems can be conceptualized on a continuum from indirect to direct procedures. Intervention techniques that attempt to improve academic performance by targeting underlying cognitive, neuropsychological, motor, or sensory processes can be characterized as indirect interventions because they are considered to affect academic performance by way of improvement in an underlying process. In particular, programs that target working memory, executive functions, or perceptual–motor skills, as well as interventions based on aptitude-by-treatment interactions (ATIs), would be considered indirect methods of intervention. In contrast, direct interventions attempt to improve the area of academic difficulty by directly teaching the academic skills missing from the student’s repertoire. These skills are targeted because assessment data indicate they are reasons for the student’s difficulties in the larger academic domain. These types of interventions usually are based on an examination of variables that are centrally involved in academic performance.
Indirect Interventions for Academic Skills Working Memory Training Computerized working memory training programs are commercially available, and their use in schools has increased significantly. Programs typically consist of games and activities designed to improve working memory and attention. Cogmed (Neural Assembly, 2021) is an example of a computer-based program actively marketed to schools and clinicians as an intervention for students with learning and attention difficulties. Developers
of working memory training programs like Cogmed make two assumptions: (1) They assume that working memory and attention can be improved by playing games that involve working memory (i.e., “near-transfer” effects), and (2) they assume that improvements in working memory and attention will lead to improvements in attention and on-task behavior in the classroom, cognitive abilities, and academic skills such as reading and mathematics (i.e., “far-transfer” effects). The products are promoted through extensive marketing materials, which often include quotes from unnamed educators regarding the dramatic improvements they observed in their students’ attention and academic achievement following use of the program.
Unfortunately, the research evidence on computerized working memory programs is not nearly as encouraging as the marketing materials suggest. High-quality randomized controlled trials and several meta-analyses (i.e., quantitative summaries of multiple research studies) have reached similar conclusions: Working memory training tends to result in short-term near-transfer improvement on working memory tasks aligned with the activities in the training programs, but weak to negligible far-transfer improvements in more general areas of functioning including verbal or nonverbal ability, reading skills, mathematics performance, attention, or behavior regulation (Anderson et al., 2018; Chacko et al., 2014; Gray et al., 2012; Hitchcock & Westwell, 2017; Melby-Lervåg & Hulme, 2013; Melby-Lervåg et al., 2016; Redick et al., 2013; Sala & Gobet, 2017; Shipstead, Hicks, et al., 2012; Shipstead, Redick, et al., 2012; Soveri et al., 2017). To illustrate, a comprehensive meta-analysis of 145 experimental comparisons by Melby-Lervåg et al. (2016) found that any improvements in working memory following training were short-lived, and when they did occur, transfer to improvements in generalized attention, cognitive functioning, or academic skills was not observed. Thus, experimental evidence has not supported a core assumption on which working memory training is based.
An additional problem with assessment and intervention for working memory, which extends to other cognitive processes, is that working memory is embedded and highly integrated within human language, cognition, and behaviors. In fact, you are using working memory right now as you read, and you will use it later when you are talking with someone, writing an email, or listening to the news. It is indispensable for complex, intellectual tasks and activities. This deep integration makes working memory difficult to isolate and measure as a unique, discrete construct. As a result, working memory measures tend to be highly specific, rudimentary indices that reveal very little about a student’s academic skill difficulties, and certainly do not reveal more about a student’s problems than a test of the actual academic skill of interest. Moreover, a working memory test administered to a student with academic difficulties will probably indicate low performance, but this knowledge is of little practical use because improving working memory has not yet been shown to reliably improve academic performance. Instead, assessment is better focused on the skills relevant to reading, mathematics, and writing development and difficulties and will translate to more relevant, practical, and effective intervention recommendations.
Interventions Targeting “Executive Functioning” Although its precise definitions vary, executive functioning (EF) refers to a class of processes that allow individuals to accomplish goal-directed activities (Cirino et al., 2019; Miciak et al., 2019). EF is generally agreed to include inhibitory control, cognitive flexibility, and working memory (Diamond, 2013).
EF has long been hypothesized to play a causal role in academic achievement (Bierman et al., 2008), and numerous correlational studies have observed that EF, even from very young ages, is correlated with academic skills measured at the same time or at some point in the future (e.g., Blankenship et al., 2019; Blair & Razza, 2007; Cirino et al., 2019; Cutting et al., 2009; Jacob & Parkinson, 2015; Morgan et al., 2019). However, as with other processes that indirectly affect academic skills, a correlation between EF and achievement does not necessarily mean that targeting EF will lead to academic improvement. EF and its related subskills have not been found to predict student responsiveness to academic intervention (Miciak et al., 2019), and comprehensive meta-analyses by Jacob and Parkinson (2015), Rapport et al. (2013), and Kassai et al. (2019) found no convincing evidence that interventions targeting EF led to improved academic achievement or attention. A problem with EF training, in most studies, is that it targets basic processes through games, activities, and training routines that are highly distal and abstracted from tasks and situations that students navigate in the classroom. Interventions have targeted self- regulation, inhibitory control, or attention in computer games or in clinical training sessions outside of the classroom, and thus have observed limited effects on generalized classroom behavior or academic skills (Jacob & Parkinson, 2015; Rapport et al., 2013). These effects are consistent with that of most other indirect interventions—the further one separates a task or process from how the skill should be demonstrated in real-life academic situations, the less likely that any improvement in the abstracted process will improve academic skills. There is one aspect of EF that shares a good deal of overlap with behaviors that have been demonstrated to be causally related to students’ overall functioning and achievement in school. Most definitions of EF include inhibitory control, which represents one’s ability to suppress a dominant response when needed. For example, a student suppressing their urge to blurt out an answer to a teacher’s question, and raising their hand instead, is demonstrating inhibitory control. Inhibitory control is part of self-regulation, also considered a component of EF. Self-regulation, and behavioral self-regulation more specifically, refers to a set of acquired skills involved in intentionally activating, directing, sustaining, and controlling one’s attention, emotions, and behaviors (Morrison et al., 2010). Self-regulation skills are critical for accomplishing complex academic tasks, which often require sustained attention, remembering rules and expectations, determining appropriate solutions, reflection and monitoring understanding, task persistence, and detecting errors. As such, strategies to teach and support behavioral self-regulation required of complex tasks have been successfully integrated into interventions in reading (Joseph & Eveleigh, 2011), written expression (Harris & Graham, 2016), and mathematics (L. S. Fuchs et al., 2003). Important aspects of self-regulation have been studied that pertain directly to learning-related skills and students’ engagement within academic tasks. Behavior self-regulation can be viewed as a point at which behavior and learning intersect. 
As will be discussed in Chapters 5 and 6, interventions aimed at improving students’ task engagement have been shown to improve academic skills. The crucial difference is that, unlike many applications of general EF training, the student’s self-regulation behavior is targeted within the specific academic situations in which it is most relevant and needed. In summary, interventions that target executive functions with tasks that are isolated and discrete from the situations in which they are needed are unlikely to improve academic skills. However, directly targeting EF-related behaviors necessary for success in academic tasks (i.e., behavioral self-regulation), and doing so in the actual situations where the behaviors are relevant, is more likely to be associated with improved academic performance.
Perceptual–Motor Training and Interventions Targeting Visual and Auditory Modalities

A number of intervention approaches have targeted gross and fine motor skills, and foundational perceptual abilities such as visual and auditory processing, in efforts to improve academic achievement and related skills (e.g., task engagement, cognitive functioning). The rationale for targeting motor skills is based on hypotheses that complex, higher-order cognitive skills, such as reading or mathematics proficiency, have their roots in brain areas and systems such as the cerebellum and the vestibular system that are responsible for coordinating movement, balance, and processing incoming sensory information. Deficits or dysfunction in these systems are thought to cause learning difficulties and disabilities, and targeting the underlying perceptual and motor skills is therefore assumed to improve academic skills (e.g., Kibby et al., 2008; Levinson, 1988; Nicolson et al., 2001; Westendorp et al., 2011).

Hypotheses such as these were most notably reflected in programs that emerged in the 1960s, such as the Doman–Delacato patterning technique (Delacato, 1963), and intervention strategies that involved gross and fine motor coordination, such as crawling, walking, hand–eye coordination, and balancing. As with many other interventions that are far removed from the academic skills they are meant to improve, evidence for the efficacy of these approaches was not observed. In 1982, the American Academy of Pediatrics condemned the Doman–Delacato technique as ineffective and potentially harmful, and the seminal meta-analysis by Kavale and Mattson (1983) observed no benefits of perceptual–motor training on students’ functioning. Nevertheless, programs continue to emerge that market kinesthetic activity and perceptual–motor training as ways to “repattern” the brain to improve learning. Brain Gym® is one such example. But, as before, reviews of research show no evidence of benefit on learning or academic achievement (Hyatt, 2007; Hyatt et al., 2009; Spaulding et al., 2010). Furthermore, well-controlled studies and meta-analyses have failed to find evidence that cerebellar functions play a causal role in learning difficulties, nor is there evidence that students with learning difficulties demonstrate deficits, structural abnormalities, or dysfunction in this area (Barth et al., 2010; Irannejad & Savage, 2012; Paulesu et al., 2014; Richlan et al., 2009).

Other intervention approaches have focused on visual or auditory processing. These methods assume that because vision and language are so heavily involved in performing academic tasks such as reading, writing, and mathematics, (1) learning difficulties must be caused by visual or auditory processing deficits, and (2) training these specific areas will result in academic skills improvement. The magnocellular theory (Stein, 2001) is one such example; it postulates that reading disabilities are caused by abnormal functioning of large neurons in the visual pathway, which causes text to appear crowded and jittery. There is evidence that individuals with reading disabilities perform more poorly on visual tasks compared to individuals without reading disabilities (Tafti et al., 2014); however, research indicates that poorer performance on visual tasks is more likely a consequence of impoverished reading experience, not a cause of it (Creavin et al., 2015; Olulade et al., 2013).
Other approaches in reading have targeted the visual modality through the use of colored eyeglass lenses or colored overlays placed over text (Irlen & Lass, 1989), vision
exercises (e.g., Badami et al., 2016), or special fonts designed for individuals with dyslexia (e.g., Dyslexie, OpenDyslexic) that are believed to improve visual processing. None of these approaches has demonstrated reliable improvements in reading. Colored overlays demonstrate no differential benefit for improving reading skills (Hyatt et al., 2009); special fonts like Dyslexie do not improve reading skills for students with reading disabilities, nor do students prefer them over typical fonts (e.g., Kuster et al., 2018; Wery & Diliberto, 2017); and vision-specific exercises or interventions do not improve reading and are not endorsed for treating reading disabilities by major pediatric and ophthalmological organizations (American Academy of Pediatrics, 2011). Vision is obviously involved in sighted reading, but as will be discussed in Chapter 4, vision is simply how information is taken in, whereas other brain areas and systems, those responsible for rapidly connecting symbols with phonological information, language, reasoning, and self-regulation, are primarily involved in reading achievement.

Similar arguments hold that auditory processing deficits explain learning difficulties (i.e., “auditory processing disorder”) and that targeting this domain will result in academic skills improvements for students with learning difficulties (e.g., Tallal, 1980). Fast ForWord (Tallal, 2000) is an example of an auditory processing program designed to improve reading and language skills; it includes audio-visual training exercises using acoustically modified language to target memory, timing, sequencing, and discrimination of sounds in oral language. However, a meta-analysis of experimental studies revealed no benefits of the Fast ForWord intervention on students’ language or reading difficulties (Strong et al., 2011).
Aptitude by Treatment Interactions

Historically, many indirect interventions for remediating academic skills have been based on the concept of the aptitude by treatment interaction (ATI). The basis for ATI is the theory that individuals differ in the ways they best learn (i.e., “aptitudes”), which were commonly thought to include visual, auditory, or kinesthetic modalities. The ATI concept emerged in the 1960s and enjoyed significant popularity during the 1970s, but its influence can still be observed today. In education, one often hears reference to children’s “learning styles,” reflecting a belief that one student may be more of a “visual” learner, while another may be more of an “auditory” learner. Another commonly held belief is that high-ability learners benefit more from instructional environments that are less structured, whereas lower-achieving students require greater structure. These notions originated with the ATI theory.

By extension, the ATI hypothesis posited that different aptitudes require different treatments. If one properly matches the correct treatment to the correct learning style (i.e., aptitude), then gains will be observed in the child’s learning and behavior. For example, an evaluation by an advocate of learning styles may suggest the student is a “visual learner” who prefers learning in the visual over the auditory modality. In light of this information, and ignoring the fact that the vast majority of learners could be classified as visual learners given how much humans rely on sight for learning, the learning styles advocate would predict that the student will succeed if their instruction is based more on a visual than an auditory approach to teaching reading. In this example, the aptitude (a strength in the visual modality) is matched to a treatment procedure (a change to a more visually oriented reading curriculum). Thus, a better ATI is sought in hopes of improving the student’s skills.

Experimental studies that examined the validity of ATIs have not yielded encouraging results. In a meta-analysis of 39 studies of ATI-based assessment and instruction,
Kavale and Forness (1987) found that (1) groups of individuals displaying a particular modality preference could not be consistently or reliably differentiated, meaning there was considerable overlap in modality preferences across groups, and therefore no evidence that one student could be reliably classified as an “auditory learner” and another as a “visual learner”; and (2) there were no benefits associated with instruction linked to the assessed modality of learning. Pashler et al. (2009) conducted an updated, comprehensive review of research on learning styles and ATIs. They determined that although children and adults may differ in how they process different forms of information (i.e., auditory, visual, kinesthetic), and will report a preference for the way they like information to be presented, there was no evidence that aligning instruction to an individual’s learning style improved learning or achievement. Pashler et al. concluded that “at present, there is no adequate evidence base to justify incorporating learning styles assessments into general educational practice” (p. 105).

THE PERSISTENCE OF THE ATI CONCEPT: MODELS OF COGNITIVE PROCESSING STRENGTHS AND WEAKNESSES
Despite a lack of evidence that a student’s learning aptitude contributes to instructional planning, significant efforts are still being made to use the ATI paradigm to explain academic failure. One type of effort to maintain the ATI concept is a group of methods that seek to identify a student’s pattern of cognitive processing strengths and weaknesses (PSW) as a basis for identifying learning disabilities and designing intervention (Flanagan et al., 2013; Hale & Fiorello, 2004; Hale et al., 2008). PSW approaches include three types: cross-battery assessment (XBA; Flanagan et al., 2013), the concordance–discordance method (C-DM; Hale et al., 2008), and the discrepancy/consistency method (D/CM; Naglieri & Das, 1997). Although the three models differ in specifics, they share a common conceptual framework (Fiorello et al., 2014; Flanagan et al., 2010, 2013).

PSW models involve administering a large set of cognitive processing tests, in addition to tests of academic achievement. PSW models assume that a student with a learning disability has strengths (i.e., average scores or better) in many cognitive domains but a weakness in a specific cognitive process that is related to the academic skill that was the reason for the referral. A statistically significant difference between the cognitive process strengths and the specific cognitive weakness is viewed as evidence for the specific nature of the learning disability, as opposed to a general difficulty in learning. At the same time, the evaluation must reveal a weakness in an academic skill that is significantly different from the cognitive strength, but not significantly different from the specific cognitive process weakness. Thus, the academic difficulty is unexpected and specific, given that it occurs in the presence of multiple cognitive strengths (hence, “specific learning disability”). Determining a student’s cognitive profile is thought to facilitate the identification of interventions (Flanagan et al., 2010; Hale et al., 2008).

Research has revealed significant problems with PSW methods of evaluation. Studies have shown that PSW methods demonstrate poor accuracy in correctly identifying students who have a learning disability (Kranzler et al., 2016a; Miciak, Fletcher, et al., 2014; Miciak, Taylor, et al., 2018; Stuebing et al., 2012). There are also psychometric concerns with PSW: evidence indicates problems with the validity and reliability of profile analysis (Fletcher et al., 2019; McGill & Busse, 2017; McGill et al., 2018), and standardized achievement tests from different batteries are not as interchangeable as PSW models assume (Miciak et al., 2015). Maki and Adams (2020) found that when presented with
case studies, school psychologists demonstrated significantly less consistency in interpreting the results of the same PSW evaluations compared to results based on a student’s responsiveness to intervention. Even more problematic is the lack of evidence that a PSW evaluation meaningfully informs intervention (Fletcher et al., 2019; Miciak et al., 2016); some have argued that attempts by proponents of PSW to connect cognitive profile assessment to intervention are perhaps more likely to be driven by confirmation bias or clinical illusion (Kranzler et al., 2016b; McGill & Busse, 2017). Although advocates of PSW have acknowledged that other sources of information are used in evaluation decisions and that sole reliance on a cognitive profile will result in inaccuracies (Fiorello et al., 2014), to date there is no convincing evidence that determining a student’s cognitive processing profile adds enough value to justify its expense in time and resources (Fletcher & Miciak, 2017). Overall, very little empirical evidence supports the utility of PSW models, which led Kranzler et al. (2016b) to conclude that PSW approaches have more characteristics of pseudoscience than of evidence-based practice.

A fundamental problem with models of PSW is the assumption that matching a student to a treatment that targets the underdeveloped or poorly functioning cognitive process will lead to improvements in academic skills. Syntheses of research evidence do not support this assumption. A meta-analysis by Burns et al. (2016) of 37 studies that investigated the use of neuropsychological assessment data to design instruction observed an overall effect size of d = 0.17 on student achievement (i.e., a small effect). On the other hand, the meta-analysis revealed that instruction based on direct assessment of relevant skills, such as reading fluency or phonological awareness, was associated with educationally meaningful effect sizes of 0.43 and 0.48, respectively. Kearns and Fuchs (2013), after systematically reviewing 50 studies of instruction that targeted a cognitive function, or that combined a focus on cognitive functions with academic components, failed to find support for its effectiveness in improving students’ academic achievement.

Make no mistake: Cognitive processing is absolutely involved in academic achievement, as it is in any complex human activity. Advances in neuroscience, driven by improvements in imaging techniques, have provided insights into the brain regions, structures, systems, and networks involved in reading skills, in terms of patterns of activation associated with both typical functioning and reading disabilities (Richlan, 2019, 2020), and into neurological differences that precede or follow reading interventions (Barquero et al., 2014; Nugiel et al., 2019; Partanen et al., 2019). This work is accumulating in mathematics (Arsalidou et al., 2018; De Smedt & Ghesquière, 2019) and written expression (e.g., Brownsett & Wise, 2010; Richards et al., 2011). However, brain imaging is not yet sophisticated enough to reliably identify learning disabilities (Seidenberg, 2017), let alone inform the design of interventions. The difficulty with efforts to identify learning difficulties through cognitive and neuropsychological assessment is the poor link between the assessment findings and the intervention strategies that flow from the assessment. Very little well-designed research on the treatment validity of cognitive and neuropsychological assessment or other such approaches can be found.
Although advocates will point to specific case studies illustrating their findings, or to small-sample examples of how cognitive processing assessment has been applied to intervention development, systematic and well-controlled research linking cognitive processing and neuropsychological methods of assessment to effective intervention has yet to appear in the literature. This may change in the future as we continue to increase our understanding of how the brain works. At present, however, basing intervention recommendations on neuropsychological processing assessments remains popular but is not scientifically substantiated.
Summary of Indirect Interventions

Overall, the best way to summarize the effects of indirect interventions on academic skills, and of academic evaluation models that focus on cognitive processes, is as follows: If the intervention does not involve teaching or practicing the reading, mathematics, or writing skills it is intended to improve, do not expect improvement in reading, mathematics, or writing. The best chance of improving academic skills is to target those skills directly.
Direct Interventions for Academic Problems

Interventions for academic problems are considered direct if the skills and behaviors targeted for change are directly involved in the performance of the skill in the natural environment. For example, in reading, interventions that specifically target the skills and knowledge sources necessary for reading words (e.g., letter–sound correspondence, phonemic blending and segmenting, spelling, orthographic knowledge, morphemes) and comprehending text (e.g., fluent reading, vocabulary knowledge, background knowledge) would be considered direct interventions. This contrasts with the indirect interventions discussed earlier, which may target sensory processes, motor skills, or cognitive processes. Additionally, instead of basing intervention on aptitude by treatment interactions, more productive interventions consider skill × treatment interactions that match an ideal strategy to a student’s specific academic needs (e.g., Burns et al., 2010; Connor et al., 2009, 2011). The remainder of this book is focused on Shapiro’s four-step model for assessing academic skills problems and developing interventions that directly target them. Most interventions will include strategies that fall within two domains:

1. Explicit instruction in relevant academic skills. Explicit instruction refers to clear, unambiguous instruction in which a teacher directly teaches and demonstrates the skill (“I do”), leads students in performing the skill (“We do”), and monitors and provides immediate feedback on student performance of the skill (“You do”). The skills targeted are those that are directly involved in and important for success in the academic skill area. Decades of research have demonstrated that explicit instruction is associated with stronger student outcomes compared to less explicit or student-centered instruction, especially with struggling learners. The benefits of direct and explicit instructional strategies have been demonstrated for improving reading (Ehri, 2020b; Stockard et al., 2018), mathematics computation (Codding et al., 2011; Gersten, Chard, et al., 2009), mathematics word-problem solving (Zheng et al., 2013), and written expression (Gillespie & Graham, 2014). Interventions that involve explicit instruction are discussed across the text.

2. Active engagement, opportunities to respond, and practice. The time during which students are actively rather than passively engaged in academic responding, or “engaged time,” has a long and consistent history of significant relationships with academic performance. Humans learn by doing and obtaining feedback on the result. Although observing is important for learning, complex skills also require opportunities to practice and learn through trial and error. Practice is essential because it provides us with experience and allows us to receive feedback, either through observing the outcome or from someone else such as a peer or teacher, on whether our response was accurate.
The association between active engagement in academic tasks and academic skill acquisition has an extensive history (e.g., Berliner, 1979, 1988; Fisher & Berliner, 1985; Frederick, Walberg, & Rasher, 1979; Goodman, 1990; Greenwood, 1991; Greenwood, Horton, et al., 2002; Hall et al., 1982; MacSuga-Gage & Simonsen, 2015; Myers, 1990; Pickens & McNaughton, 1988; Stanley & Greenwood, 1983; Thurlow, Ysseldyke, et al., 1983, 1984; Ysseldyke, Thurlow, et al., 1984, 1988). Accordingly, a number of academic interventions designed specifically to increase opportunities to respond have been developed and are discussed across this text.

In the chapters that follow, a full description is provided of Shapiro’s four-step model for assessing children referred for academic skills problems. The reader will learn how assessment can be considered in the context of the student’s environment, how it is best focused on the skills and behaviors most relevant for academic tasks, and how to identify the most relevant, evidence-based interventions from the assessment findings. Interventions are of central importance to the four-step model; not only are interventions the reason for conducting the assessment in the first place, but a student’s responsiveness to intervention provides a better understanding of their academic achievement and needs for support moving forward. In this model, and across this text, assessment and intervention are considered together in a dynamic manner.
CHAPTER 2
Choosing Targets for Academic Assessment and Intervention
When a child is referred for academic problems, the most important questions facing the evaluator are “What do I assess?” and “What skills or behavior(s) should be targeted for intervention?” As simple as these questions seem, the correct responses are not as obvious as one might think. For example, if a child is reported to have difficulties in sustaining attention to task, it seems logical that one would assess their on-task behavior and design interventions to increase task engagement. However, as we discuss later in this chapter, the literature suggests that this may not always be the most effective approach to solving this type of problem. Similarly, if a child is not doing well in reading, one should assess their reading skills to determine whether the selected intervention is effective. But reading is an incredibly complex skill consisting of many foundational subskills. What is the most efficient way to determine (1) the root cause of the problem, (2) the most important skill to target with intervention, and (3) the skill with which to monitor student progress?

The selection of behaviors for assessment and intervention is a critical decision in remediating academic problems. The complexity of identifying target skills and behaviors for assessment has long been reflected in the literature on behavioral assessment. For example, Evans (1985) suggested that identifying targets for clinical assessment requires understanding the interactions among behaviors, not just considering specific behaviors. Kazdin (1985) likewise pointed to the interrelations of behaviors, which suggests that a focus on single targets for assessment or remediation would be inappropriate. Weist et al. (1991) noted that for children, in particular, there is limited use of empirically based procedures that incorporate their perspective in the selection of appropriate targets for intervention. Nelson (1985) observed that when a child presents several disruptive behaviors, the one chosen for intervention may be the behavior that is the most irritating to the teacher or causes the most significant disruption to other students. Although intervention may temporarily reduce that behavior, before long, it may simply be replaced by another disruptive behavior that serves a similar purpose (i.e., behavior function), such as obtaining attention or escaping task demands. Determining and addressing the root cause of the behavior may
have broader-reaching and more sustainable benefits for the student and the instructional environment. Academic skills problems can be viewed similarly: as a complex interaction of skills and variables in the environment that requires careful assessment to determine the cause of the problem and identify the ideal skills to target through intervention.

In this chapter, we discuss a critical starting point of an academic assessment: identifying skills and behaviors that have high utility for improving students’ academic difficulties. These skills and behaviors are targets for assessment and intervention. The assessment framework emphasizes the consideration of (1) the student’s academic environment, (2) academic skills relevant to the referred problem, and (3) the student’s learning-related skills and behaviors. This model differs somewhat from certain school psychology practices in assessing learning and academic difficulties that often fail to adequately consider the instructional environment and tend to emphasize the assessment of cognitive processes and intellectual ability. By considering each student’s unique needs, their relative strengths and weaknesses in skills directly relevant to achievement, and the interaction between the student and the environment, the model described here is a way to assess and support all students. We begin with the assumptions and principles that guide the assessment framework described in this text.
ASSUMPTIONS UNDERLYING THE SELECTION OF SKILLS AND BEHAVIORS FOR ASSESSMENT

Lentz and Shapiro (1985) listed several basic assumptions in assessing academic problems. Each assumption is consistent with a behavioral approach to assessment and recognizes the important differences between assessment for academic difficulties and assessment for behavioral–emotional problems (in which behavioral methods are typically used). Many of these assumptions are as accurate today as they were in 1985. However, they should be considered in light of contemporary evidence on how reading, mathematics, and writing skills develop; on related difficulties; and on advances in assessment. Each of the assumptions described by Lentz and Shapiro (1985) is presented here as it appeared in previous editions of the text, followed by comments on its relevance in view of contemporary evidence.

1. Assessment must reflect an evaluation of the behavior in the natural environment. Behavioral assessment emphasizes the need to collect data under conditions that most closely approximate the natural conditions under which the behavior occurred. A child can perform academically in many formats, including individual seatwork, teacher-led small-group activities, teacher-led large-group activities, independent or small peer groups at learning centers, teacher-led testing activities, cooperative groups, peer-tutoring dyads, and so forth. Each of these instructional arrangements may result in differential academic performance of the same task. Comment: This notion is as true today as it was in 1985.

2. Assessment should be idiographic rather than nomothetic. In assessment, the idiographic perspective focuses on the individual and the unique factors, circumstances, and contingencies specific to that person that result in the individual’s present performance and behavior. Changes in an individual’s performance are compared to their prior performance. In contrast, a nomothetic perspective assumes some common factors and variables influence outcomes, and performance is compared to that of a group. The concerns that
often drive the assessment process are the identification and evaluation of potential intervention procedures that may assist the remediation process. In assessing academic skills, it is important to determine how the targeted student performs in relation to a preintervention baseline rather than to normative comparisons. In this way, any changes in performance following an intervention can be observed. Although normative comparisons are involved in eligibility decisions and setting goals, intraindividual rather than interindividual comparisons remain the primary focus of the direct assessment of academic skills.

Comments: A contemporary perspective on academic skills assessment and intervention involves both nomothetic and idiographic approaches. Longitudinal research studies have identified foundational language, literacy, and numerical skills and knowledge sources that are consistently predictive of children’s academic skills development and, when absent, explain students’ academic difficulties. Research has also identified the instructional characteristics and strategies that tend to be effective for most students, and performance levels that, when achieved, are predictive of subsequent success (i.e., so-called “benchmarks”) and therefore can serve as goals. These are nomothetic perspectives, but they can help inform assessment and intervention for individual students. At the same time, what is true of a group is not always true for an individual. Thus, an idiographic perspective considers variables unique to an individual student and provides a fuller understanding of the student’s academic difficulties and what interventions will be most effective. Assessment should consider the individual student’s unique set of skills, language development, the home and instructional environments to which they have been exposed, and the variables unique to the student or the setting that dictate which interventions may be more effective. This perspective also involves evaluating a student’s growth rate compared to the student’s previous levels of performance when examining the effects of an intervention. Therefore, rather than a dichotomized view that academic assessment should only be idiographic, a more contemporary perspective considers both idiographic and nomothetic approaches to gathering evidence, interpreting it, and identifying interventions (Greulich et al., 2014; Hitchcock et al., 2018; Lyon et al., 2017).

3. What is taught and expected to be learned should be what is tested. One of the problems with traditional norm-referenced testing, as noted in Chapter 1, is the potential lack of overlap between the instructional curriculum and the content of achievement tests (Bell et al., 1992; Good & Salvia, 1988; Jenkins & Pany, 1978; Martens et al., 1995; Shapiro & Derr, 1987). In the behavioral assessment of academic skills, it is important to establish a significant overlap between the curriculum and the test. Without such overlap, it is difficult to determine whether a child’s low performance on a test was due to inadequate mastery of the curriculum or to a lack of instruction in the material covered by the test.

Comments: Compared to the 1980s, there is now a deeper understanding of academic skills development and difficulties, as well as greater availability of tests helpful in measuring them.
Recall the discussion in Chapter 1 regarding test and curriculum overlap: A lack of direct correspondence between the specific items on a test and their occurrence in the curriculum of instruction is not as problematic as once thought, especially in some skill areas (e.g., reading). Rather than expecting one-to-one test–curriculum overlap, the critical aspect of identifying useful assessment measures is that they measure skills that are highly relevant to the development (or difficulty) of the academic skill in question. Later in this chapter, we refer to these skills as keystones of academic achievement.
4. The results of the assessment should be strongly related to planning interventions. A primary purpose of any assessment is to identify those strategies that may be successful in remediating the problem. When assessing academic skills, it is important that the assessment methods provide some indication of potential intervention procedures. Comment: This statement is as true now as it was in 1985, and it is a central theme of this text.

5. Assessment methods should be appropriate for continuous monitoring of student progress so that intervention strategies can be altered as indicated. Because the assessment process is idiographic and designed to evaluate behavior across time, it is critical that the measures employed be sensitive to change. Indeed, whatever methods are chosen to assess academic skills, these procedures must be capable of showing behavioral improvement (or decrements), regardless of the type of intervention selected. To determine whether the chosen intervention is effective at improving a child’s math computation (e.g., single-digit subtraction), the assessment method must be sensitive to even small fluctuations in the student’s performance. It is also important to note that, because of the frequency with which these measures are employed, they must be brief, repeatable, and usable across types of classroom instructors (e.g., teachers, aides, peers, parents).

Comments: The question of whether to monitor progress with a measure of discrete skills specifically targeted in instruction (as in the single-digit subtraction example above) has been a matter of debate. Academic skills are interrelated, and academic skills problems rarely represent difficulties with a single, discrete skill. Scholars have effectively argued for the value of monitoring progress with measures of “general outcomes” that are not always immediately sensitive to subtle changes in performance, but over time are highly indicative of overall achievement and real, substantive improvement in the overall academic domain (L. S. Fuchs & Deno, 1991). Measures of general outcomes are often more practical, but specific subskill measures are highly sensitive to subtle or immediate changes in achievement. There are many occasions in which a measure of a specific skill complements information provided by a general outcomes measure (VanDerHeyden et al., 2018). We discuss situations in which both types of measures are appropriate in Chapter 7.

6. Selected measures need to be based on empirical research and have adequate validity. Like all assessment measures, methods used to conduct a direct assessment of academic skills must meet appropriate psychometric standards. From a traditional perspective, this would require that the measures display adequate test–retest reliability and internal consistency, sufficient content validity, and demonstrated concurrent validity. In addition, because they are designed to be consistent with behavioral assessment, the measures should also meet standards of behavioral assessment, such as interobserver agreement, treatment validity, and social validity. Although there have been substantial research efforts to provide a traditional psychometric base for direct assessment measures (e.g., Shinn, 1988, 1998), there have been few efforts to substantiate the use of the measures from a behavioral assessment perspective (Lentz, 1988); however, see Derr and Shapiro (1989) and Derr-Minneci and Shapiro (1992).
Comments: These notions are as true now as they were in 1985, but there are still areas in which publishers need to improve their reporting of the technical properties of their measures. Publishers routinely report reliability and validity but rarely interrater
agreement (i.e., whether two different raters agree on their scores for the same student) and social validity (i.e., whether users of the test scores believe the test measures skills that are relevant and important for the intended use). Publishers also do not routinely report metrics pertaining to how readily evaluators can be trained to administer and score measures with fidelity. These aspects are important; studies have observed that trained evaluators are highly prone to administration and scoring errors (Reed & Sturges, 2013), which directly affect the validity of the data obtained from an assessment.

7. Measures should be useful in making many types of educational decisions. Any method used for assessing academic skills should contribute to different types of decisions (Salvia & Ysseldyke, 2001). Specifically, the assessment should be helpful in screening, setting individualized education plan (IEP) goals, designing interventions, determining eligibility for special services, and evaluating special services. Comment: This is as true today as it was in 1985.

In summary, considering both the sound reasoning of Lentz and Shapiro (1985) and the evidence that has accumulated in the 35+ years since, this text offers a slightly refined view of the critical keys to selecting target skills (and, by extension, measures) for assessing academic problems. This text emphasizes the importance of choosing measures and assessment approaches that (1) are relevant to skill development and, when the skills are lacking, to skill difficulties; (2) are relevant to what students have been taught and are expected to know; (3) meaningfully inform the identification or development of intervention strategies; (4) meet appropriate technical psychometric standards; (5) are sensitive to meaningful changes in the academic skill area of interest; and (6) consider both the academic environment and individual skills in the assessment process.

The model described in this text is built around the core idea that the best targets for assessment are the skills that provide the best targets for intervention. In other words, the assessment focuses on the skills and variables that are most relevant for academic achievement and most likely the cause of the student’s academic difficulties. This model, displayed in Figure 1.1 (see Chapter 1), was initially conceptualized by Shapiro and Lentz (1985, 1986), and was later modified by Shapiro (1989) in the first edition of this text. It was refined over the years in subsequent editions. The approach combines components from several other models of curriculum-based assessment. It involves a four-step process: (1) assessment of the instructional environment, (2) assessment of the student’s relevant academic skills and their instructional placement, (3) modifying instruction, and (4) monitoring progress. An examination of the literature from related fields (cognitive psychology, educational psychology, applied behavior analysis, and special education) provides significant support for selecting targets for evaluation both in the academic environment and specific to the student.
SOURCES OF INFLUENCE FOR THE MODEL

Curriculum‑Based Assessment

This is a curriculum-based assessment (CBA) model; it is designed to assess individuals who have deficits in academic skills using measures and assessments grounded in the content that students are taught. This means that the procedures are relevant for pupils with difficulties in reading,
mathematics, and writing. Students of all ages, including secondary-age students with difficulties in these areas, can be assessed using CBA. Students with academic problems are typically first identified, and thus more often initially assessed, in the elementary grades. However, late-emerging academic difficulties certainly do occur and may be related to previously unidentified difficulties or underdeveloped basic skills that surface as expectations and task demands increase across grade levels. For example, low achievement in social studies may be due to problems reading complex and multisyllabic words, or to poor reading comprehension. Or problems with algebra may result from slow and inaccurate mathematics computation skills. Academic difficulties among students in secondary grades may also be due to declining motivation or emotional problems, and CBA can be a way of ruling out difficulties in basic academic skills as a reason why they are struggling. In short, CBA is a relevant method of assessment for students across grade levels.

The data derived from CBA are intended to help one make decisions regarding the variables that contribute to academic performance. A significant assumption of CBA is that these variables lie both within the individual skills of the student and within the academic environment. CBA can contribute to a number of educational decisions for students with academic difficulties. It can help:

1. Determine reasons for a student’s academic difficulties.
2. Determine whether a student is accurately placed in curriculum materials.
3. Assist in developing strategies for remediation of academic problems.
4. Suggest changes in the instructional environment that may improve the student’s performance.
5. Provide a means for setting short- and long-term goals for students in special education programs.
6. Indicate the skills and methods for monitoring progress and performance of students across time.
7. Provide an empirical method for determining when an intervention is effective or not.
8. Make the assessment relevant to what the child has been expected to learn in the curriculum.
9. Provide data as part of evaluations that determine a student’s eligibility for special education or help in identifying an ideal instructional setting.
Behavioral Psychology and Applied Behavior Analysis: An Empirical, Hypothesis‑Driven Approach to Assessment

Another influence on Shapiro’s model of academic assessment comes from applied behavior analysis. In this area, the use of functional analysis for identifying the variables in the environment that occasion, reinforce, and maintain behavior has been highly influential. Functional analysis represents a hypothesis-driven approach to empirically determining what variables are motivating and maintaining a behavior. Identifying the variables controlling the target behavior provides a clear route to modifying that behavior, and subsequent measurement can determine whether the approach is successful. Functional analysis has a long history (e.g., Ferster, 1965; Iwata et al., 1982), and many empirical investigations have demonstrated the utility of this methodology. In general, the method begins with a problem identification interview with the referral source
(teachers, parents, or caretakers), followed by data collection to generate a set of hypotheses about why the challenging behavior is occurring, which is referred to as the function of the behavior (e.g., seeking attention from adults or peers, accessing a preferred item, avoiding an aversive task or request). The hypotheses are then systematically tested in a controlled setting using a set of conditions in which the variables are manipulated, such as delivering attention or removing a task demand when the behavior occurs. Data are graphed and examined to determine which variables are most likely to elicit the target behavior. A relevant treatment procedure is developed that seeks to eliminate reinforcement for the challenging behavior while teaching and reinforcing positive, prosocial behaviors that achieve the same function. If the analysis is accurate, the aberrant behavior should be significantly reduced or eliminated once the appropriate treatment procedure is implemented.

A notable example of the application of this process is provided here. In working with a 12-year-old student with a long history of severely disruptive behavior, Dunlap et al. (1991) demonstrated the relationships between the student’s school curriculum and her disruptive behavior. Following a descriptive analysis and data-collection process, four hypotheses were generated: (1) The student demonstrated more appropriate behavior when engaged in gross motor as opposed to fine motor activities; (2) the student demonstrated more appropriate behavior when fine motor and academic requirements were brief, as opposed to lengthy; (3) the student demonstrated more appropriate behavior when engaged in functional activities with concrete and preferred outcomes; and (4) the student demonstrated more appropriate behavior when she had some choice regarding her activities. In the initial phase of the study, each of these hypotheses was evaluated systematically during 15-minute sessions using materials taken directly from the classroom curriculum. Results of the first hypothesis test showed that the student displayed no disruptive behavior and consistently high levels of on-task behavior under gross motor conditions compared to fine motor activities. Data from the second hypothesis test showed substantially better performance under short versus long tasks. The third hypothesis test revealed that she did much better when the task was functional. The results of the final hypothesis test showed superior performance when the student was provided a choice of activities. Thus, the analysis of the student’s performance revealed that her best performance would occur using functional, gross motor activities of short duration when she had a choice of tasks.

Following the assessment phase, the student’s curriculum was revised to incorporate the variables associated with high rates of on-task behavior and low rates of disruptive behavior. The results were dramatic and immediate. Once the intervention began, classroom disruption occurred only once across 30 days of observation, and improvements were observed up to 10 weeks later. The student also showed substantial increases in appropriate social behavior and decreases in inappropriate vocalizations.

Functional analysis and functional assessment procedures influenced Shapiro’s model of academic assessment by providing an idiographic, empirical, evidence-based approach to evaluating the source of students’ difficulties and the variables maintaining them.
Assessments and interventions are directly linked and focus on skills that are causally related to students’ academic difficulties. Considerable attention is paid to the variables in the student’s environment that support learning, such as the contingencies in a classroom that motivate students, the rules and expectations that teach and communicate expected behaviors, and instruction that teaches behaviors that facilitate academic success.
Cognitive‑Behavioral and Educational Psychology

Other influences on Shapiro’s model of academic skills assessment come from cognitive-behavioral and educational psychology. Behavioral self-regulation refers to a set of acquired skills involved in intentionally activating, directing, sustaining, and controlling one’s attention, emotions, and behaviors for a given situation (Morrison et al., 2010; Schunk & Zimmerman, 1997). Self-regulation skills are critical for accomplishing complex academic tasks, which often require sustained attention, determining appropriate solutions, reflecting on and monitoring understanding, and detecting errors. Thus, students’ individual differences in behavioral self-regulation have broad implications for their subsequent academic achievement (Blair & Razza, 2007; McClelland et al., 2006; Ursache et al., 2012). Self-regulation strategies to teach and support the metacognitive skills required by complex tasks have been successfully integrated into interventions in reading (Joseph & Eveleigh, 2011), written expression (Graham et al., 2012), and mathematics (L. S. Fuchs et al., 2003).

In an academic context, behavioral self-regulation affects the extent to which students maintain their attention to instruction and persist with academic tasks, especially when activities are challenging or undesirable. We refer to this as academic engagement, which is also called “on-task” behavior. Academic engagement is a critical learning-related skill. Attention and active engagement represent half of Dehaene’s (2020) four crucial “pillars” of human learning. Without sustained attention to a teacher or a task, learning does not occur. Academic engagement can be behaviorally defined, observed, and measured. Although students demonstrate individual differences in their ability to self-regulate their behavior and sustain their engagement in academic tasks, academic engagement is both highly influenced by the environment and highly responsive to intervention.
A Keystone Behavior Perspective

The academic assessment model described in this text also draws from the keystone behavior perspective in identifying assessment and intervention targets (Ducharme & Shecter, 2011; Nelson & Hayes, 1986). Keystone behaviors refer to those that are relatively straightforward to define and observe, such as engagement (i.e., on-task behavior), following directions, or communication, but that have broad and widespread benefits for an individual and their context (Barnett, 2005). Keystone behaviors often represent skills that are critical to success in current and future environments, are typically incompatible with maladaptive and antisocial behaviors (e.g., it is difficult to be highly engaged with a task and disruptive at the same time), and, when improved, can positively influence other behaviors and outcomes, interpersonal relationships, and the classroom environment as a whole (Barnett, 2005). The importance of keystone behaviors for promoting achievement and success in multiple areas, combined with their relatively simple nature, makes them highly attractive targets for assessment and intervention.

Researchers have examined sets of keystone behaviors that encompass a wide range of behaviors important for school success. The developers of direct behavior ratings (Chafouleas, 2011), through a systematic program of research, identified academic engagement (i.e., on-task behavior), respect (i.e., following directions), and the absence of disruptive behavior as encompassing the primary behaviors involved in school success. Similarly, Ducharme and Shecter (2011) identified on-task behavior, compliance (i.e., following directions), social skills (positive peer interactions), and communication (i.e., seeking help or attention in appropriate and prosocial ways) as critical keystone behaviors for students.
Two types of keystone behaviors stand out as particularly relevant for academic skills assessment: academic engagement (i.e., on-task behavior) and disruptive behavior (i.e., behaviors that interfere with learning or distract the target student or others in the environment). These two keystone behaviors capture the intersection between behavior and learning.
Why Include Academic Engagement as a Keystone Behavior?

Learning requires sustained attention to task, which makes academic engagement one of the most important behaviors for learning. The effect of instruction on academic improvement is mediated by students’ academic engagement (Greenwood, 1996; Greenwood et al., 1994). Academic engagement can be improved through intervention and changes to the academic environment, which in turn improves achievement (DiGangi et al., 1991; DuPaul et al., 1998; Prater et al., 1992; Wood et al., 1998). Improving academic engagement can also lead to broader positive effects for a student and the instructional setting. Fewer disruptions to instruction occur when students are more engaged. Teachers value students’ skills in self-control, including attention and following directions, as some of the most important characteristics for school success (Lane et al., 2004). Greater student academic engagement is associated with positive teacher–student relationships (Hughes & Kwok, 2007), which in turn are predictive of stronger student achievement and motivation (Birch & Ladd, 1998) and can serve as powerful protective factors for at-risk students (Liew et al., 2010; O’Connor & McCartney, 2007; Silver et al., 2005).
Why Include Disruptive Behavior as a Keystone Behavior?

Although it overlaps with academic engagement, including disruptive behavior as a keystone captures an additional aspect of student behavior that not only impacts the student’s own learning, but also disrupts the larger environment. We define disruptive behaviors as those that interfere with and distract the student or others in the environment. Thus, they extend beyond a lack of engagement. Intervention studies have indicated that problem behaviors are one of the primary variables related to students’ inadequate response to interventions (Al Otaiba & Fuchs, 2002; Nelson et al., 2003). This should be clear to anyone with even minimal experience working with students: disruptive behavior not only keeps a student off-task, but it also distracts other students and often requires the teacher to pause instruction. Multiple episodes of disruptive behavior affect the consistency and quality of instruction; thus, very little learning occurs when a student is disruptive.

In summary, academic engagement and disruptive behavior represent two keystone behaviors particularly relevant for academic assessment and intervention. Although commonly applied to social and behavioral skills, a keystone approach is also relevant for considering academic skills difficulties. Later in this chapter we extend the keystone approach to understanding academic skills difficulties.
IDENTIFYING TARGETS FOR ASSESSMENT AND INTERVENTION IN THE ACADEMIC ENVIRONMENT

The instructional environment is not often adequately considered when a child is referred for academic difficulties. This is unfortunate because research has demonstrated that reasons for a child’s academic failure may reside, at least in part, in the instructional
environment (Lentz & Shapiro, 1986; Thurlow et al., 1993; Ysseldyke et al., 2003). If a child fails to master an academic skill, one possible cause that must eventually be ruled out is that the problems are the result of inadequate instruction or other variables in the classroom that impede learning. In Shapiro’s model, assessing the academic environment is the first step. The reference to the environment is deliberate because the assessment considers multiple variables in the academic setting rather than just the classroom arrangement. The variables that the evaluator considers as targets for assessment are those that have been shown to facilitate learning and, more importantly, can be targeted through intervention.
Academic Engaged Time and Student Opportunities to Respond

Considerable attention has been paid to the relation between the time that students spend engaged with learning activities and academic performance (Caldwell et al., 1982; Carroll, 1963; Goodman, 1990; Karweit, 1983; Karweit & Slavin, 1981). Studies indicate that the link between a teacher’s instruction and students’ skill improvement is made possible through students’ active engagement in academic tasks (Greenwood, 1996; Greenwood et al., 1994). One of the most influential projects that examined relationships between engaged time and academic achievement was the Beginning Teacher Evaluation Study (BTES; Denham & Lieberman, 1980). Observations were conducted across the entire instructional day in second- and fifth-grade classrooms over a 6-year period. Data were collected on the amount of time allocated for instruction, how the allocated time was actually spent, and the proportion of time that students spent actively engaged in academic tasks within the allocated time. Subsequent analyses of data from the BTES found that time allocated for instruction, engaged time, and actual time spent in instruction were significantly and consistently predictive of academic achievement (Brown & Saks, 1986; Coatney, 1985). Thus, being actively involved in learning tasks and direct instruction increases the likelihood that learning occurs.

We refer to active engagement as the times when an individual is demonstrating an action related to the learning activity, such as reading, writing, completing mathematics problems, answering a question, explaining a problem solution, or engaging in a discussion. In this way, active engagement is distinguished from passive engagement, which refers to behaviors such as listening to instruction, watching a demonstration, or listening to a peer answer a question. Passive engagement is important and often represents one’s exposure to a new skill. For instance, we often listen to someone describe a task and watch as they model it before trying it ourselves. However, active engagement offers the strongest opportunities for learning in most situations (Dehaene, 2020). There are numerous examples from research. Doabler et al. (2019) observed stronger mathematics outcomes when students were offered three practice opportunities following every teacher demonstration of a mathematics concept, compared to only one practice opportunity. Wanzek et al. (2014) found that students’ reading outcomes were stronger the more time they spent engaged in reading print. Establishing and maintaining stronger academic engagement during challenging tasks improves skill acquisition and academic performance (DiGangi et al., 1991; Prater et al., 1992; Wood et al., 1998), and for students with academic and behavior difficulties, interventions that improve academic engagement are associated with improved achievement (DuPaul et al., 1998).

One of the most powerful and consistently recommended ways to increase student engagement is to establish an instructional environment that results in a high number of
opportunities to respond (OTR; Delquadri et al., 1983; Greenwood, Delquadri, et al., 1984). OTR refers to opportunities that students are provided to actively respond to academic material and requests (as opposed to passively attending). An extensive evidence base reveals that instructional settings or activities with greater OTR are associated with higher student engagement and stronger student achievement (for research reviews, see Common et al., 2020; Leahy et al., 2019; MacSuga-Gage & Simonsen, 2015; Sutherland & Wehby, 2001). The recognition of the importance of engaged time and opportunities to respond has coincided with a number of studies examining the levels of student engagement in general and special education classrooms. Several scholars have observed very low amounts of time during which students were directly engaged with academic instruction (Hayes & Jenkins, 1986; Kent et al., 2012; Leinhardt et al., 1981; Swanson et al., 2016; Vaughn et al., 2002; Wanzek et al., 2014). Several alarming examples are worth noting. In their review of research, Vaughn et al. (2002) found that in classrooms serving students with learning disabilities or behavior disorders, students spent very little time actually reading (more time was spent waiting). Observing kindergarten students at risk for reading difficulties in general education reading instruction, Wanzek et al. (2014) were forced to summarize the average amount of time students spent responding to print in seconds per day, rather than minutes or hours.

The evidence reviewed so far should make it apparent that student engagement and the number of OTR in the classroom are key targets for assessment. The importance of considering OTR is that it situates student engagement as a product of a strong instructional environment. Opportunities is the operative word. Evaluating OTR in the classroom involves recognizing that how instruction is presented, the nature of the activities, and the teacher's behaviors play an important role in student engagement and, ultimately, student achievement. Over the years, observational codes have been designed to collect data on classroom instruction, student engagement, and OTR. These measures are described in more detail in Chapter 3. When assessing academic skills, it can be helpful for measures to show the level of active student responding, rather than simply measuring on-task time alone. Two codes that provide such a variable and are useful for classroom observation are the Behavioral Observation of Students in Schools (BOSS; Shapiro, 1996b, 2003a) and the State–Event Classroom Observation System (SECOS; Saudargas, 1992). Furthermore, it is also possible that approximations of engaged time can be obtained by combining observations on interrelated behaviors that together represent academic engaged time. The observational systems described here take this approach by collecting data about a child's academic behavior from various sources (teacher interview, student interview, direct observation, permanent products) and combining these to determine the student's level of academic response. Detailed discussion of the use of these codes is provided in Chapter 3.
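To make the measurement concrete, the sketch below shows one way interval-observation data might be summarized into an engagement percentage and an OTR rate. It is a minimal illustration under assumed conventions, not the BOSS or SECOS scoring procedure; the interval length, behavior codes, and variable names are hypothetical.

# Minimal sketch: summarizing classroom observation intervals.
# Hypothetical 15-second intervals; each records whether the student was
# actively engaged ("AET"), passively engaged ("PET"), or off-task ("OFF"),
# and how many opportunities to respond (OTR) the teacher provided.

intervals = [
    {"code": "AET", "otr": 2},
    {"code": "PET", "otr": 0},
    {"code": "AET", "otr": 1},
    {"code": "OFF", "otr": 0},
    {"code": "AET", "otr": 3},
    {"code": "PET", "otr": 1},
]

interval_seconds = 15
observed_minutes = len(intervals) * interval_seconds / 60

active_pct = 100 * sum(i["code"] == "AET" for i in intervals) / len(intervals)
passive_pct = 100 * sum(i["code"] == "PET" for i in intervals) / len(intervals)
otr_per_minute = sum(i["otr"] for i in intervals) / observed_minutes

print(f"Active engagement:  {active_pct:.0f}% of intervals")
print(f"Passive engagement: {passive_pct:.0f}% of intervals")
print(f"OTR rate:           {otr_per_minute:.1f} per minute")

A summary like this keeps active responding separate from passive attending and expresses OTR as a rate, which is the form in which the research cited above typically reports it.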
Explicit Instruction

Explicit instruction refers to direct, unambiguous, systematic, and supportive teaching (Archer & Hughes, 2011). Other terms are synonymous with explicit instruction; the term direct instruction is often used to refer to explicit instruction generally, whereas the same term when capitalized (Direct Instruction) refers to specific explicit instruction procedures and lessons in reading and mathematics developed by Carnine and colleagues (Carnine et al., 2017; Stein et al., 2018). Explicit instruction can be contrasted with
discovery learning approaches in which students are provided materials and encouraged to explore and learn independently, with no (or minimal) teacher instruction or guidance. At its core, explicit instruction involves teaching with clear, simple, and precise language. It involves extensive modeling and demonstrations of a skill with concrete examples, prompts, and supports (i.e., so-called "scaffolds") for students as they practice the skill. It provides affirmative feedback for correct responses and immediate corrective feedback for incorrect responses. The phrases model–lead–test and I do, we do, you do capture how explicit instruction is carried out: the teacher models the skill (model; "I do"), the teacher and the students practice the skill together (lead; "We do"), and the teacher closely monitors (and provides feedback) while the students practice the skill on their own (test; "You do"). Explicit instruction is a behavioral approach to teaching that follows the antecedent–behavior–consequence sequence. Academic requests or questions (antecedents) are followed by a student response (behavior), which is immediately followed by affirmative or corrective feedback from the teacher (consequence). Most central to this process is that instruction is clear, nothing is left to guesswork or chance, and the teacher's feedback immediately follows students' responses. Additional discussion and examples of explicit instruction are provided in Chapter 5.

Across the past 40+ years, an extensive research base has established the benefits of explicit instruction on student learning across academic domains and settings. The evidence is clear—stronger learning outcomes are observed when instruction involves direct and explicit methods, compared to instruction that is less explicit or involves discovery learning approaches. The work is extensive, and we cite it here to provide the reader with a sense of the breadth and depth of the evidence base. The benefits of explicit instruction have been demonstrated for:
• Students in general education (Adams & Engelmann, 1996; Hattie, 2009; Rosenshine, 1979, 1981, 2009; Stockard et al., 2018)
• Students with learning difficulties, and in special education and intervention settings (Christenson & Ysseldyke, 1989; Gillespie & Graham, 2014; Gersten, Chard, et al., 2009; Swanson, 2000; Swanson & Hoskyn, 1998; Vaughn et al., 2000; White, 1988)
• Reading (Ehri, 2001; Gersten, Haymond, et al., 2020; Solis et al., 2012; Stockard et al., 2018; Swanson, 1999; Vaughn et al., 2000)
• Mathematics (Gersten, Beckmann, et al., 2009; Kroesbergen & Van Luit, 2003; National Mathematics Advisory Panel, 2008; Stockard et al., 2018)
• Writing (Gillespie & Graham, 2014; Graham et al., 2012; Vaughn et al., 2000)

In short, explicit instruction is key to student achievement. It is particularly important for students with academic difficulties, thus making it a central target when assessing the instructional environment. There are observational tools specifically designed to collect data on explicit instruction, and their use in this role is discussed in Chapter 3.
Feedback

Feedback refers to the information we receive about an action. In an academic context, feedback most typically refers to how students know whether a response was correct or incorrect. Feedback is an inherent part of explicit instruction, but it can occur across
instruction and practice situations (not just the introduction of new skills or material). Because feedback is so important for human learning, and practitioners often overlook its importance, it deserves its own section. Human beings learn through feedback. When acquiring a skill or new knowledge, affirmative feedback tells us when we have demonstrated the skill correctly, and corrective feedback indicates when we have made an error. Imagine that you are learning to play golf. An instructor can show you how to stand, hold the club, position the ball, the appropriate length of your backswing, how to turn your hips, and so on, but critical information comes from hitting the ball and seeing where it goes. The ball flying straight and true affirms your actions, whereas the ball flying directly to the left suggests that something about your swing was wrong. The instructor can augment this feedback by explaining what you did that was correct or incorrect. Now let’s say you are hitting golf balls in the dark and have no idea where your shots went. Because you have no idea whether your shot was good or bad, you have no basis for knowing if you need to adjust your swing, or how to correct it. You would probably give up or, at worst, develop terrible swing habits, thinking that your shots were flying straight. The same is true for students learning a new skill. A beginning reader who reads the word cat correctly may receive affirmative feedback if they know the word. But learning is most efficient if a teacher or skilled reader is present to confirm that they read the word correctly with a simple “yes” or “good.” On the other hand, if the student reads cat as cut and does not recognize the error (like a golfer in the dark), there is no chance for correction and potential for the development of a misunderstanding. Feedback can come through observing confirmation of an action; for example, seeing your basketball go through the hoop, or verifying that you solved a math problem correctly because both sides of the equal sign match. However, feedback is most effective when it comes from a teacher or otherwise skilled individual and (1) indicates whether a response was correct or incorrect (with a simple “right,” “correct,” “good,” or “yes”), and (2) if your response was incorrect, that person shows you what to do differently. This is corrective feedback. Corrective feedback is critical because errors are learning opportunities. Corrective feedback must be immediate, positive, simple in its wording, and show students how to respond correctly or how to fix the error. Without corrective feedback, learning may proceed very slowly and inconsistently, or it might never occur at all. More feedback should be provided when students are in the early stages of learning because they will be prone to making more errors and have limited awareness of when they respond correctly. Therefore, it is important that teachers closely monitor students as they practice a new skill, such as reading words or completing math problems, to be able to provide affirmative or corrective feedback as soon as responses occur. For these reasons, feedback is an important target for assessment and intervention. An assessment of academic skill difficulties should ascertain the frequency of affirmative and corrective feedback that students receive during instruction, and how effective that feedback is in resolving errors.
Pace of Instruction

One aspect closely tied to student responding and engagement is the pace of instruction. Instruction that moves at a brisk pace (notice that we do not say "fast" because instruction should not be rushed) is often more engaging and results in more opportunities for students to respond to academic material. A brisk pace of instruction can be accomplished through increasing the rate of presentation, reducing wait times for responses,
decreasing pause time between tasks or requests, and reducing the amount of extraneous “teacher talk” so that student responding and practice opportunities are maximized. Improvements in instructional pace (without rushing) are associated with stronger student engagement and accurate responding (Becker & Carnine, 1981; Darch & Gersten, 1985; Rosenshine, 1979; Skinner, Fletcher, & Henington, 1996). Carnine (1976) found that students answered correctly about 80% of the time when in a briskly paced condition (12 questions per minute), as compared to answering correctly 30% of the time in a slow-rate condition (5 questions per minute). Clearly, the pace of instruction is a useful target for assessment and intervention.
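The practical impact of pacing can be seen with simple arithmetic. The sketch below is illustrative only: it combines a hypothetical 20-minute lesson length with the response rates and accuracy percentages reported by Carnine (1976).

# Illustrative arithmetic: practice opportunities and correct responses
# produced by a 20-minute lesson under the two pacing conditions reported
# by Carnine (1976). The lesson length is a hypothetical choice.

lesson_minutes = 20
conditions = {
    "brisk (12 questions/min, ~80% correct)": (12, 0.80),
    "slow (5 questions/min, ~30% correct)": (5, 0.30),
}

for label, (rate_per_min, accuracy) in conditions.items():
    opportunities = rate_per_min * lesson_minutes
    correct = opportunities * accuracy
    print(f"{label}: {opportunities} opportunities, ~{correct:.0f} correct responses")

Under these assumptions, the briskly paced lesson yields roughly 240 response opportunities and about 192 correct responses, versus 100 opportunities and about 30 correct responses in the slow condition—illustrating why pacing compounds with accuracy rather than trading off against it.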
Classroom Contingencies

Academic responding does not occur in isolation. Each response is surrounded by various stimuli within the instructional environment that can affect its likelihood of occurring, and its likelihood of occurring again in the future. Stimuli immediately preceding the academic responses (e.g., the teacher's explicit instruction, teacher modeling of the skill or concept, the student's success with a previous response, rules and expectations in effect, the amount of overall support or prompting available), stimuli preceding the response but removed in time (e.g., studying for a test the night before, the student's success with previous assignments), consequences immediately following the response (e.g., the teacher's affirmative or corrective feedback, teacher praise, peer reactions to the student's response), delayed consequences (e.g., parent feedback on work products or grades), and contingencies that compete with academic responses (e.g., off-task or disruptive behavior, a distracting or unpredictable environment) may all interact in complex ways to affect student rates of responding and academic performance. These variables can play a role in a student's academic difficulties and may be altered to improve performance. Therefore, they serve as possible targets for assessment and intervention in the academic environment.

Although there is significant support for the effects of immediate antecedent and consequent stimuli on academic responding (see Kern & Clemens, 2007), less research has examined the impact of antecedent conditions that occur well in advance of the situations in which academic responding is required but may still affect performance. Obviously, it is challenging to establish whether one antecedent is associated with an academic response if they do not occur closely together. This probably accounts for the lack of research conducted on such events. Still, it is common to encounter teachers who attribute a child's failure to perform in school to events that occurred some time ago. Furthermore, it seems logical that a student who arrives at school without breakfast, and who has been awake most of the night on their smartphone, may not perform as well as expected despite past evidence of skill mastery.

Moreover, research is accumulating on the links between internalizing difficulties (i.e., anxiety and depression) and academic difficulties. Several studies have observed links between anxiety and difficulties in reading (Francis et al., 2019; Grills-Taquechel et al., 2013) and in mathematics (Chang & Beilock, 2016). Much of the evidence suggests reciprocal, bidirectional relations between anxiety and academic performance. Although not an antecedent variable per se (in behavioral terms, anxiety probably functions more as an establishing operation), it is possible that a student's anticipation of an academic activity or class period they find difficult may elicit maladaptive levels of arousal that compromise their ability to learn, thereby serving as an antecedent. Teachers and parents can unwittingly contribute to students' academic-related anxiety with the messages they
convey about academic skill development; even caregivers' and teachers' math-related anxiety is associated with lower mathematics achievement among students over time (Beilock et al., 2010; Maloney et al., 2015). Additionally, the classroom environment can be one that is highly competitive or punitive, and could unwittingly contribute to students' anxiety related to reading, mathematics, or writing, thereby potentially affecting motivation, effort, and persistence. On the other hand, the environment could be supportive, and one that recognizes and reinforces student effort as most important (not perfect accuracy), which may reduce the possibility of academic-related anxiety. These aspects provide additional potential targets for assessment and intervention.
Establishing and Reinforcing Rules, Expectations, and Routines

Studies indicate that student behavior and achievement are better when students are more aware of the rules, expectations, and routines for the instructional setting (Oliver et al., 2011). Rules may involve the behavioral expectations of certain settings or tasks, such as when students must raise their hands to respond or the times it is OK to call out (e.g., as in choral or unison responding), if students are permitted to talk with a peer, and rules for remaining in an assigned location. Instructional routines and expectations might involve a predictable sequence of subjects and activities each day, how to transition between activities, what students can do if they finish their work early, or what to do when they need help. Clear expectations provide a structured, predictable environment for students that can reduce distractions and help focus instructional time. In addition to establishing the rules and expectations, it is equally important to (1) teach students the expectations and routines directly (using models and demonstration), and practice them to ensure that the students understand; (2) refer to those expectations and routines often; and (3) recognize and reinforce students when they follow the rules (Oliver et al., 2011). Thus, a possible target for assessment and intervention is student understanding of the rules and expectations in academic situations of interest.
Interfering Behaviors

Other important variables that affect academic performance are competing behaviors. These are events that compete for student attention and disrupt the antecedent–consequence relationships that are important for learning. For example, if a student is frequently drawn off-task by peers asking for help, the observed student may be making fewer academic responses than are desirable. Likewise, if a target student engages in high rates of contact with other students, out-of-seat or out-of-area responses, disruptiveness, or other behaviors that prevent student academic performance, it is likely that the student's academic achievement will be limited. Therefore, the assessment of competing behaviors requires a careful examination of student behaviors that typically occur in classrooms and are considered disruptive or related to poor academic performance. These would include such behaviors as being out of seat, student–student contact, talking out, physically or verbally aggressive behavior, and general inattentiveness.
Summary: Identifying Targets for Assessment and Intervention in the Academic Environment

Assessment of the academic environment involves understanding the variables that impact academic performance. Although there is a myriad of variables in academic settings that
can affect behavior and achievement, we reviewed several that are especially prominent, measurable, and likely to affect learning: active engaged time and opportunities to respond, explicit instruction, feedback, classroom contingencies, pace of instruction, competing behaviors, and rules and expectations. These variables represent key targets for assessment and intervention for a student referred for academic difficulties. Other aspects of the classroom environment, including teacher–student interactions and relationships, emotional support, and classroom organization, are critical variables that impact student outcomes (Allen et al., 2013; Pianta, Belsky, et al., 2008). Although it is unlikely that any one of these variables alone will explain most of a student's academic difficulties, some may play more of a role than others in certain situations. As is discussed in later portions of this text, environmental variables can interact with student skills in ways that have important implications for academic performance. Therefore, assessing the academic environment represents an important initial step in an academic assessment.
IDENTIFYING ACADEMIC SKILLS TARGETS FOR ASSESSMENT AND INTERVENTION

Choosing academic skills for assessment is not always as simple as it seems. Although a student may have significant problems in reading and mathematics, the complexity of both skills requires consideration of multiple factors. Therefore, the challenge for school-based evaluation professionals is to determine which areas are most relevant to assess for understanding the student's academic difficulty and what to do about it in instruction. This does not mean focusing only on one area (e.g., foundational skills), the curriculum mismatch with the student's level of functioning, or the teacher's classroom management—because all of those areas may be relevant and related to the student's difficulties. The key is to carefully identify what skills are most crucially involved in academic performance and, when absent or underdeveloped, are likely responsible for a student's difficulty in the overall academic domain. These are the skills to assess and may be the skills targeted in the intervention. By extension, this also means not using time and resources to assess skills or abilities that are less relevant and useful as intervention targets.

Understanding what academic skills to assess is made easier by understanding how reading, mathematics, and writing typically develop. Much has been learned across the past 30 years about how students achieve proficiency in reading, mathematics, and writing. This research has simultaneously made great strides in understanding how those skills break down for students with difficulties. This knowledge facilitates a more informed selection of assessment methods and tools for academic skills difficulties, better interpretation of assessment results, more targeted identification of intervention strategies, and more responsive instruction over time. For example, understanding the role of phonological awareness in reading development (and its deficits in reading difficulties) helps evaluators understand when and why to include measures of phonological awareness, what the assessment results mean for understanding a student's reading difficulty, and how the findings inform intervention development. This knowledge also helps the evaluator recognize assessment situations in which measures of phonological awareness may not be as relevant.

Historically, training programs for school psychologists and educational diagnosticians have often neglected comprehensive instruction in how academic skills develop. Instead, training has often emphasized the how-to of assessment—the procedures of test
administration and scoring—without building an understanding of why certain skills are important to assess. One of the most valuable bits of advice that I (Clemens) ever received came from Dr. Lynn Fuchs, who served on my doctoral dissertation committee when I was a graduate student. After reading my dissertation proposal, she gently recommended that I thoroughly read the theories and evidence on how reading skills typically develop, which is a basis for understanding how they break down. Her advice led me to the work of Linnea Ehri, Marilyn Adams, Keith Stanovich, David Share, Charles Perfetti, Catherine Snow, and others. It fundamentally changed how I viewed assessment and intervention. In this edition, we hope to offer readers a foundation of reading, mathematics, and writing development and some starting points for self-study.

To assist practitioners in the assessment and intervention of academic skills, I developed what I call "keystone" models of reading, mathematics, and writing. Consistent with the keystone behavior perspective discussed earlier (i.e., behaviors that have broad-reaching positive effects for students and their instructional environments), these keystone models of academics identify skills and knowledge sources that are critical to the development of academic skills proficiency and, when absent, help explain the reasons for a student's difficulties. Consequently, these keystone skills are primary targets of assessment and intervention. Readers will note that a common theme across models for reading, mathematics, and writing is the importance of fluency with basic foundational skills that facilitate learning and developing more sophisticated skills in their respective domains. Each model is presented across the following sections and is based on theory and extensive research with typically achieving students and students with academic difficulties.
Reading

The essence of reading is the comprehension of text—the ability to understand and learn from print. With this goal in mind, reading subskills (e.g., letter–sound knowledge, phonemic processing, word reading) should be viewed as skills that are influenced by and combine with language to make reading comprehension possible, not as goals in and of themselves. However, for students with reading difficulties, problems with reading comprehension are often the result of problems with underlying subskills in reading or language. Furthermore, a given student's stage of reading development makes some subskills more relevant for assessment and intervention than others. Before discussing the keystone model of reading, we discuss perspectives on which it is based.
The Simple View of Reading

The simple view of reading (SVR; Gough & Tunmer, 1986) is one of the most well-known models of reading proficiency and is shown in Figure 2.1. The SVR was developed to communicate the two primary skill and knowledge domains that contribute to reading comprehension, and to identify the two major areas in which reading difficulties arise. The SVR models reading comprehension as the product of (1) word reading (i.e., decoding; the skills needed to read words in text with sufficient accuracy and efficiency), and (2) linguistic comprehension, which involves the skills and knowledge involved in understanding spoken language, including vocabulary knowledge, syntax, inference making, and background knowledge. In other words, decoding × language comprehension = reading comprehension. This expression means that, as words are read accurately, the reader applies their knowledge of spoken language (i.e., vocabulary, syntax) to make
FIGURE 2.1. The simple view of reading (Gough & Tunmer, 1986). [The figure depicts Decoding (Word Reading) and Language Comprehension combining to produce Reading Comprehension.]
sense of the text. It is a multiplicative expression, meaning that if either decoding or language is "zero," reading comprehension does not occur. Put differently, text cannot be comprehended if words are not read accurately. Similarly, text cannot be comprehended if words are pronounced correctly but the meanings of the words, or the ideas they represent, are unknown. The SVR conceptualization has been supported across numerous studies, ages, and languages (e.g., Catts et al., 2006; Joshi, 2018; Joshi & Aaron, 2000; Kendeou et al., 2013; Kirby & Savage, 2008; Lonigan et al., 2018), including nonalphabetic languages such as Chinese (Peng et al., 2020). The SVR was not intended to be a complete view of reading. It does not imply that reading is "simple," or that decoding and linguistic comprehension domains are simplistic (indeed, the content below will demonstrate how complex these aspects are). It does not discount other influences on reading comprehension. As a model of reading, the SVR is intentionally oversimplified to illustrate the predominant skill domains involved in reading comprehension, and to communicate the essential importance of word reading for reading proficiency (Gough & Tunmer, 1986; Hoover & Gough, 1990). The model also shows the primary ways in which reading difficulties can occur: decoding difficulties, language difficulties, or both.

From an assessment standpoint, the SVR provides some excellent starting points. It illustrates how the assessment of a student referred for reading difficulties should begin by ascertaining whether the student's primary difficulties exist (1) at the word level, meaning the student experiences difficulties reading words, and by extension, text, with accuracy and efficiency; (2) in language comprehension, including inadequate knowledge of vocabulary or language skills to process spoken language; or (3) in both areas. This consideration must take the student's general level of reading development into account because the relative importance of each element as the primary driver or limiter of reading comprehension changes by level of reading development (Lonigan et al., 2018). For beginning readers, who are learning to use the alphabetic code to read words, reading comprehension is primarily influenced and limited by decoding (Storch & Whitehurst, 2002). As word and text reading skills develop, word-level skills become increasingly less primary in explaining reading comprehension. Thus, for more skilled readers, reading comprehension is primarily influenced (or limited) by language, background knowledge, and skills connecting ideas in text. Although this continuum of reading proficiency is correlated with age (i.e., younger students are more likely to experience difficulty reading words), it is important to view the continuum as based on level of reading skill, and not necessarily on age. Consider a fourth grader with significant word reading difficulties.
Their word reading difficulties are the primary explanation for their reading comprehension difficulties. Conversely, consider a second grader who has very strong word reading skills but struggles with reading comprehension. Their reading comprehension difficulties are more likely to be the result of underdeveloped linguistic skills or limited background knowledge. As such, the SVR serves as a basis for identifying overall targets for assessment and, subsequently, intervention.
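The multiplicative logic of the SVR can be written out explicitly. The following is our own illustrative rendering: the 0–1 proficiency scale and the numeric values are hypothetical teaching devices, not part of Gough and Tunmer's formulation.

% The simple view of reading expressed as a product (Gough & Tunmer, 1986),
% with decoding (D) and language comprehension (LC) imagined on a 0-1 scale.
\[ RC = D \times LC \]
% If either component is absent, reading comprehension is absent:
\[ 0 \times 1 = 0 \qquad \text{and} \qquad 1 \times 0 = 0 \]
% Partial proficiency in both areas compounds; for example,
\[ D = 0.5,\; LC = 0.5 \;\Rightarrow\; RC = 0.25 \]

The point of the illustration is simply that the two components do not add: weak decoding cannot be offset by strong language, and vice versa.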
A Keystone Model of Reading
FIGURE 2.2. The keystone model of reading. [The figure shows language—vocabulary, linguistic comprehension, and knowledge—spanning the model; phonemic awareness and letter–sound knowledge interact to support accurate and efficient word reading, which enables text reading efficiency and, ultimately, reading comprehension.]
The keystone model of reading was developed to guide assessment and intervention activities. This is not a new model of reading because it incorporates aspects of several theories and models of reading development. Rather, I think of its contribution as helping educators and school psychologists in identifying the primary “keystone” skills essential for reading development, how they work together and interact toward the development of reading comprehension, and why they serve as important possible targets for assessment and intervention. The keystone reading model is displayed in Figure 2.2. It is informed by research and theoretical perspectives such as the SVR (Gough & Tunmer, 1986); Ehri’s phase theory and perspective on orthographic mapping (Ehri, 2020b); Perfetti’s verbal efficiency theory (Perfetti, 1985), lexical quality hypothesis (Perfetti, 2007), and reading systems framework (RSF; Perfetti & Stafura, 2014); Share’s self-teaching hypothesis (Share, 1995, 2008); and connectionist models of reading (Foorman, 1994; Harm & Seidenberg, 1999; Seidenberg, 2017). Working from left to right in Figure 2.2, the model summarizes the interrelations of language and early literacy building blocks that ultimately make reading comprehension possible. Reading must be considered as a developmental sequence, not as a set of isolated skills that exist independent of each other, and not skills that are equally important all the time. Problems with any one of the skills cause problems for everything else that happens downstream. For example, problems with reading words accurately affect one’s ability to read text with accuracy and efficiency, thereby limiting one’s ability to understand what is being read. Even earlier, difficulties with phonemic awareness affect one’s ability to connect letters to sounds, and their ability to segment or blend sounds when learning to read and spell words. These difficulties with foundational skills create a chain reaction; word reading development is limited or severely impaired, and in turn, these difficulties
impair the ability to read text with fluency, ultimately impairing the ability to understand it. The effects are similar for a student with inadequate vocabulary knowledge or other problems with language in general; their word reading skills might be adequate, but their language difficulties impair reading comprehension. The goal of an assessment of reading difficulties is to identify the point in the process at which things have broken down. This may be a point where the problem seems to originate, which negatively affects the downstream development and coordination of skills that enable a student to understand print. This perspective helps the evaluator focus on the right skills. For example, consider a first grader referred for “difficulties comprehending what they read,” whose teacher indicates that the student has difficulty reading words accurately and struggles to read unknown words. Considering the SVR, the evaluator should recognize that the student’s difficulties are not really about poor reading comprehension—the student’s primary difficulties (and thus the targets of assessment) are likely with the foundational skills that make comprehension possible. Next, we walk through each aspect of the keystone model and discuss how it guides the identification of targets for assessment and intervention.
The Central Role of Language in Reading

Language spans the entire model in Figure 2.2. Written text is language in print form. In an alphabetic writing system, like English, Spanish, and many others, letters of the alphabet are a code used to represent sounds that can be combined in various ways to make words. A writing system therefore is a code to make spoken language permanent. Language, in the forms that we know it, existed long before writing systems. Recent estimates indicate that speech-based language has existed far longer than previous estimates of 200,000 years, perhaps as long ago as 20 million years (Boë et al., 2019). We evolved to process language; the ability to communicate improved chances for survival. Therefore, there was an advantage favoring genetic variations that ultimately resulted in the ability to use spoken language without needing to be taught how. In contrast, the earliest known writing systems, such as Sumerian, emerged about 5,000 years ago, and the earliest alphabetic writing systems, more recently than that. This means that spoken language has been a part of human existence for a vastly longer period of time than reading has. The human brain evolved to acquire language simply by being exposed to it. However, unlike spoken language, reading is not acquired simply through exposure. A writing system is a human creation: a coding system used to make a permanent record of spoken language. We must learn the code to be able to read words. Reading was not part of the evolution of the human brain, but the brain conveniently uses neural networks that evolved to process spoken language and symbolic information, which allows us to read text (Seidenberg, 2017). Nevertheless, there are critically important ways in which language development and proficiency play roles both in the initial acquisition of reading and in proficient reading comprehension.
"The Spark": The Interaction of Phonological Awareness and Alphabetic Knowledge, and the Start of Reading Development

Next we will work through the model in Figure 2.2 starting on the lower left, where an interaction is depicted between phonological awareness and alphabetic knowledge.
WHAT IS PHONOLOGICAL AWARENESS?
Phonological awareness (also referred to as phonological processing or phonological sensitivity) is one's perception of sounds within words and is critically tied to reading development (Bus & van IJzendoorn, 1999; Kirby et al., 2003; Melby-Lervåg et al., 2013; Wagner & Torgesen, 1987). As part of the language system, phonological processing involves listening and does not involve printed letters or words. In practice, phonological processing is often used to describe students' ability to identify words that rhyme, and their ability to isolate and manipulate (i.e., segment, blend, delete) portions of words, such as syllables. It is an umbrella term that encompasses a more sophisticated (and important) aspect called phonemic awareness, which we describe next.

WHAT IS PHONEMIC AWARENESS?
Phonemic awareness (also used with terms such as phonemic processing and phonemic sensitivity) is a more specific form of phonological awareness and refers to one's ability to perceive the smallest units of sound in speech, called phonemes. There are 44 phonemes in English, as listed in Table 2.1. These 44 phonemes can be combined in various ways, like Lego bricks, to form all 170,000+ words in spoken English. Phonemic awareness, then, refers to one's ability to perceive these small sounds within words, and phonemic processing involves the ability to do things with the sounds, such as segmenting a word into its individual phonemes (e.g., segmenting cat into /k/ . . . /a/ . . . /t/); blending phonemes together to form a whole word (e.g., blending "k . . . a . . . t" together to form cat); identifying words that have the same beginning, middle, or ending sounds; or removing or replacing a phoneme in a word to make a new word (e.g., removing /k/ from cat leaves at, changing /k/ to /b/ in cat makes bat). For simplicity, we refer to this awareness and these skills collectively as phonemic awareness.

Like phonological awareness, phonemic awareness activities do not need to involve print. But as will be discussed below, phonemic awareness integrated with letters is superior to phonemic awareness activities in isolation. This integration is the essence of phonics, an instructional approach that teaches the connections between letters and sounds and how to use that information to read words. We discuss phonics in detail later on.

So where does language proficiency fit in with phonemic awareness? The reader will notice that in Figure 2.2 there is an arrow pointing from language to phonemic awareness. Although phonemic awareness falls under the larger umbrella of language, it is better conceptualized as a metalinguistic skill because it involves recognition and attention to sounds within words as opposed to what words mean. Language proficiency plays a role in its development because the ability to discriminate phonemes likely comes through experience. The more language a child hears, the more sophisticated they become at perceiving similarities, differences, and subtleties in sounds in words. Indeed, evidence indicates positive correlations of language and vocabulary knowledge with phonemic processing among preschoolers (Lonigan et al., 2009), and that vocabulary development has a causal influence on the development of phonological and phonemic processing (Lonigan, 2007). How does this happen? Although researchers are not exactly sure, one possibility is that developing extensive vocabulary knowledge depends on hearing a lot of it, and hearing a lot of language allows the brain to perceive it on an increasingly more precise level. Greater precision in perception then may allow for a greater ability to perceive phonemes, the smallest units of speech.
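As a concrete picture of segmenting and blending as operations on phonemes (not letters), the toy sketch below uses a tiny, hand-built lookup of phoneme sequences. It is purely illustrative: real phonemic awareness tasks are spoken, and the word list and notation here are hypothetical.

# Toy illustration of phonemic segmentation and blending.
# The phoneme transcriptions are a small, hand-built lookup used only for
# demonstration; they are not a linguistic resource.

PHONEMES = {
    "cat": ["/k/", "/a/", "/t/"],
    "chat": ["/ch/", "/a/", "/t/"],
    "ship": ["/sh/", "/i/", "/p/"],
}

def segment(word):
    """Break a word into its individual phonemes (e.g., cat -> /k/ /a/ /t/)."""
    return PHONEMES[word]

def blend(phonemes):
    """Blend a phoneme sequence back into a whole word, if one matches."""
    for word, sounds in PHONEMES.items():
        if sounds == phonemes:
            return word
    return None

print(segment("chat"))                    # ['/ch/', '/a/', '/t/']
print(blend(["/k/", "/a/", "/t/"]))       # 'cat'

Note that chat has four letters but only three phonemes; the mismatch between letters and sounds is exactly what phonemic awareness tasks direct attention to.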
TABLE 2.1. The 44 Phonemes of Spoken English and Their Written Forms

Phoneme | Written forms (graphemes)* | As in . . .
/b/ | b, bb | bat, lobby
/d/ | d, dd, ed | dip, ladder, mailed
/f/ | f, ph | fin, photo
/g/ | g, gg | get, egg
/h/ | h | hip
/j/ | j, g, dge | jam, gem, fudge
/k/ | c, k, ck, ch, cc, que | cat, kit, pick, ache, occur, plaque
/l/ | l, ll | let, fill
/m/ | m, mm, mb | met, comma, numb
/n/ | n, nn, kn, gn | net, sunny, knife, gnat
/p/ | p, pp | pet, happy
/r/ | r, rr, wr | rat, hurry, wrap
/s/ | s, ss, c, sc | sat, mess, city, scent
/t/ | t, tt, ed | ten, wetter, cooked
/v/ | v | van
/w/ | w | win
/y/ | y | yet
/z/ | z, zz, s, x | zoo, buzz, busy, xylophone
/ch/ | ch | chat
/ng/ | ng, n | rang, link
/sh/ | sh, ss, ch, ti, ci | shop, passion, machine, patient, social
/th/ | th | thief, bath
/th/ | th | that, breathe
/wh/ | wh | when
/zh/ | s | vision
/a/ | a, au | at, laugh
/ā/ | a_e, ay, ai, ey, ei | bake, bay, pain, they, vein
/e/ | e, ea | bet, head
/ē/ | ee, e_e, e, ea, ey, ie, y | tree, Pete, be, leap, key, thief, city
/i/ | i | it
/ī/ | i_e, i, igh, y, ie | mine, kind, fight, cry, pie
/o/ | o, a, au, aw, ough | pot, swap, caught, claw, fought
/ō/ | o_e, oa, ou, ow | hope, coat, shout, low
/u/ | u, o | cup, oven
/ū/ | u_e | huge
/oo/ | oo | look
/oo/ | oo, u, ew | boot, flu, flew
/ow/ | ow, ou | cow, pout
/oy/ | oi, oy | coin, boy
/ar/ | ar | far
/air/ | air, ear, are | lair, pear, stare
/ear/ | ear, ere | dear, here
/or/ | or, ore, oor | for, more, floor
/ur/ | ur, ir, er, ear, or | turn, flirt, perk, heard, work

*Most common grapheme form is listed first, in bold.
One analogy I (Clemens) like to use is to think about listening to classical music played by a full orchestra. Imagine if, like me, you were a classical music novice and you heard a piece for the first time. You would likely hear it on a more holistic level. Although it might sound pleasing, and you may be able to discern the melody, overall tempo, and other salient aspects, your insight into its intricacies is limited because this type of music is new to you. However, things change the more you listen. Each time you hear the piece (or other pieces of classical music), you become more attuned to it—you begin to perceive it on a more specific and sophisticated level. You begin to discriminate more subtle aspects of the melody and structure of the piece. You can start identifying sections of the orchestra or individual instruments from the rest. While rhythm may have been less perceptible before, you can now hear that more clearly as a more sophisticated listener. Overall, rather than hearing the music more holistically, you are beginning to hear it more on a segmental basis. A music teacher can further help you hear the nuance as you listen (i.e., through instruction). Then, if you were to learn an instrument, your perception would reach even greater levels of sophistication, and greater still if you start playing in an orchestra.

Developing the ability to discriminate small, individual units of sound in phonemic awareness may work in a similar way. This developing sophistication of discriminating speech sounds facilitates the ability to perceive words as a specific and unique series of sounds, which will lead to the ability to pull apart and decompose the words by those sounds. This insight will also facilitate the child's ability to pair those individual sounds (phonemes) with printed letters (graphemes), thus forming the most essential raw material for reading words. It is partly why speech and language difficulties are often associated with lower phonological awareness and reading difficulties. It is also a reason why most difficulties in reading, particularly word-level reading difficulties and disabilities (i.e., dyslexia), can often be traced back to problems with phonological processing. But phonemic awareness is not the only important foundational skill; it interacts with another critical skill that makes word reading possible. Moreover, phonemic awareness develops further as a result of learning to read and spell words.

ALPHABETIC KNOWLEDGE AND THE IMPORTANCE OF LETTER–SOUND ACQUISITION
Alphabetic knowledge refers to knowing the names of letters of the alphabet and the sounds they represent. A term often used in reading research is the alphabetic principle, which refers more specifically to the understanding that letters represent sounds and that letters can be combined in various ways to represent pronunciations. In English, and particularly in the United States, alphabetic knowledge typically begins with learning letter names. There are many benefits to learning letter names. Foulin (2005), in a comprehensive review, concluded that learning letter names helps begin to tune the child’s phonological awareness to a more specific level. Learning letter names also helps children learn the sounds those letters make (Kim, Petscher, Foorman, et al., 2010; McBride-Chang, 1999; Piasta & Wagner, 2010; Treiman & Kessler, 2003; Treiman & Rodriguez, 1999) because (1) most letter names provide clues to their sounds (e.g., m, s, t, b, d, f, and most others), and (2) learning is often aided by connecting new information (e.g., a letter sound) to previously learned information (e.g., a letter name). Thus, learning letter names is a good thing. It is usually best to target letter names and sounds together, as research indicates that instruction pairing letter names with sounds results in superior letter–sound acquisition over teaching letter sounds only (Piasta et al., 2010; Piasta & Wagner, 2010).
Letter–sound correspondence represents perhaps the most important single skill that children must acquire in learning to read (Ehri, 1998). Letter sounds are the basis for the code on which written language is built. As will be discussed more below in word reading development, letter–sound knowledge is essential for learning to decode words, and subsequently, words are primarily recognized based on the letters within them (not whole-word shapes or forms).

THE INTERACTION BETWEEN PHONEMIC AWARENESS AND LETTER–SOUND ACQUISITION
Readers will notice in the keystone model in Figure 2.2 that there are circular arrows between phonemic awareness and alphabetic knowledge, indicating a reciprocal relationship (Byrne & Fielding-Barnsley, 1993; Burgess & Lonigan, 1998; Phillips & Torgesen, 2007; Wagner, 1988; Wagner et al., 1994). First, phonemic awareness helps facilitate the acquisition of letter sounds because the ability to hear and isolate individual sounds places a child in a better position to link that sound to a printed letter (Foulin, 2005; Treiman & Kessler, 2003). As Schatschneider and Torgesen (2004) described, a child who can hear four sounds in clap is in a better position to link each letter with a sound. Second, development of letter–sound knowledge (and later, learning to read words) helps promote finer-grained perception of individual sounds in speech and the ability to manipulate them (remember the orchestra analogy, and how learning to play an instrument enhances your ability to perceive music on a more sophisticated level), which represents increasing sophistication in phonemic awareness. Therefore, across the initial stages of reading development, phonemic awareness and emerging alphabetic knowledge interact to provide the essential raw material for learning to read and spell words, and doing so reciprocally promotes more sophisticated development in both areas as reading progresses (Burgess & Lonigan, 1998; Hulme et al., 2012; Wagner et al., 1994). Learning to read and spell words contributes to further refinement and enhancement of phonemic processing skills, as readers draw on their knowledge of word spellings when completing more advanced phonemic manipulation tasks, such as phoneme deletion (i.e., elision) and phoneme substitution (Byrne & Fielding-Barnsley, 1993; Castles et al., 2003; Hogan et al., 2005; Perfetti et al., 1987). Two phonemic awareness skills stand out as being particularly important for learning to read and spell: phoneme segmentation and phoneme blending (Beck & Beck, 2013; O’Connor, 2011). A child with some degree of phonemic awareness and knowledge of a few letter sounds is ready to learn to read (decode) and spell (encode) words. It is for these reasons that phonemic awareness and letter–sound knowledge are two keystone skills in the model, and thus key targets for assessment and intervention in early reading.
Learning to Read Words

The ability to read words develops through a connection-forming process in which pronunciations are bonded to units of letters, a process that relies on students' knowledge and skills in phonemic awareness and letter–sound correspondence (Ehri, 1998; Elbro & de Jong, 2017; Foorman et al., 1991; Seidenberg, 2017; Share, 1995, 2008). More simply, knowledge of letter–sound correspondence and phonemic awareness helps children open the door to reading by giving them the tools to decode words by "sounding out." For example, a beginning reader might look at the word at, use their knowledge of letter sounds to sound it out ("aaaaa . . . t"), and then use their skills in phoneme blending to
merge the sounds together to say "at." If asked to spell at, a beginning reader would rely on their skills in phoneme segmentation to isolate the sounds and pair each sound with a letter from memory. Over time, through exposure and feedback, children learn to connect letters to pronunciations in increasingly larger units so that sounding out is no longer necessary. For instance, after repeated encounters with at, and some form of feedback to know their pronunciation is correct, typically developing readers begin to read it as a whole unit. They have connected a set of letters with a pronunciation, which over time also allows them to read the word almost instantaneously and with little conscious effort. Through practice and instruction, this process is repeated numerous times across other spelling patterns. Children learn that putting some letters together can make a single sound (like "ch"), which allows them to sound out chat as "ch . . . at." After reading chat a few times, with feedback that they have read it correctly (either from a teacher or recognition that their pronunciation is correct), they can read the word as a whole unit. They will not even have to think about it—they will just know it. They now can read the word "by sight." They will also be in a better position to spell it from memory. This is sometimes referred to as orthographic mapping because the sequence of letters in a word has been mapped (i.e., "linked") to a pronunciation. Over time, the units of letters that students link to pronunciations grow increasingly larger, and the number of spelling patterns that students learn grows increasingly vast. These links in memory become stronger with reading experience, to the point that word reading becomes very fast and efficient.

The interaction between phonemic processing and letter–sound knowledge is the engine that drives the formation of connections between letter units and pronunciations. For many students (especially students with reading difficulties), this must be directly taught. Instruction that teaches students to connect sounds and pronunciations to letters and letter combinations, and use this information to read words, is phonics instruction. Students who can begin to segment and blend individual phonemes and know two to three letter sounds can begin to learn to read simple words. It is not necessary that students be flawless at segmenting or blending, nor is it necessary that they know a large number of letter sounds. Teaching only a few common letter sounds like "a," "m," "s," and "t" allows students to use that information to sound out and blend words like at, am, mat, sat, and Sam. Therefore, it is important to teach very common and useful letter sounds first, not in alphabetical order. Additional letter sounds are taught along the way and added to the types of words and spellings that students are taught to read.
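The progression from letter-by-letter decoding to whole-word recognition can be caricatured in a few lines of code. This is a toy sketch of the idea only: the letter–sound table, the word used, and the notion of "caching" a word after a successful decode are simplifications we introduce for illustration, not a model from the research literature.

# Toy sketch: decoding by letter-sound correspondence, with successfully
# decoded words linked to whole-word entries ("orthographic mapping",
# simplified) so that later encounters are recognized rather than sounded out.

LETTER_SOUNDS = {"a": "/a/", "m": "/m/", "s": "/s/", "t": "/t/", "ch": "/ch/"}
sight_words = {}          # spellings linked to pronunciations in "memory"

def read_word(word):
    if word in sight_words:                   # recognized as a whole unit
        return sight_words[word], "by sight"
    sounds = []
    i = 0
    while i < len(word):                      # prefer two-letter units like "ch"
        if word[i:i + 2] in LETTER_SOUNDS:
            sounds.append(LETTER_SOUNDS[word[i:i + 2]])
            i += 2
        else:
            sounds.append(LETTER_SOUNDS[word[i]])
            i += 1
    pronunciation = " ".join(sounds)
    sight_words[word] = pronunciation         # link the spelling to its pronunciation
    return pronunciation, "sounded out"

print(read_word("chat"))   # ('/ch/ /a/ /t/', 'sounded out')
print(read_word("chat"))   # ('/ch/ /a/ /t/', 'by sight')

The second call returns the word "by sight" only because the first, effortful decode stored the spelling–pronunciation link; that ordering is the point of the illustration.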
Theories of Word Reading Acquisition

There are several prominent and overlapping theories of word reading acquisition. These theories help explain how, across a few years of reading instruction, students go from reading no words to being able to read thousands of words almost instantaneously and with virtually no conscious effort. These theories of word reading acquisition are embedded in Figure 2.2.

EHRI'S PHASE THEORY
Linnea Ehri’s phase theory is a highly accessible and research-driven theory of how children move from being nonreaders to being highly skilled at reading words (Ehri, 1998, 2005). Supporting evidence is comprehensively reviewed in Ehri (2020b). She proposed
that, through exposure, instruction, and practice, readers move through four phases that describe their predominant approach to reading words. Beginning in the pre-alphabetic phase, children have not yet acquired letter sounds or an understanding of how to use them to read words. However, there may be occasions in which they appear to be reading, such as saying "stop" when they see a stop sign, or shouting out "McDonald's!" when they see the chain's logo. However, rather than reading the words, they are recognizing the salient visual features surrounding the word, such as the color, word shape, or features of a logo. For typically developing readers, this phase usually occurs in PreK (i.e., before formal reading instruction). Although they are not reading, these behaviors are still important because they reveal that children are building symbolic knowledge—they are recognizing that a word or logo represents something. This symbolic understanding will help facilitate learning that letters and letter combinations represent sounds.

According to Ehri's theory, children enter the partial alphabetic phase when they begin to learn letter–sound correspondences and use that information to sound out words. This phase usually occurs for students in early through mid-kindergarten, depending on the nature of reading instruction. No longer guessing at words based on holistic features (although many beginning and struggling readers may continue to do this), children in the partial alphabetic phase are learning that a word can be read by "decoding" its unique combination of letters. At this phase, students have usually not learned all letter sounds and they are likely not very fluent with them either. They take time to sound out words, may make errors in connecting letters to sounds, and inconsistently blend sounds back together to form a word. However, they are developing the most important and reliable strategy for reading unfamiliar words, which they will continue to use to their advantage in the coming years.

Children enter what Ehri refers to as the full alphabetic phase when they have mastered letter–sound correspondence and some short letter combinations, and they can use this information to more accurately decode unfamiliar words. This phase generally describes typically developing readers in late kindergarten into first grade. Most importantly, students in this phase are beginning to map (i.e., link) increasingly longer units of letters to pronunciations. This consolidation of individual letters and letter combinations into larger spelling patterns allows children to bypass sounding out words letter by letter. Eventually, whole-word spellings become linked to pronunciations.

Ehri's consolidated phase represents increasing unitization of familiar letter patterns, as the child establishes a very large store of spelling pronunciations in memory (across first through third grades). This will involve linkages for whole-word spellings that increase in length over time, as well as letter units found in longer words (e.g., syllables, affixes, and roots). The consolidation of individual letters into larger spelling patterns allows printed words to be recognized almost instantaneously and very efficiently, as if "by sight" (the multiple meanings of the term sight word will be addressed later on). This phase may generally span first through third grades for typically developing readers. Ehri's phase theory helps illustrate how readers develop the ability to read whole words with automaticity.
That ability was made possible through a long process in which they first learned to link printed letters to sounds, learned how to use that information to sound out words, and through instruction and practice, gradually bound pronunciations to increasingly larger letter patterns. The process was initially made possible through the interaction of phonemic awareness and letter–sound correspondence.
SHARE'S SELF-TEACHING HYPOTHESIS
David Share's self-teaching hypothesis (Share, 1995, 2008) complements Ehri's phase theory. Although explicit phonics instruction is critical for most readers to learn to read words (Ehri et al., 2001; Snow et al., 1998), it would be impossible for direct instruction to target each of the tens of thousands of words needed for skilled reading. Share's self-teaching hypothesis provides a basis for understanding how typically developing readers learn to read thousands of words without needing to be taught each one. His theory suggests that, following the development of foundational decoding skills and through supported opportunities to practice reading words and text, self-teaching processes allow a reader to use their decoding skills to read unknown words independently (Share, 1995, 2008). Repeated opportunities to correctly pair letter strings to pronunciations allow the learner to rapidly build an orthographic memory store so that words can be read quickly and with little conscious effort. Essentially, the self-teaching hypothesis reflects a form of skill generalization; students apply previously learned letter–sound combinations to reading other similarly spelled words. Although it refers to "self-teaching," the theory does not imply that students should be left to discover how to read by themselves. Direct, explicit instruction in letter–sound correspondence and phonemic awareness, and in how to use those skills to crack the code, is necessary to activate the process. Furthermore, self-teaching is made possible by lots of opportunities to read words, feedback when attempts are correct, and correction when errors are made.

The self-teaching hypothesis underscores the importance of exposure and practice for building a large memory store of spelling–pronunciation linkages. It also offers an explanation for significant word reading difficulties. Some students, due to insufficient instruction, inadequate opportunities to read, inadequate basic skills in phonemic awareness or alphabetic knowledge, or an underlying disability, are missing the requisite skills needed to initiate the proliferating self-teaching process that their typically developing peers benefit from. Furthermore, recalling our discussion of test and curriculum overlap, the self-teaching hypothesis (as well as Ehri's phase theory and connectionist models discussed below) underscores why it is not necessary for reading tests to have a direct correspondence between words tested and those targeted in instruction. Skill generalization to specific words not explicitly taught is a hallmark of skilled readers, and deficits in this area are characteristic of students with word reading difficulties.

CONNECTIONIST MODELS OF READING ACQUISITION
CONNECTIONIST MODELS OF READING ACQUISITION

Connectionist frameworks overlap with Ehri’s and Share’s theories, but they provide a more detailed illustration of how connections are made between increasingly larger letter patterns and pronunciations. Connectionist frameworks (Foorman, 1994; Harm & Seidenberg, 2004; Seidenberg & McClelland, 1989; Seidenberg, 2017) hypothesize that learning to read a vast number of words with efficiency is made possible through the development of a distributed network of connections among words. The network is formed through connections between overlapping phonological (pronunciation), orthographic (word spelling), and semantic (meaning) layers of information. The size and quality of the network are moderated by the strength of the connections (i.e., “weights”) among units within and across layers of the network. Reading a printed word correctly involves connecting orthographic input (the word’s spelling) to phonological information (a pronunciation), and connecting the pronunciation to semantic information (the word’s meaning). Thus, in a connectionist framework, words are hubs that connect phonological,
orthographic, and semantic representations, which share interrelations with representations of other words to form the network. Like the other theories, connectionist models recognize the importance of explicit instruction in phonics to kickstart the connection-forming process (Foorman, 1994; Seidenberg, 2017); however, direct instruction cannot teach every printed word a child will encounter. Experience is essential to the development of the network—a developing reader must have many opportunities to read words and receive performance feedback. Words overlap in their spellings, pronunciations, and meanings. This overlapping information is used to learn pronunciations that apply to certain combinations of letters but not others. Hence, learning to read words involves perceiving the statistical regularities of the writing system, a notion that has led to the application of statistical learning to how children learn to read. Statistical learning is a set of implicit learning processes driven by the recognition of patterns and probabilistic sequences within and across stimuli. The brain is very sensitive to stimuli that occur together; it is one of the ways we learn. Statistical learning offers explanations for how children can acquire complex skills, such as language, with no formal instruction. Statistical learning is increasingly the subject of reading studies with children and adolescents, as scholars have suggested that perceiving underlying statistical regularities of spelling patterns in words may explain the ability to read tens of thousands of words without instruction on each one (Treiman et al., 1995). Studies indicate that in a semi-transparent orthography like English, readers are highly sensitive to the frequency of letter patterns and their position within words (Chetail, 2017), and even older preschoolers are sensitive to recurring letter combinations (Mano & Kloos, 2018).

Inherent in connectionist models and perspectives on statistical learning is the idea that students begin to recognize similarity in word spellings but variability in pronunciations. Pronunciation can be influenced by letter position in words (e.g., “gh” in ghost vs. laugh). Some letters or letter combinations are more likely to make a certain sound when paired with another letter; for example, “ea” makes a different sound when paired with “r” (as in earth and search) than when “r” is not present (as in neat and bead). Extensive exposure to words may engage statistical learning mechanisms that are sensitive to surrounding letters, position within words, and other inductive, probabilistic cues to link spellings with pronunciations (Apfelbaum et al., 2013; Chen & Savage, 2014; Compton et al., 2014; Deacon & Leung, 2013; Steacy et al., 2019; Treiman, 2018). Thus, statistical learning underlies the inductive processes at the heart of the self-teaching hypothesis. However, it is important to remember that explicit phonics instruction is usually necessary to start the connection-forming process for most children (Foorman, 1994; Seidenberg, 2017), and this explicit instruction is critical for students who experience risk factors for reading difficulties.
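As a rough illustration of the kind of statistical regularity at issue, the sketch below tallies how often letter pairs occur and where they fall within words. The word list is a toy sample chosen to echo the examples above; real studies use large corpora.

```python
# Tally letter-pair (bigram) frequencies and where they occur within words.
# The word list is a toy sample; real studies use large corpora.
from collections import Counter

words = ["earth", "search", "neat", "bead", "ghost", "laugh", "catch", "fish"]

bigram_counts = Counter()
position_counts = Counter()   # keyed by (bigram, position in word)

for word in words:
    for i in range(len(word) - 1):
        bigram = word[i:i + 2]
        bigram_counts[bigram] += 1
        if i == 0:
            position = "initial"
        elif i == len(word) - 2:
            position = "final"
        else:
            position = "medial"
        position_counts[(bigram, position)] += 1

print(bigram_counts.most_common(3))
print(position_counts[("gh", "initial")], position_counts[("gh", "final")])   # 1 1
```

Run over a realistic corpus, counts like these show, for example, that “gh” behaves differently at the beginnings and ends of words, which is exactly the kind of positional sensitivity the studies cited above describe.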
What about Learning to Read Sight Words? Children learning to read will encounter phonetically regular words, like up, cat, and stack, in which all letters represent their most common sounds. Sometimes these types of words are referred to as decodable. In written English, students will also encounter phonetically irregular words (i.e., so-called “exception” words), in which some or all letters do not represent their most common sounds, or their spelling patterns violate traditional phonics rules (e.g., have vs. gave, bear vs. bead). Historically, these irregular words have been referred to as sight words, which assumes that they are read differently than phonetically regular words. There are several problems with this perspective.
The first problem with the sight word perspective is that it is inconsistent with contemporary understandings of reading acquisition. Models such as Ehri’s phase theory, Share’s self-teaching hypothesis, and connectionist frameworks argue that the ability to read all words relies on a common foundation (Harm & Seidenberg, 1999; Seidenberg, 2005, 2017), whereby word-specific knowledge is facilitated by phonemic awareness and alphabetic knowledge. This results in generalizable skills in reading words regardless of phonemic regularity (Aaron et al., 1999; Ehri, 1995, 2014; Elbro & de Jong, 2017; Share, 2008). Evidence indicates that reading phonetically regular and reading irregular words are not independent skills (Aaron et al., 1999), and letter–sound acquisition facilitates reading both types of words (Clemens, Lee, et al., 2020; Foorman et al., 1991). A key aspect of this perspective, as Ehri points out, is that few words are completely irregular. When considering regularity to include single letters and letter combinations (e.g., ch, sh, ck, and so on), the vast majority of irregular words have at least one letter or word part that matches its most common pronunciation, and in many cases, most of the word is phonetically regular. Very often, a word becomes irregular due to a single letter or letter combination that does not conform to its most common sound. This is the reason that connectionist frameworks view words on a continuum of regularity, as opposed to the idea that some words are “regular” and others “irregular” (Seidenberg, 2017). For instance, words like cup, fish, and catch are entirely regular; and words like find, both, and begin are mostly regular. In fact, most words in English fall within the entirely to mostly regular range. Only a minority of words are mostly irregular, like aisle. Therefore, using letter sounds to sound out words will work for a lot of words and will get students close enough on a lot of others to read them correctly. The second problem with the sight word perspective is that it creates a misguided perception that learning to read phonetically irregular words requires a different set of teaching methods than learning to read regular words. In practice, the faulty notion of sight words can lead to “look-say” practices in which students are taught to memorize and guess at words based on how they “look,” as opposed to having students attack words based on the letters within them. Colenbrander et al. (2022) found that kindergarten students learned to read irregular words better when teachers prompted students to spell them or attempt to decode the words and correct their mispronunciations, compared to a “look-say” approach to instruction. According to the theories discussed here, reading words relies primarily on phonemic awareness and alphabetic knowledge. As Perfetti (1985) noted, “The identification of words is mediated by the perception of letters” (p. 46). Words are not recognized as whole-word shapes or word forms; readers process words based on their unique pattern of letters—orthographic strings that have been added to memory—which is made possible by connecting letters and letter patterns to pronunciations. Ehri’s phase theory, Share’s self-teaching hypothesis, and connectionist models of reading apply to learning to read all words, not a specific type. This is not to say that irregular words do not require something extra. 
For beginning readers, encountering phonetically irregular words requires an additional step in which the reader learns to adjust their pronunciations. They learn to flexibly apply letter–sound knowledge and phonics rules to alter a pronunciation to match a word in their vocabulary. For example, a reader learns to adjust their pronunciation of the word captain from “cap-tane” to “cap-tin.” Recent perspectives consider two-step processes in word reading development: one step in which letters are recoded as sounds, and a second step in which the word is recognized, which in some cases involves recognition of slightly different spelling-specific pronunciations (Elbro & De Jong, 2017). This ability to adjust partial or inaccurate pronunciations has also been referred to as set for variability (Savage et al., 2018; Steacy et al., 2019; Tunmer & Chapman, 2012) as the reader must possess some
level of flexibility in their knowledge of letter–sound correspondences to read words accurately in a semi-regular writing system, such as English. Having the word in one’s oral vocabulary significantly benefits this process by allowing the reader to match their adjusted pronunciation to a word in their vocabulary (Elbro & de Jong, 2017), and this is described in more detail in the vocabulary section below. This is the reason that there is an arrow from language comprehension to word reading in the keystone model (see Figure 2.2). A critical point is that the process to connect spellings to pronunciations is made possible by acquiring letter–sound associations, learned to an extent that they are recognized immediately and with little conscious effort (Elbro & de Jong, 2017). In summary, it is time to dispense with the notion of sight words. Treating irregular words as being very different from regular words, or teaching students to read them a different way, is inconsistent with contemporary theories and evidence on word reading acquisition. Educators and school psychologists should promote the understanding that learning to read all types of words is made possible by a common set of underlying skills, and that sounding out offers the best go-to strategy for reading any word. Instructional approaches are described in detail in Chapter 6.
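The pronunciation-adjustment step can also be sketched in code (our illustration, not a model from the cited studies); here, a string-similarity match stands in for the reader comparing a decoding attempt against words in their oral vocabulary.

```python
# Sketch of "set for variability": an initial decoding attempt is matched
# against words in the reader's oral vocabulary. difflib's similarity score
# stands in for the reader's flexible matching; the vocabulary list and the
# decoded attempt are illustrative only.
import difflib

oral_vocabulary = ["captain", "capital", "curtain", "mountain"]

def adjust_pronunciation(decoded_attempt):
    """Return the closest known word to a partially correct decoding, if any."""
    matches = difflib.get_close_matches(decoded_attempt, oral_vocabulary, n=1, cutoff=0.6)
    return matches[0] if matches else decoded_attempt

print(adjust_pronunciation("captane"))   # -> "captain"
```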
Word Reading Efficiency and the Development of Reading Fluency

Reading fluency is generally viewed as the ability to read connected text accurately and smoothly, with ease, proper inflection (i.e., prosody), and attention to punctuation. The measurement of oral reading rate (i.e., oral reading fluency) is a widely used index of reading proficiency, owing to the strong and consistent correlations between the number of words read per minute and measures of reading comprehension observed across numerous studies (e.g., Reschly et al., 2009). Indeed, reading fluency is a valuable target for assessment because of how well it serves as an indicator of overall reading skills. However, the apparent simplicity of measuring reading fluency—at its most basic level, it merely involves measuring the number of words students read correctly from connected text—frequently obscures the myriad skills it represents, and why it is such a powerful indicator of reading. In fact, reading fluency is so complex that it is not appropriate to refer to it as a discrete skill, but rather as a coordination of multiple skills (Hudson et al., 2008). In this section, we discuss what makes up reading fluency and why measuring it serves as such a powerful index of reading.

WORD READING EFFICIENCY: THE PRIMARY DRIVER OF READING FLUENCY
Fluent reading of connected text is largely driven by proficiency in reading individual words (Altani et al., 2020; Jenkins et al., 2003; Levy et al., 1997). Obviously, reading words accurately is important, but even more important is that words are read efficiently, which means that words are read with ease. We emphasize a focus on efficiency rather than fluency as the key skill that underlies reading proficiency and what facilitates reading comprehension. Perfetti (2007) noted, “Efficiency is not the same as speed. Efficiency is a ratio of outcome to effort, with time as a proxy for effort” (p. 356). Efficient word recognition is such that words are recognized with little conscious effort. Although the shift in terminology from fluency to efficiency may seem subtle or even superfluous, it has important implications for how reading is taught and how interventions for students with reading difficulties are built. As discussed in Chapter 4, prevailing views of fluent reading as an intervention target are often conflated with how fluency is commonly measured, which can have problematic outcomes. An emphasis on trying to overtly increase the number of words a student reads per minute shifts the focus from
“efficiency” to “speed”—precisely what Perfetti (2007) argues is a misguided view. Complex activities compete for limited cognitive resources. The goal in successful reading instruction is not that text is read quickly; it is that words are recognized and text is understood with little effort. Increased reading rate is a byproduct of this ease in reading text.

Reading Efficiency and Learning to Drive. The way that reading efficiency facilitates reading comprehension is like learning to drive a car. Think back to when you first learned to drive, and the first time a parent or relative took you out to drive. Think about those initial experiences behind the wheel and what was running through your head: Your hands firmly gripped the wheel at 10 and 2 o’clock, you had to consider how to move your feet to push the accelerator and how hard to press, and you had to think about where to find the brake pedal and how much pressure to apply. If the car had a stick shift, it added additional layers of complexity because now you had to think about how to operate the clutch and accelerator in reciprocal fashion while simultaneously using the gear shift. Your first forays out of the empty parking lot and onto a road brought a host of new perils; your concentration, already overloaded with having to operate the vehicle, was also tasked with keeping the car between the road lines and away from oncoming cars, monitoring your speed, and navigating turns. Amid this immense cognitive load, you were probably unable to do anything apart from the actions needed to keep from crashing. You were likely not thinking about your day or what to have for dinner. You were probably not listening to music or a podcast. You were likely not talking with whoever was with you; in fact, by this point, you had probably told them to be quiet. You had no cognitive space left to contemplate anything other than driving.

But as you practiced your driving more, things changed. Each time you went out for a drive, it got easier. You may not have realized it, but eventually you stopped needing to actively think about how to hold the steering wheel, where the brake pedal was located, or how hard to press the accelerator. You just did it. You became more used to driving on the road, so keeping the car between the lines came more naturally. You were no longer fazed by approaching cars or an upcoming curve. Gradually, the actions of driving the car became more like reflexes, because you no longer had to actively think about them. At the same time, you started to find it easier to contemplate things other than driving, such as what you looked forward to doing that weekend. Over time you also found it easier to do other things while driving, such as listening to music, maintaining a conversation, or eating. In short, the actions involved in driving had become efficient and automatic; the cognitive resources previously needed to operate the car could now be devoted to other activities.

Similar things happen with reading comprehension. A beginning reader, or a student with word reading difficulties, is much like you when you were learning to drive a car. Reading words is slow and laborious, and requires considerable effort and attention for each word. Errors are frequent. Nearly all of the reader’s cognitive resources are devoted to reading words, with little left over to comprehend the text. The laborious pace and frequent errors further impede comprehension. Reading is exhausting.
But through instruction and practice, word reading skills improve. We do not encourage a beginning driver to drive faster, and in the same way, we do not try to get students to read “faster.” In both situations, we use instruction and practice to build skills, familiarity, and confidence. Repeated exposures to words with feedback allow letter sequences and word spellings to become linked to pronunciations, where they can be retrieved easily and very quickly. Word reading becomes more efficient—students no longer have to sound out or think about a word, they just know it. Reading words requires less effort. It
is the same effect that you, a skilled reader, experience when you see a word—you cannot avoid reading it (unless you squint your eyes or do something else to intentionally avoid reading it). Just as the actions of driving became automatic for you, this increasing efficiency in word reading means that the student no longer has to devote attention and significant cognitive resources to reading words, which in turn means that more cognitive space is left over to do other things. Students can now more easily construct and maintain a mental image (a so-called “situation model”) from the text, make connections to their experiences or other parts of the passage, and make inferences. The more that word-level reading processes become easy and effortless, the greater the opportunity the reader has to understand and learn from the text.

Let’s further extend the car-driving analogy to what happens when we encounter difficult text. Consider the times when you, a skilled driver, experienced adverse driving conditions. Maybe it was blinding rain, heavy traffic, or a snowstorm. As soon as you add adversity to any situation, your cognitive energies and attention must refocus on the mechanics of driving. It requires more effort. Your driving slows down. You strive for concentration and seek to minimize distractions by turning down the radio or asking fellow car riders to remain quiet. Your thoughts are no longer on what’s happening at work or what you want to do that evening; you are focused exclusively on preventing a crash. Similar things happen when readers (of any skill level) encounter challenging text, such as a text with a lot of words that are difficult to pronounce, words or phrases that are unfamiliar in meaning, or text written in a style that is difficult to comprehend. Like a driver on a snowy road, your reading slows down. You have to concentrate on individual words and phrases. Your comprehension becomes strained or breaks down because you have to focus more energy and attention on reading the text. Then, like the snowstorm clearing, shifting back to easier text allows the efficiency to return and, with it, your comprehension.

Another way to understand the relation of reading fluency to reading comprehension is that it makes comprehension possible by not letting word reading be the limiting factor. Word reading efficiency removes a barrier to comprehension by freeing cognitive resources that allow it to take place (Perfetti, 1985). According to Perfetti’s Lexical Quality Hypothesis (Perfetti, 2007), this efficiency is the result of immediately connecting a printed word with a pronunciation and the word’s meaning. This efficiency allows the reader to apply the same processes to text as they do to the spoken equivalent (Gough & Tunmer, 1986). It also explains why increases in reading rate (i.e., oral reading fluency) are associated with improvement in reading comprehension, but only to a point. Reading fluency benefits comprehension to the extent that word reading does not impede it. Once reading text is fast enough to support comprehension, further gains in reading rate are not associated with additional improvement in reading comprehension. Faster reading is not necessarily better reading, in the same way that faster driving is not necessarily better driving.

READING COMPREHENSION INFLUENCES READING FLUENCY
Readers will notice a set of reciprocal arrows between reading fluency and reading comprehension in Figure 2.2. We have covered how reading fluency facilitates reading comprehension, but does reading comprehension influence reading fluency? Yes. Although it plays less of a role compared to the influence that word reading efficiency has, reading comprehension also has an influence on text reading fluency (Eason et al., 2013; Jenkins et al., 2003). This aspect is often overlooked by practitioners and researchers who tend to view reading fluency as a discrete skill and are focused on the “words per minute”
scoring metric. Reading rate is usually higher when individuals read text compared to when they read isolated words or words in lists, suggesting that semantic processing may benefit reading efficiency (Eason et al., 2013; Jenkins et al., 2003). Accurate decoding of a word and the retrieval of its meaning allows for context facilitation (Seidenberg, 2017), whereby the reader establishes a schema or expectation set in which upcoming words are more easily processed and the goodness of fit of word pronunciations is tested based on the context (Chace et al., 2005; Jenkins et al., 2003). As a result, text reading is more accurate and efficient. Additionally, consider wut hapenzz to yr reedding wen u en-counter chalenjing or unfamillyer txt. Did you notice what happened as you read the last sentence? You slowed down to navigate words that were unfamiliar. Reading rate slows down when readers encounter text that is challenging, words that are difficult to pronounce, or when their comprehension falters (Wallot et al., 2014). In short, comprehension affects reading rate: When comprehension is present, it can help make reading more efficient, and when comprehension is absent or breaks down, reading rate does as well. As will be discussed in Chapter 4, this relation is seldom considered when reading fluency is viewed as a discrete skill, or when interventions are aimed solely at increasing reading speed.
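Because the “words per minute” metric recurs throughout this discussion and in Chapter 4, it may help to see how the score is typically computed. A minimal sketch, with made-up numbers:

```python
# Oral reading fluency is typically scored as words read correctly per minute
# (WCPM): words attempted minus errors, prorated to a one-minute rate.
# The numbers in the example are made up for illustration.
def words_correct_per_minute(words_read, errors, seconds):
    words_correct = words_read - errors
    return words_correct * 60.0 / seconds

# A student reads 63 words in 60 seconds and makes 5 errors -> 58.0 WCPM.
print(words_correct_per_minute(words_read=63, errors=5, seconds=60))
```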
The Essence of Reading: Reading Comprehension

By now, the skills and knowledge sources that make up reading comprehension should be coming into focus. As with reading fluency, contemporary perspectives view reading comprehension less as a single “skill” and more as a successful orchestration of skills. Word reading efficiency goes a long way in making comprehension possible, but it is far from the only thing. Reading comprehension is complex and is influenced by multiple variables, which can be categorized into three domains: student, text, and task.

Student-level variables that influence reading comprehension include a student’s skills in word reading efficiency, linguistic comprehension and reasoning (e.g., vocabulary knowledge, inference making), background knowledge and experiences, attention, motivation, and working memory (Ahmed et al., 2016; Cain & Oakhill, 2009; Cromley & Azevedo, 2007; Cutting et al., 2009; Gough & Tunmer, 1986; Kendeou et al., 2014; Peng et al., 2019; Perfetti, 1985; Perfetti & Stafura, 2014; Tighe & Schatschneider, 2014; Verhoeven & Van Leeuwe, 2008).

Text-level variables that affect comprehension include features of the passage. Students’ comprehension tends to be stronger in narrative texts (i.e., stories, fiction) compared to expository and informational texts (Best et al., 2008), most likely because narrative texts contain more familiar vocabulary and situations, and because students are more accustomed to the structure of a “story.” An author’s writing style also influences comprehension, as anyone who has read text from the 18th century will attest. Greater familiarity with different text structures (e.g., compare and contrast, sequence, problem and solution) can also influence comprehension (Hebert et al., 2016; Kendeou & van den Broek, 2007).

At the task level, reading comprehension is influenced by the task demands or the purpose for reading. These variables influence motivation and attention to aspects of a passage. For instance, comprehension may be different when reading for pleasure, reading to study for an exam, or reading when taking a test (Snow, 2002).

Therefore, reading comprehension is the product of students’ skills in reading text and comprehending language, but it is influenced by the text and the situation in which reading takes place. The relative importance of these domains can vary depending on the task and circumstances in which reading occurs. Although text- and situation-level
factors influence reading comprehension, student-level skills and knowledge are arguably the most important because stronger skills and knowledge allow a reader to better navigate various types of texts and reading demands. Therefore, we focus more on student-level factors in the assessment and intervention of reading comprehension.

STUDENT-LEVEL KNOWLEDGE AND SKILLS (BEYOND DECODING) THAT MAKE UP READING COMPREHENSION
Having established the importance of word reading skills for reading comprehension, we now turn to the linguistic comprehension skills and knowledge sources that form the second critical arm of the simple view of reading. In considering reading difficulties, we focus on two primary interrelated areas: linguistic comprehension (and specifically, vocabulary knowledge) and background knowledge. The Central Role of Vocabulary Knowledge in Linguistic Comprehension. Although linguistic comprehension can refer to a large number of language processes, for practical purposes it can be simplified to what is arguably its most important: familiarity with vocabulary and syntax—understanding the meanings of words and phrases and how they are used. Vocabulary knowledge can be viewed as the essence of language, because without it, speech cannot be understood. Part of becoming fluent in another language is learning what words and phrases mean. Remove vocabulary knowledge, and it is just gibberish. The same is true for reading. Vocabulary knowledge is essential for reading comprehension in the same way it is essential for spoken language. Text that is accurately decoded without knowledge of what the words mean might as well be made up of pseudowords. Vocabulary knowledge certainly involves knowing how to define words, but more importantly, it involves understanding how words are used and what they mean in a given context. For example, if you, a skilled reader, were asked to define the word system, it would probably take you some time to think of the exact words to define it. However, you will have no trouble immediately comprehending the following sentence: “Blood flows through the circulatory system.” Although articulating a precise definition of the word is challenging, you can probably conjure a mental image of it and understand what it means in context. Other words are even more difficult to define, such as conjunctions or connectives like however or furthermore, but you understand what they signify and how they are used to convey meaning when you encounter them in text. With this knowledge you are able to see that “However, he forgot to buy milk” has a different meaning than “Furthermore, he forgot to buy milk.” This is all a part of vocabulary and language knowledge. Knowledge of word meanings helps you construct a mental image from the text and make inferences. For instance, the mental image you construct from “Sara walked confidently onto the stage” is different from your image based on “Sara walked hesitantly onto the stage” because of your knowledge of what confidently and hesitantly mean. In addition to providing the basis for the mental image we create when reading, vocabulary knowledge is also critical for making inferences while reading. Each of the previous sentences about Sara prompts inferences about why she was walking onto the stage in the way the text described. Vocabulary knowledge promotes a deeper understanding of text beyond a literal level and makes more connections possible to other ideas and prior knowledge. For example, when you read the sentence “Jan crept across the carpet,” you might infer that Jan does not want to be noticed because you know what the verb crept means. Without a good knowledge of crept, you may be able to form a basic sense of Jan
walking across the room, but knowledge of crept provides you with a richer understanding of what she was doing and allows you to better connect the reasons why she was behaving that way in the surrounding text. Thus, vocabulary knowledge is fundamental to reading comprehension (Joshi, 2005), and inadequate vocabulary knowledge significantly impairs reading comprehension or prevents it altogether. Studies have estimated that even if as few as 2 to 5% of the words in a passage are unknown in terms of their meaning, reading comprehension can be significantly disrupted (Carver, 1994; Hsueh-Chao & Nation, 2000; Schmitt et al., 2011). The role of vocabulary knowledge in reading comprehension can also help explain the difficulties that students learning another language have in reading; often, students learning English can learn the phonics “rules” for decoding words, but their developing knowledge of vocabulary remains a primary limiting factor of comprehension. Considering depth and breadth of vocabulary knowledge is particularly important when working with emergent bilingual students and students from historically marginalized communities, who may have had fewer opportunities to develop expansive knowledge of academic vocabulary and dialect common to academic texts (i.e., General American English). Background Knowledge: Knowledge of the World. In addition to vocabulary knowledge, another critical foundational aspect to reading comprehension is background knowledge. While vocabulary refers to “word” knowledge, background refers to “world” knowledge. It can be challenging to isolate background knowledge as being distinct from vocabulary knowledge, because having background knowledge on a topic often relies on understanding the terminology and concepts involved in that topic. Knowledge is often acquired through language (oral or written). Our knowledge is stored, organized, and communicated with language. Indeed, studies of reading comprehension have found that it is difficult to statistically model background knowledge as independent from vocabulary knowledge (Ahmed et al., 2016). Nevertheless, background knowledge can represent a distinct knowledge source, such as knowledge of the chronology of a set of events in history, familiarity with a place or region, or personal experience with a situation. This knowledge benefits reading comprehension. For example, knowledge of how some animals depend on each other in a food chain can support a student’s comprehension of an article on how organisms interact and depend on each other in a coral reef ecosystem. Background knowledge can also involve one’s personal experiences. For instance, a student who has experienced snow and cold winters may be in a better position to understand a text about the challenges of an arctic expedition than a student who has never experienced a cold climate. Theoretical models acknowledge the importance of the interaction among multiple skills and knowledge sources that results in reading comprehension. For example, Kintsch’s (1988) construction-integration model posits that readers construct meaning by integrating their existing knowledge with information conveyed by the text. 
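The 2 to 5% estimates above are essentially statements about lexical coverage: the share of words in a passage whose meanings the reader knows. A minimal sketch of that calculation, with a toy passage and vocabulary that are purely illustrative:

```python
# Percentage of words in a passage whose meanings are not in the reader's
# vocabulary; the passage and the "known" set are toy examples.
def unknown_word_percentage(passage_words, known_words):
    unknown = [w for w in passage_words if w.lower() not in known_words]
    return 100.0 * len(unknown) / len(passage_words)

passage = "blood flows through the circulatory system".split()
known = {"blood", "flows", "through", "the", "system"}   # "circulatory" unknown

print(round(unknown_word_percentage(passage, known), 1))   # -> 16.7
```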
A useful model of reading comprehension is Perfetti’s reading systems framework (Perfetti & Stafura, 2014), which identifies three knowledge sources as the bedrock of reading comprehension: orthographic knowledge (i.e., word reading), linguistic knowledge (i.e., linguistic comprehension, especially vocabulary), and general knowledge (i.e., world and background knowledge). When reading, these knowledge sources interact and compete for limited cognitive resources, underscoring the importance of automaticity with key components (such as word reading), which makes processing of the other aspects more efficient. Limitations in any one of these knowledge sources can help explain the reasons for a student’s reading difficulties.
Summary: Selecting Targets for Assessment and Intervention in Reading

Understanding the keystone skills that interact in reading development, and that, when absent, explain how reading breaks down, provides insight into what skills to assess and target in intervention. We expand on the specific measures that can be used to assess relevant skills in reading in Chapter 4. For now, consider the keystone skills, their developmental sequence, and their interactions (as illustrated in Figure 2.2) as guideposts for assessment. The critical role of text reading efficiency in reading indicates that it is an ideal place to start an assessment. Assess oral reading and, if reading accuracy or efficiency is problematic, work backward through the primary skills that underlie text reading, and continue backward as needed to determine the source of the problem. For a student referred for difficulties in basic word reading acquisition, word reading would be an obvious target, but attention should also be paid to the skills that make word reading possible (i.e., letter–sound knowledge and phonemic processing). For a student referred for difficulties with reading comprehension, first determine if the student has problems with word and text reading accuracy and fluency and, if so, continue working backward. On the other hand, if word and text reading accuracy and fluency appear adequate, consider deficits in vocabulary knowledge or other related language and background knowledge areas as possible reasons for their reading comprehension difficulties.
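This backward-working logic can be expressed schematically. The sketch below is our illustration only; the flags are hypothetical judgments an assessor would make from the measures described in Chapter 4, not prescribed instruments or cutoff scores.

```python
# Schematic sketch of the backward-working assessment logic described above.
# The boolean flags are hypothetical judgments, not prescribed instruments
# or cutoffs; the point is the order in which skills are examined.
def next_assessment_target(oral_reading_adequate, word_reading_adequate,
                           letter_sound_phonemic_adequate, vocabulary_language_adequate):
    if not oral_reading_adequate:
        if not word_reading_adequate:
            if not letter_sound_phonemic_adequate:
                return "letter-sound knowledge and phonemic awareness"
            return "word reading (decoding) accuracy and efficiency"
        return "text reading fluency (accuracy and efficiency with connected text)"
    if not vocabulary_language_adequate:
        return "vocabulary, language comprehension, and background knowledge"
    return "reading comprehension strategies, text, and task factors"

# Example: oral reading, word reading, and letter sounds all appear weak.
print(next_assessment_target(False, False, False, True))
# -> letter-sound knowledge and phonemic awareness
```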
Mathematics

The development of mathematics proficiency is a cumulative process, in which new skills are built on previously learned skills and conceptual understanding. One aspect that distinguishes mathematics development from reading is that, for typically developing readers, there is a point at which continued instruction in how to read is no longer needed, a time when the student is no longer “learning to read,” but is now able to “read to learn.” In contrast, there are many types of mathematics that build on each other, and ideas grow increasingly abstract as skills become more sophisticated. Therefore, instruction in mathematics is needed for longer periods of time compared to reading (National Research Council [NRC] & Mathematics Learning Study Committee, 2001).
All Roads Lead to the Algebra Gateway

A roadmap toward mathematics proficiency is paved by keystone skills and runs through success in algebra. Why algebra? The National Mathematics Advisory Panel (NMAP; 2008), after reviewing research on students’ mathematics performance, longitudinal outcomes, and instruction, identified algebra as a critical gateway to more sophisticated forms of mathematics in secondary school and other positive outcomes. For example, NMAP reported that completion of Algebra II is predictive of college success and higher salaries. While reading skills are necessary for school survival, mathematics proficiency offers significant opportunities for postsecondary education and high-paying careers in science, technology, and medicine. The beneficial effects are even greater for Black and Hispanic students. Simply put, algebra = opportunity.

Algebra I is typically taught in eighth grade. Given its importance for more sophisticated mathematics and, in turn, college and career opportunities, NMAP (2008) concluded that all mathematics instruction, from PreK to grade 8, should focus on establishing
a foundation of conceptual understanding, computational fluency, and problem-solving skills for success in algebra in eighth grade.
The Keystone Model of Mathematics

A keystone model of mathematics is displayed in Figure 2.3. It models algebra as both a goal and an access point to higher-level mathematics. Working backward, the model helps identify the keystone skills and concepts that are foundational to algebra success. Readers will notice that the model is not as linear as the model for reading; mathematics involves more skills that interrelate and build on each other, and although the skills are depicted in the model in their own boxes, they should not be viewed as discrete or isolated from each other. Contemporary perspectives stress the interrelation among mathematics skills and the conceptual understanding that binds them together. The model is meant as a heuristic, not a complete illustration of all facets of mathematics development or proficiency. However, it should provide a guide for practitioners to understand the critical skills on the way to algebra, and the places where mathematics skills can break down (thereby providing targets for assessment and intervention). As such, the model offers a roadmap for discussing the development of mathematics proficiency, and a framework for assessment and intervention. Before discussing the model in detail, we need to consider some aspects that underlie mathematics proficiency.
The Dual Importance of Conceptual Understanding and Procedural Fluency

Reports by the NRC and Mathematics Learning Study Committee (2001) and NMAP (2008), as well as other scholars (e.g., Ketterlin-Geller & Chard, 2011), have laid to rest some long-standing debates about whether mathematics instruction should emphasize conceptual understanding (i.e., a grasp of the ideas behind math operations, what they represent, and why they are used) or procedural fluency (i.e., knowledge of how to complete math operations and procedures). The consensus answer is both: Instruction should explicitly teach students how to complete problems and provide extensive practice opportunities to build automaticity, but it should also help students understand the what and why behind the procedures. As will be argued across this section, and in the interventions described in Chapter 6, procedural fluency and conceptual understanding can be integrated, and there is little rationale for creating a false dichotomy between the two.
FIGURE 2.3. The keystone model of mathematics. [The figure depicts boxes for early numerical competencies (counting, numeral identification, quantity, place value); number combinations fluency (“math facts”), both accurate and efficient; procedural computation; word-problem solving and algebraic reasoning; rational numbers, especially fractions; and geometry and measurement, along with language (vocabulary and linguistic reasoning skills), leading toward algebra and advanced mathematics.]
Conceptual understanding helps mathematics make sense (NRC & Mathematics Learning Study Committee, 2001; Rakes et al., 2010). Beginning with an understanding of numbers (including counting, number comparison, and place value), it represents an integrated awareness of the ideas that underlie mathematics operations and procedures, including when and why they are used, and how seemingly different operations are closely connected (e.g., addition and subtraction). Conceptual understanding makes it easier to learn new skills and integrate them with existing skills, and it allows students to adapt and streamline procedures and algorithms and to flexibly apply procedures across situations.

Procedural fluency represents the ability to complete mathematics operations with accuracy and efficiency. A critical foundational aspect of procedural fluency, and in fact of much subsequent mathematics, is the effortless and automatic recall of number combinations (i.e., “math facts”) in addition, subtraction, multiplication, and division. Analogous to the way that word reading efficiency facilitates reading comprehension by freeing costly cognitive resources, the automatic recall of number combinations reduces errors and allows students to allocate cognitive resources toward understanding, considering solutions, and completing more complex forms of calculation and problem solving more efficiently and accurately.

In short, conceptual understanding and procedural fluency are both important, and they reinforce each other. Learning procedures for solving problems is necessary, and developing fluency with those procedures makes mathematics more accurate and efficient. However, when procedures are learned without understanding, mathematics skills can become isolated islands without a connection to other skills, leaving the student with little concept of how they can be flexibly applied across situations (NRC & Mathematics Learning Study Committee, 2001). Learning the concepts helps the procedures make sense, and fluency with the procedures facilitates greater understanding of how they operate and flexibility in applying them.
The Keystone Skills That Pave the Road to Algebra

Now, we are ready to walk through the keystone model of mathematics (Figure 2.3).

THE ROLE OF LANGUAGE IN MATHEMATICS
Language skills and knowledge are involved in multiple aspects of mathematics development. Mathematics vocabulary is extensive, involving over 100 terms and phrases, many of which may have a specific meaning in a mathematical context compared to everyday use (Hughes et al., 2016; Powell et al., 2019). Students’ language proficiency and mathematics vocabulary are associated with early numerical competencies, basic arithmetic skills, computation, word-problem solving, and algebra (L. S. Fuchs et al., 2006; Lin et al., 2021; MacGregor & Price, 1999; Purpura & Logan, 2015; Spencer et al., 2020; Toll & Van Luit, 2014), and students with mathematics difficulties demonstrate significantly lower vocabulary knowledge (Forsyth & Powell, 2017). Less familiarity with the vocabulary and syntax used in relation to mathematics affects students’ ability to learn from instruction and to develop conceptual knowledge of mathematics skills and operations, because so many mathematical concepts are communicated and processed through language. Language skills are particularly salient for emergent bilingual students and children from backgrounds in which their exposure to rich vocabulary was limited. Thus, language comprehension, and mathematics vocabulary knowledge in particular, serve as targets for assessment and intervention for students with mathematics difficulties.
EARLY NUMERICAL COMPETENCIES
Several terms, such as number sense and early numerical competencies, have been used to refer to students’ early mathematics knowledge and skills. As used here, they correspond to a student’s emergent understanding of whole numbers (i.e., “complete” numbers; numbers that are not fractions and do not include decimals), which is perhaps the most important foundational concept in mathematics skill development. The term number sense has been applied in this way; however, it has also been used to describe an innate perception of quantity and nonsymbolic mathematical relations (i.e., a so-called “approximate number system”) that is present even among infants and likely has deep evolutionary roots (Dehaene, 2011), but that is not particularly useful for understanding mathematics development or difficulties (Fletcher et al., 2019). To avoid this confusion, we use the term early numerical competencies, which better encapsulates children’s knowledge and skills with whole numbers that are better targets for assessment and intervention (Fletcher et al., 2019; Namkung & L. S. Fuchs, 2012). Early numerical competencies include conceptual knowledge and skills in counting, number identification, number comparison (i.e., quantity), subitizing, and an initial understanding of place value.

Counting is one of the most important early numerical competencies. Most other forms of mathematics are built on counting, most notably computation skills. Children learning to count must learn several principles for counting to be useful:

1. One-to-one correspondence: Pairing an object or item to be counted with a word (i.e., number name), and understanding that after an item is counted, it is not counted again
2. Stable order: The understanding that number names must be repeated in the same order whenever a set of items is counted
3. Cardinality: Understanding that, when counting a set, the last number name spoken is the number of items in the set
4. Abstraction: The awareness that anything can be counted and that the principles listed above apply in all counting situations

An additional principle is order irrelevance, which pertains to the understanding that the things that are counted can be counted in any order, as long as principles 1 to 3 are followed. Although order does not matter in principle, modeling counting in a left-to-right sequence can help prevent confusion among young children (Powell & L. S. Fuchs, 2012). As will be seen shortly, this convention is used in counting scaffolds such as ten frames and number lines.

Preschoolers learning to count will often violate one or more of the counting principles; they may begin counting a set of objects and skip some, continue counting even though all items were counted, or count a set and not recognize that the last number counted is the number in the set. For these reasons, ten frames (see Chapter 6) are highly useful scaffolds to support students’ counting. Students place counters, pennies, or other objects in each space of the frame, always working from left to right beginning with the top row. The frame helps organize students’ counting and reinforces one-to-one correspondence, stable order, and cardinality. Ten frames also help demonstrate the composite nature of numbers: a number can represent one unit, and a number can also represent a group of units. Ten frames further illustrate that numbers like 8 can be decomposed into two sets
of 4; four sets of 2; 5 and 3; 7 and 1; and so on—a concept that will be highly useful in learning addition, subtraction, multiplication, and division.

Several other early numerical competencies are important for subsequent mathematics development. Number identification (i.e., pairing a number name with a printed number) represents a key symbolic understanding and is important for facilitating mathematics learning with numerals (as opposed to counters or other objects). Building on number identification, students also learn to compare numbers in terms of their quantity, for example, recognizing that 5 is greater than 4. To support skills in number identification and quantity discrimination, and to further foster counting skills and conceptual knowledge, number lines (also discussed in Chapter 6) are very useful supports. Number lines have numerous benefits for simultaneously building conceptual knowledge and procedural fluency in counting; they provide a conceptual link between arithmetic and geometry and measurement (NRC & Mathematics Learning Study Committee, 2001); and later, they are valuable scaffolds for helping students learn to add, subtract, multiply, and divide. They are also useful when students begin to learn about rational numbers (i.e., fractions, decimals, percent).

Subitizing is another component of early numerical competencies. It refers to the ability to look at a group of objects and immediately recognize how many are in the set. Young children can learn to do this with small sets of two to three objects. Over time, this ability expands to being able to subitize numbers up to 10 and beyond, provided the objects are arranged in a certain way (e.g., dots on dice). Subitizing becomes useful as students learn calculations, as it can remove the need to count each object in some situations.

An additional aspect of numerical competencies for students in the early stages of mathematics development is an understanding of the base-10 system of mathematics. Initially, this involves working with numbers 11–19. Using base-10 blocks, students can learn that a number in the teens is made up of 10 and another set of 1’s. This is an entry point to understanding place value.

Standards such as the Common Core State Standards (CCSS; National Governors Association Center for Best Practices & Council of Chief State School Officers, 2010) provide indications for when skills in counting and other whole-number concepts should be expected. For example, by the end of kindergarten, students should be able to count to 100 by 10’s and 1’s; count forward from a given number; write numbers 1–20; count up to 20 items to tell “how many”; identify whether the number of items in a group is greater than, less than, or equal to the number in another group; compare two numbers within 10 in terms of quantity; and compose and decompose numbers 11–19 into 10 and 1’s. Counting to 120, and place value understanding of numbers within 100 (i.e., some number of 10’s and 1’s), are expected by the end of first grade.
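The composite-number idea that ten frames reinforce (e.g., 8 as two 4’s, 5 and 3, or 7 and 1) can be illustrated with a short enumeration; the function below is our sketch of the concept, not an instructional tool.

```python
# Enumerate the ways a whole number splits into two whole-number parts,
# mirroring the decompositions a ten frame makes visible (e.g., 8 = 5 + 3).
def two_part_decompositions(n):
    return [(a, n - a) for a in range(n + 1)]

print(two_part_decompositions(8))
# [(0, 8), (1, 7), (2, 6), (3, 5), (4, 4), (5, 3), (6, 2), (7, 1), (8, 0)]
```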
Fluency with Number Combinations: The Entrance to Computation

Students begin learning calculation skills by learning number combinations (i.e., “math facts”), which refer to simple single-digit computations, such as 2 + 3 = 5. Students first learn how to compute number combinations accurately and, through practice, build fluency with number combinations so that they can be recalled automatically. There is no longer any debate about the importance of becoming fluent in recalling number combinations (NRC & Mathematics Learning Study Committee, 2001; NMAP, 2008). The ability to recall the answers to problems such as 8 + 5, 7 – 4, 6 × 4, and 8 ÷ 4 immediately from memory affords immense and broad-reaching benefits. These benefits stretch across nearly all mathematics experiences students will encounter, even into advanced mathematics such as calculus and trigonometry. Automatic recall of
basic number combinations makes solving complex operations easier and more accurate. Further demonstrating its importance, a lack of automaticity with number combinations is a hallmark characteristic of students with mathematics difficulties (Cirino et al., 2007; Cumming & Elkins, 1999; Geary et al., 2007). Across domains such as multi-digit computation, word problems, pre-algebra and algebra, fraction operations, and geometry, where one error in calculation can mean the answer for the whole problem will be incorrect, students who can immediately recall the answers to number combinations are more accurate and more efficient problem-solvers than students who need to mentally compute number combinations. Given the importance of this skill, it is no wonder that slow and inaccurate number combination performance is a common characteristic of students with mathematics difficulties (Cirino et al., 2007; Geary et al., 2007). Students who struggle in mathematics tend to use inefficient counting strategies, struggle to commit number combinations to memory, and make more errors with combinations they attempt to retrieve from memory (Geary, 1993; Geary et al., 2007). Furthermore, just as efficient word reading makes reading comprehension possible, automatic recall of number combinations frees cognitive resources so that students can focus on the larger problem and better evaluate the reasonableness of their solution. Inaccurate and inefficient skills with number combinations are a significant barrier to mathematics proficiency. Therefore, students’ accuracy and fluency with number combinations are key targets for assessment and intervention for students with mathematics difficulties.

LEARNING AND BUILDING AUTOMATICITY WITH NUMBER COMBINATIONS
There are important aspects to how students learn number combinations that reinforce the concept behind the operations. These include (1) equipping students with a counting strategy for solving number combinations, while they are learning to commit them to memory, and (2) learning that operations are not distinct, but in fact interrelated (e.g., that addition can be used to solve subtraction problems, multiplication can be used to solve division). Counting skills are the basis for learning number combinations. They provide conceptual understanding of what 2 plus 2 means. More importantly, counting strategies give students reliable means for arriving at answers on their own. They also are particularly important for students with mathematics difficulties in learning number combinations (Powell & L. S. Fuchs, 2012). But there are several counting strategies, some more advantageous than others, and key counting principles to learn that make computation more efficient. Students do not need to have mastered counting before learning number combinations, but they should have a good understanding of the key counting principles in counting to 20: stable order, one-to-one correspondence, and cardinality. Students also must be able to identify numbers. Because more effective counting strategies involve starting with a larger number, it can help if students have an emerging understanding of number quantity (although it is not necessary to have mastered this to start). Knowledge of operation symbols such as + and – can be taught along the way. Specific intervention steps and considerations in teaching number combinations and the most effective counting strategies are described in more detail in Chapter 6; however, we provide an overview here. Learning basic calculations begins with learning addition number combinations (usually, sums to 10 and then sums to 20). Students should always be taught a counting strategy when learning to solve number combinations, by using their fingers, counters, or a number line (preferably, teaching them how to use all three).
Initially, when learning a number combination like 2 + 4, students may use a basic counting strategy such as “counting all,” whereby they count out the first number (on their fingers or with counters), count out the second number, and then count all to arrive at the solution. Although this approach is correct, it involves a lot of counting, and the more counting that is involved, the greater the possibility of errors. Thus, students can learn a “counting on” strategy whereby they start with the first number (i.e., 2) and count up the second number (4) to reach the answer. A breakthrough in understanding computation occurs when students learn the commutative property of addition: It does not matter in what order the numbers are added (i.e., 2 + 4 = 4 + 2). This understanding, plus the ability to identify the larger of two numbers, allows students to learn the counting on from larger strategy: Students start with the larger number and count up the smaller number to arrive at the answer. Counting on from larger is more efficient; it involves less counting and is therefore faster and less prone to error.

Counting strategies are also used for subtraction number combinations. When learning subtraction, students tend to count down, reflecting the “take away” aspect of subtraction. For example, to solve 8 – 5, students may learn to start with 8 and count backward five fingers, thus arriving at the answer of 3. As Powell and L. S. Fuchs (2012) noted, a problem with counting down is that young children make more errors when counting backward. It is therefore more efficient to teach students to count up when solving subtraction number combinations, by first teaching them to start with the subtrahend and count up to the minuend. In the example of 8 – 5, students learn to start with 5, count up to 8, and the answer is the number of fingers counted or hops on a number line. Not only is counting up a more efficient and reliable strategy for solving subtraction number combinations, it also reinforces two important aspects of conceptual knowledge: (1) that subtraction represents a difference between two numbers; and (2) that addition and subtraction are intertwined, not discrete skills (NRC & Mathematics Learning Study Committee, 2001; Powell & L. S. Fuchs, 2012).

Over time, students practice number combinations using their counting strategies. Through sufficient practice opportunities, they begin to commit number combinations to memory because it is more efficient to do so than counting each one out (Powell & L. S. Fuchs, 2012). Teachers can encourage students to memorize the number combinations and use flashcards and practice drills with this goal in mind. However, students will still have their counting strategies as backup if their memory fails. This is also a reason why it is good practice to teach students to use their fingers to count: Fingers are a support that students will always have with them (unlike counters).

Conceptual understanding of the mathematics underlying calculations can be deepened through instruction and practice approaches that (1) highlight the connected, inverse relationships between operations such as addition and subtraction, (2) reinforce concepts such as the commutative property and the correct meaning of symbols such as the equal sign, and (3) begin to teach pre-algebraic conceptual knowledge. For example, students can learn to solve simple addition and subtraction problems in which a missing quantity or the equal sign varies its position, such as 3 + ___ = 4, ___ + 3 = 7, 6 = 8 – ___, and ___ – 2 = 3,
in addition to the standard presentations. This helps establish the knowledge that subtraction can be used to solve addition problems and vice versa. The reader will also notice that these problems resemble miniature equations in which an unknown quantity must be determined from known quantities, an idea that is reflected in the algebraic reasoning involved in word-problem solving and algebra itself. Additionally, these types of activities can reinforce the true meaning of the equal sign, which many teachers (and consequently, students) misinterpret as simply a signal to produce an answer. Instead, the equal sign represents that both sides of the sign are balanced, in other words, equal.

Number lines have been mentioned several times across this section. They are particularly useful for teaching number combinations because they provide a visual reference for using counting strategies to determine answers to addition and subtraction problems, and they reinforce number identification and how numbers relate to each other in terms of order and quantity. Later, students will learn multiplication and division number combinations using a number line, where they learn to skip count (i.e., count by 2's, 3's, 4's, 5's) to solve multiplication facts. Like the inverse relation between addition and subtraction (i.e., subtraction "undoes" addition), students learn that multiplication and division also share an inverse relation whereby division "undoes" multiplication (NRC & Mathematics Learning Study Committee, 2001).

Thus, developing automaticity with number combinations represents a significant step toward the development of mathematics proficiency. When number combinations are recalled automatically, not only are students more accurate problem solvers, but cognitive resources are also freed to focus on other aspects of the problem, promoting deeper understanding, more flexible problem solving, and understanding of the connections across different types of skills and operations. Automaticity facilitates success with larger and more complex procedural computation and with solving word problems (Clarke et al., 2015; L. S. Fuchs, D. Fuchs, Compton, et al., 2006; Geary et al., 2007). It is involved in solving calculations with fractions and other rational numbers, implicated in solving some geometry and measurement problems, and embedded in solving algebraic equations. It is a skill that students will benefit from throughout their mathematics careers.

Based on standards such as the CCSS, students are expected to know addition and subtraction number combinations within 20 (and to be fluent with combinations within 10) by the end of first grade. Fluency with multiplication and division number combinations within 100 is expected by the end of third grade (National Governors Association Center for Best Practices & Council of Chief State School Officers, 2010).
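A brief illustration (our example, not drawn from the sources cited above) may make the number-line strategies concrete. To solve 4 × 3 on a number line, a student skip counts by 3's, making four hops: 3, 6, 9, 12. The same number line displays the inverse relation: starting at 12, it takes four backward hops of 3 to return to 0, so 12 ÷ 3 = 4. Similarly, for 8 – 5, a student who counts up from 5 to 8 makes three hops, showing at once that the difference between 5 and 8 is 3 and that 5 + 3 = 8.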
Key Mathematics Skill Domains

In the text that follows, we describe the mathematics skill domains that support algebra and more advanced forms of mathematics, depicted to the right of number combinations in the keystone model in Figure 2.3. Success in these domains is made possible by emerging proficiency with the early numerical competencies and basic computation skills that precede them. Fluency with all number combinations is not necessary before instruction in the other domains can begin, as proficiency with number combinations can continue to develop as students start to learn these new concepts and operations. Scholars have worked to dispel outdated notions that students must reach a certain age or grade to begin to learn certain concepts and skills, such as fractions, word-problem solving, or pre-algebraic reasoning. Large-scale research reviews have demonstrated that, rather than being "old enough," the key factors that determine whether a student will be successful in a new mathematics skill depend on whether they have learned the
foundational skills needed before it (NRC & Mathematics Learning Study Committee, 2001; NMAP, 2008). Whole-number concepts are foundational to all other domains of mathematics; therefore, building proficiency with whole numbers (including number combinations) serves as a primary mathematics skill foundation. Within the description of each domain in the following sections, we indicate the foundational conceptual knowledge and skills that students should have to succeed in that domain. The absence of these foundational skills indicates areas on which to focus assessment and intervention.

PROCEDURAL AND MULTIDIGIT COMPUTATION
This area refers to learning procedures and algorithms to compute with multidigit numbers (e.g., 379 + 256). Success in procedural computation is made possible by key foundational concepts and skills. First is automaticity with number combinations, without which solving multidigit computation problems would be slow and highly prone to error. Consider the example of 379 + 256 and the number of single-digit number combinations needed to complete it. Second is a working understanding of place value, which will be further developed as students build skills in this area. The third conceptual area that benefits multidigit computation (and is made possible by number combinations and place value) is understanding the composite nature of numbers. For example, understanding that 22 is made up of 10 + 10 + 2, or 20 + 2, can help ease students' completion of multidigit computation problems through decomposition, allowing them to quickly compute some multidigit problems mentally. Decomposing numbers for computation is made possible by understanding the associative property, which states that numbers can be added or multiplied in any groupings without changing the result. For instance, when solving 22 + 28, students can decompose 22 into 10 + 10 + 2, and 28 into 10 + 10 + 8. Because the parts can be added in any order, for some students it may be easier to count up the 10's to make 40, then add 8 + 2 to make another 10, and thereby reach the answer of 50.

Although students can learn ways to compute some multidigit problems mentally through decomposition, they will still need to learn multistep procedures for solving more complex problems. These multistep procedures are called algorithms and can include regrouping (previously referred to as carrying and borrowing). Recommendations by the NRC and Mathematics Learning Study Committee (2001) and NMAP (2008) stress that, as in most other areas of mathematics, students should understand the concepts underlying the procedures in addition to learning the algorithms. Understanding what the algorithm is designed to do helps students connect concepts to procedures. The NRC and NMAP also recommend that students learn a standard algorithm and alternatives for solving multidigit computation problems, as opposed to just one. Conceptual knowledge combined with knowing alternative algorithms provides students with greater flexibility for solving problems and deepens conceptual understanding. Students can also learn to compare algorithms to determine which are more or less useful.

National standards such as the CCSS indicate when procedural computation skills are expected. Adding numbers within 100, as well as being able to mentally add or subtract 10 from a two-digit number, is expected by the end of first grade. Adding up to four two-digit numbers, adding and subtracting numbers within 1,000, and mentally adding or subtracting 10 or 100 from a two- or three-digit number are expected by the end of grade 2. Multiplying and dividing numbers within 100, and multiplying a one-digit number by 10, are expected by the end of third grade. Multiplying and dividing multidigit numbers are expected by the end of fourth grade.
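Returning to the example of 379 + 256, a short worked illustration (ours, added for clarity) shows how many basic combinations are embedded in a single multidigit problem. Using the standard column algorithm: 9 + 6 = 15 (write 5, regroup 1 ten); 1 + 7 + 5 = 13 (write 3, regroup 1 hundred); 1 + 3 + 2 = 6; the answer is 635. A student who must count out each of these combinations works slowly and has several opportunities to err, whereas a student who recalls them automatically can devote attention to the regrouping procedure itself.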
Rational Numbers and the Importance of Fractions

Rational numbers, in contrast to whole numbers, include fractions, decimals, and percents. Understanding what rational numbers are, how they relate to whole numbers, and how to perform operations with them (i.e., addition, subtraction, multiplication, and division) are important for more advanced mathematics. In particular, conceptual knowledge and operations with fractions have been identified as critical foundations for algebra, given the extensive use of fraction operations in solving equations (Booth et al., 2014; NMAP, 2008). Rational number knowledge is also crucial to understanding ratio and proportion, which are heavily involved in algebra and advanced mathematics.

Instruction in rational numbers typically begins in third grade, starting with fractions. As with nearly all domains of mathematics, effective instruction in fractions integrates conceptual knowledge. Traditional approaches to fraction concepts have tended to emphasize the part–whole aspect of fractions. Although a fraction indeed represents a part of a whole, part–whole interpretations are inadequate for building proficiency with fractions (NRC & Mathematics Learning Study Committee, 2001). Furthermore, emphasizing part–whole tends to encourage students to consider the numerator and denominator of a fraction as two separate whole numbers, which can ultimately become problematic because a fraction is a single number (L. S. Fuchs, Malone, et al., 2017; NMAP, 2008). Instead, it is recommended that instruction target fraction understanding in terms of magnitude, including understanding where to place fractions on a number line in relation to other fractions and whole numbers (Gersten et al., 2017; Siegler et al., 2011). In contrast to a part–whole perspective, a magnitude emphasis reinforces the understanding that a fraction is a number and that its position on the number line is determined by its numerator and denominator. Interventions have demonstrated success using a magnitude approach to improving fraction understanding and operations for fourth graders with mathematics difficulties (see L. S. Fuchs, Malone, et al., 2017). Magnitude comparison and the use of number lines can also be helpful as students learn about decimals and percents.

Another aspect of developing conceptual knowledge of rational numbers relates to the previous point: One rational number can be represented multiple ways. For example, 3/4 can also be represented as .75 and 75%. As the NRC and Mathematics Learning Study Committee (2001) stated, a rational number "has multiple personalities" (p. 233). The idea that the same number can be represented multiple ways is challenging for students. Once again, number lines come to the rescue: They help show how 3/4, .75, and 75% occupy the same "address" on a number line, how adding decimal places increases the precision of the number's location on the line, and how that position compares to other rational numbers and whole numbers.

Conceptual understanding of rational numbers, particularly their magnitude, facilitates success with fraction operations (addition, subtraction, multiplication, and division). Fraction operations may begin with multiplying a whole number by a fraction, and adding and subtracting mixed numbers with like denominators. Additional skills follow, including addition and subtraction of fractions with unlike denominators, and multiplication and division of fractions.
The power of automaticity with number combinations applies here again, as solving fraction operations often involves solving multiple number combinations. As will be observed later, fraction operations are often involved in word-problem solving (which frequently requires building and solving equations) and other pre-algebraic concepts. These skills set the stage for learning ratios and proportional relationships.
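A brief worked example (ours, added for illustration) shows both points. Adding 1/2 + 1/3 requires renaming both fractions with a common denominator, 3/6 + 2/6 = 5/6, which embeds several number combinations (2 × 3, 1 × 3, 1 × 2, and 3 + 2) within a single problem. Magnitude understanding also supports a quick reasonableness check: Because each addend is greater than 0 and the sum must be less than 1, an answer such as 2/5 (produced by the common error of adding numerators and denominators) can be flagged as implausible, since it is smaller than one of the addends.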
In summary, understanding and using rational numbers are facilitated by foundational conceptual knowledge and skills with whole numbers, including counting, number quantity and relations, and fluency with number combinations. Understanding and using rational numbers, especially fractions, are essential foundational skills for algebra and advanced mathematics, and they are skills that can be improved through intervention. Difficulties with fractions are predictive of problems with algebra. Thus, rational number understanding and operations skills are good targets for assessment and intervention for students with mathematics difficulties in middle and late elementary grades.
Word-Problem Solving and Early Algebraic Reasoning

Word problems in the literature (and the way we refer to them here) are mathematics problems that are presented linguistically, involve references to known quantities, and require an arithmetic solution to determine an unknown quantity (NRC & Mathematics Learning Study Committee, 2001; L. S. Fuchs, D. Fuchs, Compton, et al., 2006). Word problems may be presented as sentences that students read, as pictures or icons, or may be read orally to students. Algebraic reasoning is wrapped up in word-problem solving (WPS) because determining an unknown quantity from other known quantities is exactly what happens in solving algebra equations. Solving word problems involves an array of skills in arithmetic, reasoning and concept formation, language, reading, attention, and other cognitive processes (L. S. Fuchs, D. Fuchs, Compton, et al., 2006; Lin, 2021; Spencer et al., 2020). Success with WPS has significant positive implications for students' achievement in algebra, advanced mathematics, and college and career success. Given its importance and complexity, WPS is a significant part of mathematics instruction across grade levels, even as early as kindergarten, where students learn to formulate and solve problems in story form or using pictures. As will be apparent in this section, WPS serves as a source for teaching pre-algebraic concepts and skills from the early elementary grades.

WPS is influenced by skills that underlie reading comprehension, such as word reading efficiency and vocabulary knowledge. Skills like vocabulary knowledge are implicated even when word problems are read aloud to students (L. S. Fuchs, D. Fuchs, Compton, et al., 2006). Considering the SVR discussed in the earlier sections on reading, if students cannot read the words accurately, or if they can read the words but do not understand what the terminology or phrases in the problem mean, their comprehension of the problem will be impaired and they are unlikely to solve it correctly. Therefore, an assessment of a student with difficulties in WPS should determine whether reading difficulties or a lack of familiarity with necessary vocabulary are affecting WPS performance, especially when a classroom expectation is that students can solve word problems by reading. In addition to reading and language skills, research indicates that several mathematics skill domains are particularly important for WPS.

SCHEMA-BASED PROBLEM SOLVING AND EARLY ALGEBRAIC REASONING
Earlier approaches to teaching students to solve word problems (which are still common in some schools) involved teaching students to look for “keywords” to signal what operation students should use to solve a problem, such as “combined” or “take away.” Scholars have noted several problems with keyword approaches to WPS (NRC & Mathematics Learning Study Committee, 2001). Many words and phrases in word problems are
irrelevant for solving the problem or are included by the problem writer to intentionally mislead. Some word problems may not use any of the keywords that students have been taught. Keyword approaches can also distract students from the overall structure of the problem, which is needed to build accurate mental models of the overall situation in the problem and connect that model to a standard solution framework.

Rather than keyword approaches to WPS, more effective methods involve schema-based instruction. A "schema" refers to a structure or framework used to organize and represent information or procedures. Word problems that elementary students encounter generally represent three types: total (i.e., combine quantities to find a new amount), difference (i.e., compare amounts to determine a difference between them), and change (i.e., increase or decrease a starting amount by another amount to identify a new amount). In each of these types of problems, the known and unknown amounts can vary across word problems. The important aspect, however, is that word problems of each type share similar characteristics in terms of their structure, and each involves a different but standard type of computational approach (i.e., a problem-solving schema) to solve it. Thus, schema-based approaches to WPS involve teaching students to recognize and categorize word problems by type (i.e., total, difference, change), and then apply an organizational framework (schema), often using a graphic organizer or diagram, for depicting the known quantities, unknown quantities, and the arithmetic needed to solve the problem. The framework is designed to help students focus on relevant information and provide a structure for completing the arithmetic needed to solve the problem accurately. For example, students learn a standard framework for depicting and solving total problems, a standard framework for solving difference problems, and so on. Schema-based instruction for improving WPS has been studied extensively, and interventions are reviewed in more detail in Chapter 6.

A key aspect of some of these approaches is the integration of algebraic reasoning and concepts within WPS, in addition to number combination fluency and procedural computation (e.g., L. S. Fuchs, Powell, et al., 2009; Powell, Berry, et al., 2021). These methods teach students to identify the type of word problem and represent it with a simple equation that includes X to represent the unknown quantity (e.g., 10 + X = 18). Students then learn to "find X," like a pirate finding treasure on a map, by solving the equation.
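For illustration, hypothetical problems of each type (our examples, not drawn from the cited programs) might look like this. Total: "Ana has 5 marbles and Ben has 3 marbles. How many marbles do they have altogether?" (5 + 3 = X). Difference: "Ana has 8 marbles and Ben has 3 marbles. How many more marbles does Ana have than Ben?" (8 – 3 = X, or equivalently 3 + X = 8). Change: "Ana had 8 marbles and gave some to Ben. Now she has 5 marbles. How many did she give away?" (8 – X = 5). The surface stories differ, but each problem maps onto a standard equation in which the unknown quantity X can occupy different positions.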
NUMBER COMBINATION FLUENCY AND CALCULATION SKILLS

The role of computation and calculation skills in WPS is self-evident: Once a word problem is correctly interpreted in terms of identifying the relevant known and unknown quantities, and a correct solution strategy is determined, all that stands in the way of the correct answer are accurate calculations. In her meta-analysis, Lin (2021) found that mathematics computation skills (i.e., skills requiring use of algorithms) were one of the strongest predictors of WPS performance for students overall. When looking specifically at studies with younger students, number combination fluency was the primary computation skill predicting WPS, whereas with older students, skills in more complex computation (of which number combinations are a part) were the primary computation predictor. Similar findings were observed by L. S. Fuchs et al. (2006) with third graders, but in this case, fluency with number combinations was the primary predictor of WPS over multidigit computation (perhaps owing to the types of calculations needed in the word problems students were asked to solve). Regardless, research illustrates the powerful and broad-reaching benefits of students' accuracy and automaticity with number combinations, and subsequently, skills in algorithmic computation.
Additionally, as reviewed in Chapter 6, several contemporary WPS interventions embed instruction and practice in building number combination fluency as part of the intervention, such as Pirate Math (L. S. Fuchs, Powell, et al., 2009) and Pirate Math Equation Quest (Powell, Berry, et al., 2021). Furthermore, evidence suggests that embedding training in number combinations, teaching the meaning of the equal sign (i.e., that it means both sides of the sign are balanced, or equal), and learning to balance (i.e., solve) simple equations with number combinations such as X + 2 = 5 or 7 – X = 3 are particularly effective; they not only improve number combination fluency and WPS, but enhance students' ability to transfer skills to procedural computation and algebra (L. S. Fuchs, Powell, et al., 2009; Powell & L. S. Fuchs, 2010; Powell, Berry, et al., 2020).

LANGUAGE, ESPECIALLY VOCABULARY
As depicted in the keystone model in Figure 2.3, language skills and knowledge are involved in multiple aspects of mathematics development. However, language has a particularly strong relation to WPS given the roles that reading comprehension or listening comprehension play in understanding a word problem. Studies indicate that language, including vocabulary, is uniquely predictive of WPS performance even when skills in computation, attention, and other factors are accounted for (L. S. Fuchs et al., 2006; Lin, 2021). Mathematics vocabulary is extensive (Hughes et al., 2016); having underdeveloped vocabulary knowledge overall, or inadequate knowledge of mathematics vocabulary, can play a significant role in WPS performance and whether students benefit from WPS instruction. Thus, language comprehension, and vocabulary knowledge in particular, serve as targets for assessment and intervention for students with WPS difficulties.

ATTENTION AND SELF-REGULATION
Another important factor in WPS performance is students' ability to regulate their attention, engagement, and effort. Although self-regulation and attention are involved in all aspects of mathematics, and in fact all aspects of academic achievement, WPS is a context in which they are particularly important. Solving a word problem takes longer than other academic responses, such as reading a word or solving a number combination, and therefore requires sustained engagement. WPS also requires the coordination of multiple skill and knowledge sources. It requires considerable concentration for identifying a problem-solving schema, representing the relevant information arithmetically, and solving the problem. Therefore, it is not surprising to see attentive behavior play a significant and unique role in predicting WPS performance even when other mathematics and language skills are accounted for (L. S. Fuchs, D. Fuchs, Compton, et al., 2006; Lin, 2021). It is also a reason why many WPS interventions embed strategies for promoting students' motivation, engagement, and sustained effort, as noted in Chapter 6.

In summary, WPS is a complex skill that has important implications for subsequent mathematics achievement and is one of the most common areas of mathematics difficulty (NMAP, 2008). WPS difficulties can result from skill deficits with whole numbers and computation, but because additional factors influence WPS, deficits in WPS can also exist even when whole-number and computation skills are adequate. Therefore, assessment and intervention for WPS difficulties involve considering several primary factors that underlie it.
Geometry and Measurement

The next keystone skill depicted in Figure 2.3 is geometry and measurement. NMAP (2008) noted that aspects of geometry and measurement are foundational to success in algebra and advanced mathematics. Foundations of geometry begin in PreK and kindergarten with shape recognition, and the use of number lines in counting and computation instruction in early elementary grades provides a foundation for measurement (NRC & Mathematics Learning Study Committee, 2001). Consistent with NMAP (2008) recommendations, other geometry and measurement skills relevant for algebra success are expected by grades 5–7. These skills include solving problems involving perimeter and area of triangles and quadrilaterals, surface area and volume, and relationships among triangles and the slope of a line. Solving these types of problems involves algebraic reasoning in determining an unknown (e.g., the length of a side) from other known information, and whenever calculations are involved, fluency with number combinations will be involved in promoting students' success. Operations with both whole and rational numbers are involved in solving geometry and measurement problems, as they are in algebra.
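A worked illustration (ours, added for clarity) shows how these elements combine: If a rectangle has a perimeter of 24 cm and one side is 7 cm long, then 7 + 7 + w + w = 24, so 2w = 24 – 14 = 10 and w = 5 cm. Determining the unknown side involves algebraic reasoning about a missing quantity, and the calculations themselves draw on number combinations (7 + 7 = 14, 24 – 14 = 10) and an understanding of the equal sign as a balance.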
Algebra

As illustrated across this section, algebra is the culmination of several domains of foundational skills, and the absence or underdevelopment of one or more areas has negative implications for success in algebra. Increasing attention is being directed toward conceptual understanding and procedural fluency with whole and rational numbers for success with algebra (Ketterlin-Geller & Chard, 2011).

Because Algebra I is not typically taught until eighth grade, it is very likely that students with mathematics difficulties will have been identified in earlier years. Thus, referrals for mathematics difficulties at this stage are less common, especially in schools with a well-functioning universal screening system in the elementary grades. Nevertheless, referrals for students in middle and secondary grades for mathematics difficulties still occur. It is possible that in earlier grades, some students are able to get by with moderately adequate mathematics skills, but their skill gaps become increasingly problematic as the complexity of the operations and concepts increases in middle and secondary grades. Although Algebra I may be the first time a student is referred for mathematics difficulties, it does not necessarily mean the student has an "algebra" problem. Just as problems in reading comprehension do not necessarily mean the student has a "reading comprehension problem," the multicomponent nature of algebra will often mean that a student's problems are not algebra-specific but due to a lack of conceptual knowledge and procedural fluency in one or more areas that underlie it. Assessment and intervention for a student at this stage require assessing their proficiency in the foundational skills that make algebra success possible. These difficulties may include inadequate conceptual knowledge or procedural fluency with number combinations, procedural computation, fraction concepts and operations, and word-problem solving. Interventions specific to algebra exist and are discussed in Chapter 6, but very often intervention will include remediation of foundational skills.

Additionally, students in middle and secondary grades may be referred for academic difficulties that might be less related to skill deficits. Studies have demonstrated that academic motivation and overall engagement often decline as students progress through
grades, particularly in middle and high school (e.g., Wang & Eccles, 2013). Students become less interested in succeeding academically or develop other interests that lead to lower rates of homework completion, poor study habits, ambivalence about grades, and low effort on work and assignments, all of which can contribute to poor achievement and may consequently result in referrals. Additionally, internalizing problems such as anxiety and depression become more evident across the middle and secondary grades (McLaughlin & King, 2015) and can significantly impact academic performance. Anxiety related to mathematics is especially prevalent (Dowker et al., 2016). For these reasons, it is particularly important that when students in middle and secondary grades are referred, evaluators consider affective factors such as motivation and possible internalizing issues in addition to their academic skills.
Writing

Writing is highly important for school success, and beyond high school, good writing skills create opportunities for postsecondary education and employment that would not otherwise be available. The challenge, and the reason so many students struggle with writing, is that written expression is a complex skill dependent on numerous interrelated subskills, motor skills, and orchestrated cognitive, behavioral, and linguistic processes. As Graham et al. (2017) noted, "It requires the orchestration of handwriting, typing, spelling, and sentence construction skills that allow composing to take place; strategies for planning, evaluating, monitoring, drafting, and revising text; topic, genre, linguistic, and semantic knowledge for creating meaning; and the motivational aspirations to put these skills, strategies, and knowledge into play" (p. 199).

Writing skills are historically a common area of difficulty for students, especially from the middle elementary grades and beyond. Writing difficulties also often co-occur with other academic skills problems and learning disabilities (Fletcher et al., 2019). Difficulties in writing are particularly common among struggling readers; Katusic et al. (2009) observed in a population-based cohort that among students with writing difficulties, 75% also experienced problems in reading. Gender differences are most prominent in writing difficulties—girls tend to outperform boys across most studies, and boys are more likely than girls to be identified with a learning disability in writing (Berninger et al., 2008; Katusic et al., 2009).

A keystone model of writing is displayed in Figure 2.4. This model is based on extensive theory and research on writing development and writing difficulties (e.g., Abbott & Berninger, 1993; Abbott et al., 2010; Berninger et al., 2006; Gillespie & Graham, 2014; Graham, McKeown, et al., 2012; Graham, Collins, et al., 2017; Hayes, 2012; Santangelo, 2014; Troia et al., 2013). Like the keystone models for reading and mathematics, the writing model is not intended to describe all aspects of writing development. It is designed to reflect skills that, based on theory and evidence, make primary targets for assessment and intervention.
Common Characteristics and Skill Difficulties of Struggling Writers

The writing of students with writing difficulties, compared to that of typically achieving peers, reflects several common characteristics. In a review of research on students with writing difficulties, Graham et al. (1991) observed that their writing tends to be sparse, unpolished, and lacking coherence. Their output and production are lower, and they write more slowly and make more errors, both when writing by hand and when typing.
[Figure 2.4 appears here. Component labels in the figure: Oral Language and Reading; Transcription: Handwriting (or Typing) and Spelling; Composition: Planning, Drafting, Grammar, Syntax, Revising; Self-Regulation, Motivation; Accurate; Efficient; Writing Quality.]

FIGURE 2.4. The keystone model of writing.
Revising, a key aspect of improving writing organization and overall quality, is minimal for students with writing difficulties, and when it occurs, revisions are limited to surface-level changes such as spelling, punctuation, or phrase changes that have little bearing on the overall quality of the composition. These students also lack knowledge of the writing process. In a more recent and comprehensive review, Graham et al. (2017) examined the writing skill areas in which the magnitude of skill differences between students with and without learning disabilities was greatest. Areas with the largest differences included spelling, conventions (i.e., measures of overall correctness of spelling, syntax, grammar, or handwriting legibility), quality (overall assessed quality of writing), organization, vocabulary, output, genre elements (i.e., inclusion of characters, setting, and events), sentence fluency, handwriting, grammar and syntax, self-efficacy, and motivation. Clearly, students with writing difficulties can struggle across a host of writing skill areas.
Two General Domains of Writing Skills

Although writing is influenced by numerous skills and knowledge sources, writing assessment and intervention can be simplified by categorizing writing difficulties into two interrelated domains (depicted in Figure 2.4): transcription skills (i.e., handwriting, spelling) that are used to put ideas into writing, and composition skills that involve the planning, formulation, organization, and revising of a written product. Transcription and composition are influenced by oral language skills and processes involved in self-regulation, motivation, and executive function that influence the quantity and quality of a student's writing. We first discuss transcription and composition (as they would be the primary areas targeted in assessment and intervention for writing), and then discuss how language, self-regulation, and motivation influence students' writing products.

TRANSCRIPTION SKILLS
Transcription refers to skills needed to put words on paper, specifically, handwriting (or keyboarding) and spelling (Berninger & Amtmann, 2003; Berninger, Abbott, et al., 2006). A review by Graham et al. (2017) observed that several of the areas in which students with writing difficulties struggled the most were in transcription and included
spelling, conventions (overall correctness of spelling, syntax, grammar, or handwriting legibility), overall writing output, sentence fluency, and handwriting. The reason transcription skills are so important is yet another instance of the ways in which automaticity with lower-order skills makes higher-order, complex academic skills possible, and how difficulties arise when automaticity with those lower-order skills is not present. The more quickly and efficiently words can be transcribed, the more cognitive resources are saved for focusing on writing quality (Limpo et al., 2017). This makes difficulties with transcription a primary "bottleneck" that prevents more productive and better-quality writing products. Young children's ability to tell stories far exceeds their ability to write about them because their basic skills in writing are underdeveloped. The same is often true for struggling writers at any age; when students' transcription skills are poor, they struggle to write words, must constantly stop to think about how letters are formed or words are spelled, struggle to think of the right words for the context, or labor over putting words into grammatically or syntactically correct sequences. This is even the case for students in middle and secondary grades (Limpo et al., 2017). Hence, these students write very little, and what they produce is of poor quality. Their writing is often sparse and disorganized, and the effort needed for basic aspects of writing makes it difficult for them to view their composition from a holistic level. Little motivation is left over for revising to improve it.

The discussion above illustrates that good transcription skills involve more than just simple legibility in handwriting or simple accuracy in spelling. Fluency with these skills is critical. As in reading and mathematics, fluency reflects efficient use of finite cognitive resources—it is the result of skills easily retrieved and called up when needed with very little conscious effort—so that those resources can be used elsewhere. Thus, handwriting and spelling fluency are important assessment targets for students with writing difficulties. Further underscoring the importance of handwriting and spelling as assessment targets is that they can be improved through intervention and that improvements in these transcription skills improve students' writing fluency and output (Graham, Harris, & Fink, 2000; Graham et al., 2002, 2018; McMaster et al., 2018). Keyboarding (i.e., typing on computers and using word-processing programs) also benefits students' writing, especially for students with handwriting difficulties (Graham & Perin, 2007; Graham et al., 2012). In short, improving transcription removes barriers to written expression. However, as reviewed in Chapter 6, additional intervention components beyond transcription are needed to improve composition quality.

COMPOSITION SKILLS
Composing text involves skills and strategies in formulating ideas, planning, drafting, reviewing, editing, and revising. After transcription difficulties, it is the skill area in which students with writing difficulties struggle the most, especially in later grades. Harris et al. (2003) noted, “Research indicates that students with learning disabilities lack critical knowledge of the writing process; have difficulty generating ideas and selecting topics; do little to no advance planning; engage in knowledge telling; lack important strategies for planning, producing, organizing, and revising text; have difficulties with mechanics that interferes with the writing process; emphasize mechanics over content when making revisions; and frequently overestimate their writing abilities” (pp. 1–2). Effective composition involves treating writing as a multicomponent process, with activities and strategies required before, during, and after writing. As reviewed in Chapter
6, some of the most effective forms of intervention for students with writing difficulties explicitly teach students strategies for planning, drafting, and revising (Gillespie & Graham, 2014; Graham, MacArthur, et al., 2007; Graham, McKeown, et al., 2012). Difficulties with composition will be clearly evident in a student's piece of writing. In addition to being short, the composition will likely be disorganized, lack a coherent theme or argument, lack details, and include multiple errors of various kinds. Students may struggle with composition even when their transcription skills are adequate. Thus, a key aspect of an assessment of writing skills is to determine whether the student's difficulties primarily involve problems with transcription, composition, or both. Both may be important targets of assessment and intervention.
Other Skills, Processes, and Knowledge Sources Underlying Writing

Although transcription and composition are the two primary writing skill areas to attend to when assessing students' writing difficulties and identifying interventions, two other domains deserve attention to help tailor the intervention recommendations for a student's needs.

MOTIVATION AND SELF-REGULATION
Motivation to put forth effort and do well is important across academic areas, but it is particularly relevant for writing. Good writing involves several strategic, goal-directed behaviors beyond simply writing words; for instance, planning and revising require the motivation and commitment to make one's writing better. Thus, the difference between good and poor writing involves not only writing skills, but also the desire and effort to make the writing better. Although intervention is often needed to improve skills in transcription or composition, low motivation to write will often result in writing products that are sparsely written and of low quality, the result of the student getting the task done as quickly as possible. Studies have consistently demonstrated a link between students' motivation, attitudes, and self-efficacy about writing and their writing achievement (Graham, MacArthur, et al., 2007; Graham, Collins, et al., 2017; Troia et al., 2013). These studies also tend to demonstrate that boys are more likely to have negative attitudes about writing and less motivation to write. Attitudes and motivation can be addressed in writing interventions (see Chapter 6); therefore, it may help to determine during an assessment whether low motivation and negative attitudes about writing are contributing to the student's low writing achievement.

ORAL LANGUAGE AND READING
Writing involves transferring thoughts to text, which naturally implicates language skills including vocabulary, syntax, and overall discourse-related skills. Language skills are positively correlated with students’ writing achievement (Abbott & Berninger, 1993; Kim & Schatschneider, 2017). Students with stronger language skills have access to a broader vocabulary base, greater familiarity with diverse syntax, and often are better able to articulate thoughts and craft sentences in various ways. Consequently, they are in a better position to produce more informative, effective, and interesting writing. Access to this linguistic knowledge base may be limited for students who have experienced less exposure to rich language, are learning English, or have an underlying language disability.
Another skill domain is closely intertwined with language and writing: reading. Studies have indicated that students' reading skills are related to their writing achievement in ways that are unique from the relation of language to writing (e.g., Ahmed & Wagner, 2020; Abbott & Berninger, 1993). Writing and reading are connected by language, a close interrelationship that Berninger et al. (2003) referred to as "language by hand and language by eye." Reading skills are predictive of writing achievement because reading strengthens and deepens language. Reading exposes students to more varied and sophisticated vocabulary, syntax, and general language compared to what they hear in everyday speech (Crain-Thoreson et al., 2001). Through reading, students also become familiar with story and text structures. This base of knowledge accumulated through reading likely contributes to stronger writing skills. The same can be said for graduate students learning to write in a scholarly style; the more research papers that students read, the more accustomed they become to writing that way.

This is not to say that students who are merely provided opportunities to read and write will automatically become good writers. This is the assumption behind the workshop model (e.g., Writers Workshop; Calkins, 2003) and other whole-language approaches that rely primarily on exposure to rich literature and extended student opportunities to write, which have demonstrated weaker evidence of effectiveness (Gillespie & Graham, 2014; Graham et al., 2012). As reviewed in Chapter 6, explicit writing instruction is critical.

Reading benefits writing, but writing also benefits reading. In a meta-analysis of experimental studies, Graham and Hebert (2011) found that writing instruction improved students' reading comprehension, as well as reading fluency, and that increasing how much students wrote improved reading further. Writing likely benefits reading comprehension by drawing more attention to text and facilitating more careful scrutiny of it: "[Writing] provides students with a tool for visibly and permanently recording, connecting, analyzing, personalizing, and manipulating key ideas in text" (Graham & Hebert, 2011, p. 712). Thus, reading and writing share a reciprocal relationship, one that is connected by language. These factors underscore the importance of assessing the reading skills of students referred for writing difficulties and vice versa. It is also a reason to consider the integration of reading and writing strategies for students with difficulties in one or both areas.
Summary: Identifying Targets for Assessment and Intervention

Three key aspects are consistent across the models of reading, mathematics, and writing. The first is that in each academic domain, automaticity with basic subskills makes more complex processes possible. In reading, it is word reading efficiency; in mathematics, it is automaticity with number combinations; and in writing, it is transcription fluency. Without accuracy and efficiency in these subskills, reading comprehension, complex mathematics, and written expression are extraordinarily difficult. The second aspect consistent across models is the role of language. The central influence that language has across all academic skills is not always appreciated. It is hoped that the preceding discussion has helped remind practitioners of its importance in addition to other keystone skills. The third aspect consistent across models is the presence of interactive, reciprocal relationships among skill and knowledge sources. Skills interact and support each other to facilitate more complex skills; they rarely develop in isolation.
The identification of assessment targets that contribute meaningfully to designing intervention is made possible by understanding the critical keystone skills involved in the development of, and difficulties in, an academic domain. This knowledge helps the evaluator better recognize the relevant skills to assess, identify measures of those skills, and interpret the data. This knowledge also places the evaluator in a better position to identify interventions that address the skills most relevant for improving achievement.
SKILLS AND PROCESSES THAT ARE RARELY GOOD TARGETS FOR ACADEMIC ASSESSMENT AND INTERVENTION

By this point, readers will likely have noticed that we did not identify specific neuropsychological and cognitive processes as important potential targets, even though they are commonplace in much school psychology assessment practice. We recognize that academic skills are the result of cognitive processing. However, as discussed in Chapter 1, our perspective, backed by evidence, is that assessment of neuropsychological and cognitive processes such as working memory, visual-spatial reasoning, executive functioning, processing speed, sensory–perceptual–motor abilities, and overall estimates of intellectual ability is often of little value for designing intervention and, as such, holds little value for assessment. Across decades, studies have observed that although they are correlated with academic achievement, neuropsychological and cognitive processes are weak predictors of students' ability to learn and responsiveness to instruction, poor discriminators of students who struggle academically versus those who do not, and, when targeted specifically through intervention, are not associated with appreciable improvements (Arter & Jenkins, 1979; Burns et al., 2016; Cirino et al., 2017; Fletcher et al., 2019; Fletcher & Miciak, 2017; Jacob & Parkinson, 2015; Kassai et al., 2019; Kavale & Forness, 1987; Kearns & Fuchs, 2013; Melby-Lervåg et al., 2016; Redick, 2019; Redick et al., 2013; Stuebing et al., 2015). Consistently, the literature indicates that even when specific processes underlying basic academic skills could be adequately identified and targeted through training, changes were observed only for the component skill trained, and any transfer from improvements in these processes to increases in academic performance tended to be negligible. Working memory training is one common example; in Chapter 1, we reviewed evidence that attempts to train working memory do not result in improvements in academic skills.
SUMMARY AND CONCLUSIONS: IDENTIFYING TARGETS FOR ASSESSMENT AND INTERVENTION

What are the primary areas in which clinicians and educators should focus an assessment of academic skills? Although it is never straightforward, the answers primarily rest with the variables that are directly involved in learning and academic responding:

1. Academic skills relevant to the academic skills problem. If academic skill improvement is the goal, focus assessment and intervention on relevant and important academic skills. In this chapter, we identified "keystone" skills that are important for development in reading, mathematics, and writing, and are reasons for academic difficulties when
absent, and therefore represent highly useful targets. Many academic difficulties have their root in a lack of accuracy and automaticity in basic subskills that make higher-order proficiency possible, and these skills should be primary targets for assessment. Language plays critical roles in multiple ways; it forms a foundation for development, affects the student's interaction with instruction, and facilitates higher-order skills and proficiency. Therefore, the student's language development is considered a keystone in reading, mathematics, and writing.

2. The academic environment. A well-designed classroom environment can motivate and support learning, foster and reinforce important behaviors, and teach skills in the most effective ways. Regardless of the individual differences in academic skills or behavior that students bring to the situation, a productive and predictable academic environment can mitigate the effect of a student's existing difficulties and maximize their strengths. Conversely, a dysfunctional academic environment can be distracting, unpredictable, and ill-suited for learning, and can either create or exacerbate a student's learning difficulties. Academic skills problems are often due to a combination of factors, and the educational environment can play a significant role, thus making it a key target of assessment and intervention.

3. Skills and behaviors that facilitate and support learning. Learning-related skills such as attention to task, self-regulation, following directions, and task persistence are critical for academic achievement and are responsive to intervention. Sometimes, deficits in these areas are a cause of academic difficulties; other times, they might make existing skill weaknesses even worse. Improving learning-related skills can be an important part of an academic skills intervention, which makes identifying when a lack of learning-related skills may be causing or exacerbating academic difficulties a significant aspect of an academic evaluation.

Shapiro's model of assessment and intervention considers the interrelated influences of all three areas, and the process is described in detail across the subsequent chapters. First, an initial hypothesis regarding the nature and causes of the student's academic difficulties is formulated based on referral information, interviews with the student's teacher(s), review of previously collected assessments or progress monitoring data, permanent products of the student's completed work, and systematic direct observations of the student in classroom settings. Information obtained in this first step allows for refinement of the initial hypothesis and helps identify what skills will be important to assess with the student and what measures will be most relevant, using an understanding of academic skills development and the areas in which students with academic difficulty most commonly struggle. This assessment may include CBA measures, judicious use of standardized tests of relevant academic skills, and other assessments as needed. Data from the direct assessment are used to refine the testable hypothesis and directly inform the implementation of intervention strategies to address the problem. Intervention is then implemented, and progress is monitored to evaluate the student's response and, when needed, adjust the intervention. Further refinement of the hypothesis and intervention, and determination of the need for continued or intensified support, are based on the student's responsiveness to the intervention.
The model incorporates both idiographic and nomothetic perspectives; it considers the student in their immediate environment and their individual academic strengths and
weaknesses, as well as the student's functioning compared to their peers on keystone skills known to be highly important for academic proficiency. The model acknowledges that each student comes to school with profound differences in experiences with language, home environment, equity and marginalization, and access to care and support. It considers the student's language development and the academic skills relevant to any concerns they experience, and it aims to develop an intervention that capitalizes on the student's strengths while targeting areas in need of improvement.

The model aims to be pragmatic and practical. We acknowledge that there is a genetic contribution to some academic skills difficulties and that academic skills are the outcomes of neuropsychological processes. However, we are concerned primarily with the skills and variables that directly affect academic performance, that interact with the present environment, that are malleable through intervention, and in which improvement is associated with stronger academic functioning.

The model acknowledges that learning difficulties and learning disabilities are real. Not all academic difficulties involve disabilities, but neither do all academic difficulties result from inadequate instruction or low motivation. How academic difficulties are labeled, or what formal diagnoses are or are not identified, matters less than how to assess and intervene in a way that helps students.

The model acknowledges that most academic skills deficits are challenging to remediate. There are rarely any quick fixes and no magic interventions. Academic skills most certainly can be improved for any student at any age, but doing so takes careful and informed assessment, evidence-based interventions implemented with fidelity, data-driven decisions based on student progress, and a lot of hard work.
CHAPTER 3
Step 1: Assessing the Academic Environment
An assessment of academic skills incorporates several different methods, each designed to contribute to the overall understanding of the problem. It is important to remember that because reading, mathematics, and writing responses occur in an instructional environment, an effective evaluation of academic problems must contain more than just an assessment of academic skills. Academic skills problems are multifaceted and influenced by multiple factors. To truly understand a student's academic difficulties, the evaluator must spend time in the environment and contexts in which learning is expected. Therefore, our assessment model begins with an assessment of the academic environment.

A critical assumption of assessing the academic environment is that academic skills are a function of environmental and individual variables. Rather than focus solely on the child's academic skills, it is important to examine teaching procedures, contingencies for performance, classroom structure, and other such instructional events that may contribute significantly to the child's problems and inform remediation strategies. At the same time, this perspective also means that academic skills problems are not caused entirely by the immediate instructional environment. For example, reading difficulties may result from underdeveloped literacy skills or the cumulative impact of inadequate instruction and a lack of opportunities to read that spans several previous years, and difficulties in mathematics and writing can have similar longitudinal origins. The student's academic environment may support learning and mitigate the impact of a student's underlying difficulties, or the environment may lack such supports and exacerbate the effect of existing skill weaknesses. Consideration of the environment is critical for children exposed to multiple languages at home and school, or when the language spoken in instruction differs from the language or dialect with which the student is most familiar. A child's history of unequal access to knowledge-building experiences and opportunities to learn also interacts with the environment and has significant influence on academic performance and motivation. Understanding a student's academic environment, and the extent to which it helps alleviate or aggravate their academic difficulties, is a unique contribution of Shapiro's model and an important step in assessing academic skills.
Procedures employed in the assessment of the academic environment are similar to those used in a behavioral assessment for nonacademic problems. Specifically, interviews, direct observation, examination of permanent products, and completion of rating scales are used. Data from these sources are integrated with information collected in other aspects of the assessment. The goal of this stage of the assessment is to better understand the student's problem, ascertain the academic and behavioral expectations of the instructional setting, and examine the nature and quality of the instruction provided. Several objectives should be considered. First, it is important to determine the degree to which the academic ecology contributes to the observed academic problem (Lentz & Shapiro, 1986). Understanding how events such as instructional presentation, feedback, and class structure relate to academic responding may provide significant clues regarding potential factors that might be associated with current academic difficulties, and possible intervention strategies for remediating the academic problems.

Second, the assessment can identify situations in which a child's problem is a performance deficit (i.e., "won't do") versus a skill deficit (i.e., "can't do"; VanDerHeyden & Witt, 2008). A performance deficit implies a problem with behavior or motivation—the student knows what to do and how to do it but chooses not to. In contrast, a skill deficit indicates that the student wants to complete the assigned academic tasks but lacks the knowledge or skills to complete them accurately. Related to skill deficits are cases in which inadequate mastery of a skill results in slow and laborious task completion. In these cases, students have basic skills or understanding of the tasks and the desire to complete them but may have lacked sufficient opportunities to practice. Some students may demonstrate some combination of issues. The intervention recommendations would be different in each case (Daly et al., 1997; VanDerHeyden & Witt, 2008). Although sufficient data to determine whether the problem is a skill or performance deficit are usually not available until direct assessments are administered, the information collected in the evaluation of the environment is the first step.

A third objective of assessing the academic environment is to determine the degree of alignment (or mismatch) between the student's present skill levels and what is expected of them in the curriculum used for instruction. This may involve establishing where in the curriculum the student has achieved mastery, where they should be taught, and what content is too difficult to be useful (i.e., frustration level). It is surprising to find how often students are permitted to move on in curricular materials despite their failure to achieve mastery of prerequisite skills. Because skill proficiency often depends on mastery of foundational skills, failure to master certain skills may lead to difficulty in subsequent parts of the curriculum. More detailed and specific information on the student's academic skills is obtained in the next step, the direct assessment of academic skills (see Chapter 4). However, at the present stage, it is helpful to examine the curriculum of instruction related to the student's skill areas of concern. When assessing academic skills, the child's academic behavior and the academic skills directly underlying the outcomes of what is expected of students should be the targets for evaluation.
This requires that the assessment identify what the teacher expects of students in the classroom, in the relevant skill areas, when the assessment is conducted. This information helps the evaluator better understand why the student was referred for an assessment and identify the magnitude of the difference between the student’s current performance and what the teacher expects of students for that time of year. This also helps identify skill gaps that explain the student’s lack of success or progress. Identifying the tasks and assignments the student is expected to complete and evaluating their success in those situations are part of this process.
In Chapter 1, we discussed different types of academic assessment, including curriculum-based evaluation (Howell et al., 1993), curriculum-based assessment (Gickling & Havertape, 1981; Idol et al., 1996; Shapiro & Lentz, 1986; Tucker, 1985), and curriculum-based measurement (Deno et al., 1985). Although each of these emphasizes somewhat different aspects of the evaluation process, they all consider a student’s academic problems in the context of the curriculum in which the child is taught. One important difference, however, is that most models other than the one Shapiro and Lentz described (Shapiro, 1987a, 1989, 1990, 1996a, 2004; Shapiro & Lentz, 1985, 1986) do not incorporate significant efforts at evaluating the academic environment, along with the child’s skills. Efforts to add this component have been offered by Ysseldyke and Christenson (1987, 1993). The model described in this text is consistent with the assumptions and methods of behavioral assessment. As such, this model of CBA is conceptualized as the behavioral assessment of academic skills.
ASSESSING THE ACADEMIC ENVIRONMENT: OVERVIEW

Assessment of the academic environment involves examining the characteristics of the instruction provided, and how the classroom is structured and organized. As discussed in Chapter 2, several instructional and classroom structure variables have been found to affect academic performance and student functioning: academic engaged time (time-on-task, opportunities to respond) and the proportion of active versus passive engagement; instructional characteristics such as explicit instruction, feedback, and pace of instruction; classroom contingencies; classroom rules, expectations, and routines; and the presence of challenging or disruptive behaviors that interfere with students' attention to instruction. Although all of these variables can be directly assessed through observation, it would be impractical to try to observe each directly. Instead, the use of teacher interviews, combined with targeted direct observations in the classroom, teacher-completed rating scales, and the examination of the student's completed work, allows the examiner to narrow the field to the variables that appear to be the most relevant to the academic skills of concern.

Beyond these data, it is also helpful to determine the student's perspective on their academic functioning and instructional environment. For example, does the student know how to access help when they have trouble with an assignment? Does the student know what they are expected to do in a given lesson? Does the student understand the behavioral expectations of the classroom and the different instructional settings? What is the student's self-perception of their academic skills and ability to successfully meet the task's requirements? Answers to these questions may add to the analysis of the instructional ecology and can be determined through a student interview.

Table 3.1 provides a list of the relevant variables for assessing the academic environment and which method (teacher interview, student interview, observation, or permanent products) offers data on that variable. Figure 3.1 includes a flowchart of a suggested sequence of methods. The process begins with the teacher interview.
TEACHER INTERVIEWS

The teacher interview starts the assessment process and helps the evaluator form an initial hypothesis regarding the causes and maintaining factors of the academic problem. Although there may be existing referral information, the interview verifies the referral concern and
TABLE 3.1. Assessment Procedures for Achievement-Related Variables
Skill area(s) of concern and teacher perception of skills: Teacher interview, rating scale
Student's placement in curriculum and achievement relative to expectations: Teacher interview
Allotted time for instruction: Teacher interview, direct observation
Opportunities to respond: Direct observation, permanent products
Active and passive engaged time: Direct observation
Explicit instruction: Direct observation
Teacher feedback: Direct observation, teacher interview, student interview
Pace of instruction: Direct observation
Classroom contingencies: Direct observation
Classroom rules and routines: Direct observation, teacher interview, student interview
Interfering behaviors: Direct observation
Student's academic skills: Direct assessment using measures of relevant academic skills
Student perceptions of teacher expectations: Student interview, permanent products
Student motivation: Teacher interview, student interview, rating scale
Student academic self-efficacy: Student interview
FIGURE 3.1. Flowchart of methods and procedures for assessing academic skills: Structured Teacher Interview and Rating Scales (problem, settings, procedures, environmental variables) → Direct Classroom Observation → Student Interview → Permanent Product Review → Direct Assessment of Academic Skills.
helps make the following assessment activities more efficient. Rather than assessing a broad range of skills and behaviors, which wastes time and resources, the teacher interview helps focus the rest of the assessment by indicating what to look for and where to look. During the teacher interview, information about the student's relative strengths and difficulties in reading, mathematics, and writing is obtained. More attention is naturally paid to the referral area of concern. However, because academic difficulties tend to occur together (e.g., Willcutt et al., 2019), information on the student's performance in other skill areas is also gathered in the interview. If there are multiple areas of academic concern, it is best to focus on the primary area of concern first.

In addition to questions about the academic skill areas, questions are asked about the curriculum and instructional procedures. Data are sought regarding the child's current skills and instructional level in the curriculum, the specific materials used for instruction, the current skills and level of proficiency of "typical" students in the classroom relative to the skill level of the target student, specific behaviors the student demonstrates that support or interfere with learning, the types of instructional groups used by the teacher (large group, small group, learning centers, cooperative learning), monitoring procedures employed to assess student progress, specific contingencies for performance, details of specific interventions that have already been tried, and global indications of the child's behavior during the instructional process.

The format for our interview is based on the behavioral consultation process described by Bergan (1977) and Bergan and Kratochwill (1990). Their model incorporates a series of interviews designed to identify the problem, analyze the critical variables contributing to the problem, design and implement intervention strategies, and evaluate the effectiveness of the intervention. Their interview process offers a comprehensive analysis of verbal behavior and a methodology for training consultants to conduct these interviews. Bergan's interview procedures can be used reliably (Bergan, 1977; Erchul et al., 1995; Gresham, 1984; Kratochwill et al., 1995), and the types of consultant verbalizations needed for a problem analysis interview have been validated (Gresham, 1984; Witt, 1990; Witt et al., 1991).

Although there have been other behavioral interviewing formats for school consultation (e.g., Alessi & Kaye, 1983; Lentz & Wehmann, 1995; Witt & Elliott, 1983), the format described by Bergan appears to be the most often cited in the behavioral consultation literature. For example, Graden, Casey, and Bonstrom (1985) and Graden, Casey, and Christenson (1985) used Bergan's behavioral consultation model to investigate a prereferral intervention program across six schools. Noll et al. (1993) reported that a prereferral intervention program using a behavioral consultation model over a 3-year period enabled greater percentages of students with behavior disorders to be successful in less restrictive settings.
The Teacher Interview Form

Appendix 3A, at the end of this chapter, provides a form to facilitate the teacher interview process. Shapiro developed the original form through years of working directly with students with academic difficulties, and through having his graduate students in school psychology use it in assessments conducted as a part of his class in academic assessment. It has been refined for this edition of the text, with an eye toward greater efficiency and emphasis on keystone skills. This form is designed to be conducted as an interview with the teacher, in person. Telephone or videoconference interviews are acceptable as well. The form should never be given to the teacher to fill out independently as a questionnaire.
Section 1

The first section of the interview form is a general section to identify or verify the reason(s) for the referral. Here, the evaluator asks the teacher to indicate the primary area of academic concern (i.e., reading, mathematics, or writing). This should be the skill area the teacher believes is the most problematic. If the reason for referral is already known, the evaluator can use this space to verify the concern.

The second question in this section asks the teacher to identify any other academic skills that are also areas of difficulty for the student. Academic difficulties tend to co-occur, and understanding the scope and extent of the student's academic difficulties is important for guiding the following assessment activities. In some cases, however, academic skill difficulties may be isolated to a specific skill area.

The third item in Section 1 provides a place to identify the areas of relative strength for the student. This question should be posed so that the teacher understands you are asking about skill areas that are strong compared to other skills for this student, not other students. Very often, students with academic difficulties will be performing below the level of their peers in many (if not all) areas. However, all students will have areas of relative strength—skills that are stronger for the student than in other areas. This information is useful for understanding the student's overall functioning, understanding the direct assessment results, and later building an intervention plan that capitalizes on the student's relative strengths.
Section 2

Section 2 focuses specifically on the student's primary area of concern. Question 2A is meant to gather or confirm details on the specific aspects of the student's difficulty in the primary area of concern, if the teacher has not already offered those specifics. For example, if the primary area of concern is mathematics, teachers can provide information on the specific types of math skills or problem types that are problematic for the student. As with all questions, we encourage users to be judicious. If teachers have already provided this information earlier, this question could be skipped or perhaps used to confirm or expand on information provided earlier.

Questions under 2B are used to gather information on what strategies (if any) have been implemented to address the student's primary area of difficulty and the extent to which they were successful. These interventions could range from simple strategies (e.g., adding practice drills, improving motivation, or changing the way feedback is given) to more complex interventions, such as altering the curriculum materials or the modalities of instruction, or adding supplementary intervention supports or tutoring. Teachers may respond to this question with the strategies they have tried in the classroom, which is relevant, but be sure to identify whether other supplemental interventions have been implemented, such as pull-out intervention groups or tutoring. Teachers often mention computer-based instruction or practice programs, in which case the program or application should be noted.

Questions under 2C are used to gather information on the curriculum and instructional materials used in the primary area of difficulty (i.e., reading, mathematics, or written expression). Note that this question does not ask about curricula used in all academic areas—only the curriculum used in the primary skill area of concern. Often, a primary curriculum is referred to as a core instruction program. For example, this may be a specific published curriculum, such as Everyday Math or Open Court Reading. Some schools may not use a published core program and instead may provide teachers
with locally developed resources or expect teachers to gather the materials themselves. In other situations, a core program may be used and supplemented with other materials. In any case, it is important to gather information on what materials are used for instruction, because programs vary in quality and the extent to which they reflect evidence-based practices. Section 2 concludes by asking the teacher to indicate the time the subject is taught so that direct observations can be arranged.
Section 3

The third section is included to gather information on a secondary area of academic difficulty if one is identified. This information will be helpful in determining additional skills to evaluate in the assessment and in determining whether difficulties in this area are interrelated with difficulties in the primary area. This section can be skipped if there are no secondary areas of concern.
Section 4

Section 4 is designed to gather information on the student's behaviors that may either facilitate or interfere with learning and the classroom environment in general. Rather than asking about behaviors broadly, which may elicit long responses that are not specifically focused on behaviors of interest, the section begins with a rating scale that is focused on behaviors particularly relevant to academic instruction and student learning. Readers will note that the behaviors in the scale align with key learning-related behaviors discussed in Chapter 2. The rating scale format is designed to gather information quickly and identify specific behaviors that the interviewer may want to gather more detailed information about in follow-up questions.

As with the rest of the interview, the scale is designed to be conducted as a dialogue rather than by handing the teacher the scale to fill out. In conversation about their ratings, teachers typically add useful information that they might not provide if simply filling out the form. However, it can sometimes be useful for teachers to see the scale you are completing as you ask each question. A script is provided at the start of the section to explain the rating scale values. During the interview, users should remind teachers that 0 represents "never" and 5 represents "always," as needed.

The area below the rating scale is provided for the interviewer to gather more information on specific items or behaviors the teacher noted. These may be behaviors that are particularly helpful or facilitative of the student's achievement (e.g., highly engaged, consistently shows effort), or behaviors that are especially problematic and interfere with learning or the classroom environment. This information will be helpful in guiding the subsequent direct observations and assessment activities.
Section 5

Section 5 is focused on obtaining additional information about the student's reading difficulties. If the teacher verifies that the student has no difficulties in reading, it would be best to skip this section. However, if reading skills are a primary or secondary concern for the student, this section should be included in the interview. To maintain continuity, the interviewer may wish to complete this section immediately after discussing reading as the primary or secondary area of concern. Knowing when to skip ahead and cycle back within the form comes with practice.

Questions in Section 5 focus on critical literacy and language skills that have causal connections to reading development and, when deficient, can explain a student's reading
difficulties. These areas reflect the keystone skills of reading development described in detail in Chapter 2. In each skill area, the teacher is asked to indicate whether the student's level of proficiency with the skill is below expectations, meets expectations, or exceeds expectations for students in that grade level for the time of year. Some skills, such as reading comprehension, should not reasonably be expected until students have developed adequate skills in reading words with accuracy and efficiency (i.e., reading comprehension cannot exist if the student can't read the words on the page with relative ease). Therefore, "low" reading comprehension skills for a kindergartener or first grader should usually not be considered a "problem" because most students in those grades are still developing the skills in word and text reading that allow comprehension to take place. The importance of each of these skill areas and the interpretation of the student's areas of relative strength and weakness are discussed in detail in Chapter 2.

It is also important to note that interview questions for the teacher regarding the student's reading skills (and any of the academic areas) are intended to provide preliminary information about the student's difficulties. The information that teachers provide is useful for formulating initial hypotheses regarding the reasons for the student's reading difficulties but is not intended to be exhaustive in formally evaluating the student's reading skills. Research on teachers' judgments of their students' academic skills indicates that teachers are generally accurate in identifying students with reading difficulties (Snowling, 2013; Speece et al., 2010). However, more specific data on the student's reading skills are collected through direct assessments at a later step. Nevertheless, the information obtained from the teacher at this stage assists in formulating hypotheses about the student's reading difficulties and guides subsequent decisions regarding what skills to assess and what measures will be best suited for that purpose.

The next question asks about the student's skills in oral comprehension—the extent to which they understand spoken language, which includes understanding language they hear, comprehending stories read to them, understanding instruction and verbal directions, and so on. Vocabulary knowledge is one of the most important aspects of linguistic comprehension. Ascertaining skills in this area helps provide a better picture of the student's academic problems, particularly when difficulties with reading comprehension are indicated. Reading comprehension problems are often due to inadequate skills in reading words accurately or reading efficiently. This profile is very common in early grades but has been shown to persist through adolescence (Brasseur-Hock et al., 2011). In many cases, struggling readers demonstrate little to no difficulty processing oral language. However, reported difficulties in this area can signal underdeveloped vocabulary knowledge or perhaps an underlying disability in language. This would indicate the need to evaluate language skills more specifically and may even involve a referral for an evaluation by a speech and language pathologist.
It is also important to distinguish situations in which low oral comprehension may be related to a student learning English—in these cases, low oral comprehension may be due entirely to their relative lack of exposure to English vocabulary, grammar, syntax, inflection, and the other aspects involved in comprehending another language, not an underlying language disability. These factors underscore the importance of asking teachers to describe the student's oral language comprehension.

The interviewer also asks how instructional time is allotted and divided. It is helpful to know the extent to which reading is taught in large or small groups, the size of the target student's group, the expectations for students when they are not engaged in direct teacher instruction, and other structural aspects of the teaching process.

The form includes a question about the presence of difficulties in spelling. Although spelling is more difficult than reading even for proficient readers, reading difficulties
(particularly in reading words) will often be accompanied by problems in spelling given that both rely on a common foundation of skills in phonemic awareness, alphabetic knowledge (especially letter–sound correspondence), and orthographic knowledge (i.e., knowledge of spellings of letter combinations and whole-word spellings). Spelling difficulties, in turn, will affect written expression. This information can help provide a clearer picture of the scope of the student’s academic difficulties and afford a better idea of what should be assessed directly with the student at the next step in the process.
Section 6

Section 6 of the interview form includes areas for collecting more specific information on the student's mathematics skills. As with the reading section, this section will be useful if mathematics is identified as a primary or secondary concern. It can be administered immediately after that section for continuity. This section lists several key mathematics skill areas and asks the teacher to evaluate whether the student is below, meeting, or exceeding expectations in a skill for that grade and time of year. The skill areas align with the keystone skills of mathematics described in detail in Chapter 2.

In most contemporary mathematics curricula, targeted skills generally progress hierarchically. They are taught in such a way that acquiring one skill provides a basis for learning later skills. For example, students are usually taught single-digit addition number combinations (i.e., math facts) with sums less than 10 before sums greater than 10 are taught. However, there is also a good deal of variability across curricula with regard to the specific skills taught and the strategies and methods students learn to solve problems. Some curricula may emphasize conceptual understanding and minimize procedural knowledge, and others may take the opposite approach. What is important to obtain in the interview is the teacher's perspective on the student's areas of strength and weakness in the key domains of mathematics, including both relative strengths and weaknesses for the student and performance compared to expectations for the grade and time of year. The responses to these interview questions play an important role in the subsequent direct assessment of mathematics skills.

The skills in the table in Section 6 are generally arranged in a hierarchical format going from the most basic to the most sophisticated. Obviously, some of the skills listed will be well beyond expectations for a given grade level; for instance, it is not appropriate to ask a teacher about a first grader's skills with rational number operations. Therefore, the interviewer should be thoughtful in avoiding skill areas that are not applicable for a given age or grade. This is another aspect that develops as the evaluator gains experience and learns more about academic skills development (see Chapter 2).

Given the variability in contemporary mathematics curricula, it is helpful to obtain a curriculum scope and sequence for the grade level (i.e., a "map" of the unit and lesson objectives for the year), which is typically available in the teacher's curriculum materials or from a district curriculum coordinator. Reviewing the scope and sequence chart before the interview promotes more specific questions regarding mathematics skills expected of students at a given time and the extent to which the student has demonstrated success and difficulties in a previous section of the curriculum. The evaluator can refer to the scope and sequence materials when interpreting the overall results of the assessment.

Teachers may sometimes not have clear knowledge of a student's performance on specific instructional objectives within the curriculum. Because mathematics objectives can be divided into broad categories, interviewers can use general questions in each of these areas when teachers cannot identify a student's skills from a list of computational objectives. Figure 3.2 provides a list of these broad categories.
Addition
• Number combinations (i.e., "facts") within 10
• Number combinations within 20
• Multidigit computation, no regrouping
• Multidigit computation with regrouping
Subtraction
• Number combinations (i.e., "facts") within 10
• Number combinations within 20
• Multidigit computation, no regrouping
• Multidigit computation with regrouping
Multiplication
• Number combinations (i.e., "facts"): 1, 2, 5
• Other facts
• Single digit × 2 digits
• Multiple digits
Division
• Number combinations (i.e., "facts"): 1, 2, 5
• Other facts
• 2 digits ÷ single digit
• Multiple digits, no remainders
• Multiple digits, with remainders
FIGURE 3.2. Order of assessing computation skills across operations.
Section 7

The next section of the form pertains to student difficulties in writing. Writing difficulties can span a broad range, from problems in handwriting and spelling that impair writing production (i.e., transcription) to higher-order aspects and strategies such as planning, coherence, organization, creativity, and revising. Therefore, a series of categories is provided in Section 7 to guide information gathering; these are consistent with the keystone skills in writing discussed in Chapter 2. The areas are arranged in a general hierarchy from lower- to higher-order writing skills, thus allowing the interviewer to determine whether the student's overall writing difficulties are due more to problems with basic transcription or whether basic skills are adequate and problems involve higher-order strategies and skills. Finally, the teacher is asked to describe the types of writing assignments used in the classroom and the skills students are expected to demonstrate, which can help the interviewer gauge the nature and extent of the student's writing difficulties relative to current expectations.
Section 8

Section 8 includes questions about the school's multi-tiered system of support/response to intervention (MTSS/RTI) model, if one is in place. For efficiency, it may be best to obtain this information outside of the teacher interview. There are some situations in which teachers may not be fully aware of the details of the model, and a school- or district-level MTSS coordinator or coach may better answer those questions. Because MTSS can also be implemented as a schoolwide model for managing behavior, the questions include information about MTSS as implemented for academics, behavior, or both. Of course, many users of the form may be employees of the school or district where the assessment is being conducted, and therefore an MTSS model or other systems of support will be well known to the interviewer. In these situations, this section may be omitted from the interview.
Section 9

The last section of the form provides a guide for formulating an initial hypothesis statement about the cause(s) and maintaining factors of the student's academic difficulties. The evaluator completes this section after the interview. This hypothesis helps to guide the subsequent assessment and intervention activities, and will be refined and revised as new data are collected. Users may wish to wait to write the initial hypothesis until after they have completed direct observations.
Sample Completed Interview Form: Mario

Figure 3.3 illustrates a completed teacher interview form for a third grader named Mario. Although he was reported to have academic difficulties in all three areas, his teacher, Ms. T, said that he experienced the most significant difficulties in reading and writing. She also reported that Mario experienced difficulties staying on-task during academic activities, especially reading. The interview conducted with Ms. T included all areas of academic skills development.

Approximately 90 minutes each day are allotted for reading; this time is divided into teacher-led large-group instruction, followed by small-group instruction mixed with individual seatwork assignments in which the teacher works with specific groups. Mario is currently in Ms. T's lowest reading group, and she describes his reading skills as among the lowest in her class. During large-group instruction, the teacher uses modeling, direct instruction, and teaching strategies for comprehension. Students are expected to practice these skills in small groups and during independent work assignments.

With regard to Mario's reading skills, his teacher indicated that she believed his phonological awareness, alphabetic knowledge, and vocabulary knowledge were consistent with grade-level expectations, but that he was below expectations in word reading, reading fluency, and comprehension. Ms. T reported that teachers in previous grades had noted that Mario's reading skills tended to be lower compared to other students in the class; however, in their view, his difficulties were never significant enough to warrant referrals. It seems that in third grade, as word and text difficulty are increasing more rapidly than in previous grades, Mario is having difficulty keeping up. Mario has received supplemental Tier 2 intervention supports through the school's MTSS model for three months in third grade, and prior to that in second grade, which Ms. T described as additional guided reading groups.

In mathematics, Mario was reported to have difficulties with basic computation skills, in that he was not fluent with addition, subtraction, and multiplication facts. However, Ms. T noted that math facts are not typically a focus of instruction in the district mathematics curriculum (which emphasizes conceptual understanding of computation problems). Hence, Mario makes frequent errors in math computation, and completing problems is laborious for him. He has had trouble learning alternative strategies for solving computation problems, such as decomposing numbers. Mario is struggling with areas that serve as early key indicators for the development of algebra proficiency, and unless these are specifically targeted for remediation, he will likely struggle in achieving competency in mathematics across subsequent years. Interventions that Ms. T has tried include peer tutoring, reminder cards, and requiring completion of homework during free time if it was not done at home.

It was also noted that Mario was struggling in aspects of writing; however, his difficulties were reported to primarily consist of problems with low production. He tended
Teacher Interview Form for Identification of Academic Difficulties

Student: Mario   Teacher: Ms. T   School: Swiftwater Elementary   Grade: 3
Interviewer:    Date of interview: 4/7/21

Suggested introductory language: The purpose of this interview is to gather some basic information on [student's] areas of academic difficulty and functioning in the classroom. This information will be highly useful in guiding my assessment. Some of the questions will involve the academic curricula, materials, and instructional groupings or routines that you use. These questions are meant to better understand the student's difficulties in the current context, and are not meant to evaluate any aspects of your teaching.

1. General Information and Referral Concerns
What is the primary academic area of concern (reading, math, or written expression)? reading
Are there any additional areas of difficulty (reading, math, or written expression)? writing, math
Areas of relative strength for the student (i.e., skill areas that are strongest for this student specifically): Oral language (good vocabulary), engagement, and motivation

2. Primary Area of Concern
2A. What specific aspects of [primary area of difficulty—reading, math, or written expression] are problematic? Word reading, reading fluency, reading comprehension; experiencing difficulty as texts are getting longer and more complex
2B. Intervention and Support Strategies
Have they received any supplementary support or intervention in this area? This year, Tier 2 intervention (since January); last year (2nd grade) was in guided reading groups
What kind of strategies have been tried, and to what extent were they successful? In core: text previewing, picture walks, preteach vocabulary. Not sure what happens in Tier 2 intervention; seems to be phonics-based. Approaches do not seem to have helped.
2C. Curriculum and Instruction in Area of Primary Concern
Title of curriculum or series used in this area: Fountas & Pinnell, teacher-developed materials
Are there other instructional materials used in addition or in place of the curriculum?
At this point in the school year, what types of skills are students expected to demonstrate in this area? Comprehension; should be able to read third-grade text and understand, retell, identify main idea
What time do you typically teach this subject? 8:15–9:30
FIGURE 3.3. Completed Teacher Interview Form for Mario.
3. Secondary Area of Concern (if applicable)
3A. What specific aspects of [secondary area of difficulty—reading, math, or writing] are problematic? Math: calculation, frequent errors in addition, subtraction, multiplication. Has difficulty learning strategies for multidigit computation. In writing: writes as little as possible, spelling errors are common
3B. Intervention and Support Strategies
Have they received any supplementary support or intervention in this area? No
What kind of strategies have been tried, and to what extent were they successful? Math: partner work, reminder cards, completing work or homework during free time. A little success with partner work (and he likes it). Writing: no additional strategies yet
3C. Curriculum and Instruction in Secondary Area of Concern
Title of curriculum or series: Everyday Math; Writer's Workshop
Are there other instructional materials used in addition or in place of the curriculum?
At this point in the school year, what skills are students expected to demonstrate in this area? Math: add and subtract multidigit with regrouping, single-digit multiplication, compare fractions, some word problems. Writing: write narrative story with characters, setting, conclusion. Improving grammar. Write simple essay with supporting details
What time do you typically teach this subject? Math: 12:30–1:30; Writing: usually with reading

4. Behavior
Next I'd like to ask about [student's] behavior and learning-related skills during academic instruction and activities. On a scale of 0 to 5, with 0 being "never" and 5 being "always," please indicate how often the student demonstrates the behavior during academic instruction and activities.
a. Stays engaged (on-task) during teacher-led large group instruction
1 2 3 4 5
b. Stays engaged (on-task) during teacher-led small group instruction
1 2 3 4 5
c. Stays engaged (on-task) during partner work or independent work
1 2 3 4 5
d. Follows directions
1 2 3 4 5
e. Shows effort and persistence, even when work is difficult
1 2 3 4 5
f. Asks for help when needed
1 2 3 4 5
g. Completes tests or classwork in allotted time
1 2 3 4 5
h. Completes homework on time
1 2 3 4 5
i. Engages in behaviors that disrupt instruction or peers’ learning
1 2 3 4 5
Is [student’s] behavior especially problematic in some academic subjects or activities than others?
Much less motivated to write (does not like it), gives up easily in writing (continued)
Additional information on the student's behavior or social skills that either facilitate or interfere with their learning or the classroom environment (follow up on items rated as problematic above): Gets along well with other kids, good sense of humor

5. Reading
This space is available to note if the student demonstrates difficulties in reading. If there are no indicated problems with reading, this section should be skipped.
Next I'd like to ask about [student's] skills in some specific areas related to reading, and whether they are below expectations, meeting expectations, or above expectations in each area at this time of the year.
Phonological and Phonemic Awareness: Able to identify sounds in words, rhyme, blend, segment, etc.
Seems to meet expectations (not sure)
Alphabet Knowledge and Letter Recognition: Able to identify printed letters, able to correctly associate printed letters (and letter combinations) with sounds
Meets expectations
Word Reading/Decoding: Reads words accurately; able to decode (i.e., sound out) unfamiliar words; reads grade-appropriate words with ease and automaticity
Below expectations
Reading Fluency: Able to read text smoothly, accurately, with expression
Below expectations
Reading Comprehension: Understands what is read; able to answer both literal and inferential questions from a passage; comprehends both narrative and expository texts
Below expectations
Vocabulary: Has age-appropriate knowledge of word meanings and definitions
Meets expectations
If Word Reading/Decoding skills are a concern, what types of words does the student find challenging? Multisyllable words, words with vowel pairs, affixes
What types of words is the student more successful at reading? "Sight words," simple decodable words
How would you describe this student's listening (oral) comprehension skills—can they understand your directions, and understand stories or answer questions correctly after listening? Strong, he has no trouble with oral comprehension
How is instructional time in reading typically divided between large group instruction, small group instruction, and partner or independent work? Usually, teacher works with small groups while other students work independently. Sometimes students work in pairs
Does this student also have difficulty with spelling? Yes, frequent spelling errors in writing
6. Mathematics
This space is available to note if the student demonstrates difficulties with specific mathematics skill areas. If there are no previously indicated problems with mathematics, this section should be skipped.
Next I'd like to ask about [student's] skills in some specific areas related to math, and whether they are below expectations, meeting expectations, or above expectations in each area at this time of the year.
Early Numerical Competencies (Number Sense, Early Numeracy): Age/grade-appropriate skills and understanding in counting, number recognition, quantity discrimination, cardinality
Appears to meet expectations
Addition and Subtraction Math Facts: Grade-appropriate accuracy and fluency with addition and subtraction math facts within 20
Below expectations
Multidigit Addition and Subtraction Operations: Grade-appropriate skills in applying procedures/algorithms for accurately solving addition and subtraction problems
Below expectations
Multiplication and Division Math Facts: Grade-appropriate accuracy and fluency with multiplication and division math facts within 100
Below expectations
Multidigit Multiplication and Division Operations: Grade-appropriate skills in applying procedures/algorithms for accurately solving multiplication and division problems
N/A
Fractions, Decimals, Percent: Grade-appropriate understanding and skills in rational numbers including comparing magnitude, accurately completing operations, converting, etc.
Below expectations
Word Problem Solving Skills: Able to solve grade-appropriate word problems
Below expectations
Geometry and Measurement: Conceptual knowledge and ability to solve grade-appropriate geometry and measurement problems
Meets expectations
Pre-Algebra and Algebra: Conceptual knowledge and ability to solve grade-appropriate pre-algebra and algebra operations
(not sure)
How is instructional time in math typically divided between large group instruction, small group instruction, and partner or independent work? Large-group instruction, followed by partner work and/or independent work

7. Writing
This space is available to note if the student demonstrates difficulties with specific writing skill areas (Note: if a student has reading difficulties, it is very possible they have difficulties in writing as well).
Next I’d like to ask about [student’s] skills in some specific areas related to writing, and whether they are above expectations, below expectations, or meeting expectations in this area at this time of the year. Handwriting
Meets expectations
Typing/Keyboarding (if applicable)
Meets expectations
Spelling
Below expectations
Capitalization and/or punctuation
Below expectations
Grammar and syntax
Below expectations
Planning and Formulating Ideas Before Writing
Below expectations
Organization and Coherence
Below expectations
Story/passage length
Below expectations
Editing and Revising
Below expectations
What types of writing assignments are given at this time of year, and what types of skills are students expected to demonstrate? (see 3C)
Does the student have difficulty with low motivation to write, and/or self-regulation skills that affect their writing output and quality? Yes, tries to finish as quickly as possible; doesn't like to write

8. School MTSS/RTI Model
Information on the schoolwide multi-tiered system of support (MTSS) or response to intervention (RTI) model in academics and/or behavior, if one exists, can be obtained below. For efficiency, this information might be better obtained outside of the teacher interview.
What does the model look like: Grade levels covered, skill areas targeted, etc.? K–4, reading only
What Tier 2 interventions are available? A phonics program (not sure of the name)
Is there a Tier 3, and what does that entail? Does not think there is a Tier 3 program
How are students identified for Tier 2 or Tier 3 interventions (e.g., universal screening)? DIBELS, Running Records
How often is progress monitored for students receiving Tier 2 or Tier 3 interventions, and what measure(s) are used? Running records, 1X month
What and who determines when students move between tiers or interventions are adjusted? RTI team (teachers, principal)

9. Preliminary Hypothesis Formation (to be completed after the interview)
Primary area of difficulty: Reading
Suspected skill deficits that are the reason for the difficulty: Word reading difficulties, possibly skill gaps in foundational skills
Difficulties with behaviors or learning-related skills that may be contributing to the problem: No
Possible environmental and instructional factors contributing to the problem: Possibly a lack of explicit instruction, insufficient practice reading with teacher support
Relative strengths (academic or social/behavioral) that may mitigate the problem: Strong oral language (vocabulary) skills, motivated, good social skills

Preliminary Hypothesis Statement Framework. This is meant as a guide to assist hypothesis writing. It will be refined and revised across the subsequent assessment. Separate hypotheses can be written for secondary areas of difficulty.

Mario's difficulties in [reading/mathematics/writing] are due to inadequate or underdeveloped skills in word reading, especially complex words. These difficulties appear [or do not appear] to be related to the student's behaviors or learning-related skills, which may include __________. The student's difficulties appear [or do not appear] to be related to instructional or classroom environment factors, which may include lack of explicit instruction and OTR. Compared to their area(s) of difficulty, the student demonstrates relative strengths in oral language, particularly vocabulary. Phonemic awareness may be adequate (need to check).

Mario's difficulties in [reading/mathematics/writing] are due to inadequate or underdeveloped skills in basic calculations (number combinations). These difficulties appear [or do not appear] to be related to the student's behaviors or learning-related skills, which may include __________. The student's difficulties appear [or do not appear] to be related to instructional or classroom environment factors, which may include lack of explicit instruction and OTR. Compared to their area(s) of difficulty, the student demonstrates relative strengths in oral language, particularly vocabulary, foundational math skills?

Mario's difficulties in [reading/mathematics/writing] are due to inadequate or underdeveloped skills in word spelling production. These difficulties appear [or do not appear] to be related to the student's behaviors or learning-related skills, which may include low motivation and persistence in writing. The student's difficulties appear [or do not appear] to be related to instructional or classroom environment factors, which may include lack of explicit instruction and OTR. Compared to their area(s) of difficulty, the student demonstrates relative strengths in oral language, particularly vocabulary. Handwriting appears to be OK?
to write very little on any writing assignments, and his writing included minimal content. Related to his reading difficulties, his written expression was marked by frequent spelling errors, which further reduced the quality of his writing.

In the area of behavior and learning-related skills, Ms. T reported that Mario demonstrated a fair degree of engagement during class activities. He is generally interested in participating and rarely exhibits any behaviors that disrupt class activities. However, she noted that over the past 2 months Mario has become increasingly discouraged when he encounters difficulties, especially in reading. He has expressed statements such as "I'm terrible at reading," and in these situations, he tends to disengage and stop trying. Ms. T reported that during a recent reading group, Mario became discouraged after making several reading errors, put his head down on the desk, and refused to participate for the remainder of the lesson.
Considerations for the Teacher Interview: Emphasize Efficiency

The entire interview should take no more than 15–20 minutes once an interviewer is skilled in its administration. It is important to become familiar with the questions, since teachers will often provide responses applicable to several categories when asked a single question. Evaluators with experience interviewing in other situations should find this easy to pick up. With practice, the interview can become more like a conversation.

Although the process of interviewing teachers is common in assessment, the types of questions being asked here are not typical of most interview practices. Asking teachers to describe their instruction, how they monitor student progress, the nature of previous interventions that have been tried, and how they make instructional decisions about moving students through the curriculum can sometimes result in unexpected defensiveness on their part. Although asking teachers questions about their instruction and decision making may seem to pose risks to establishing a collaborative rapport, it has been found, over hundreds of cases, that the common response to being asked these questions is very positive. Often, teachers remark that they had not asked themselves these types of questions, which they recognize are important for understanding their students' problems. Nevertheless, it is helpful to explain the purpose of this type of interview to teachers before starting. A suggested statement is provided at the start of the form to help new interviewers become comfortable with this aspect.

Throughout the interview, it is important to keep some things in mind:

1. Keep it brief. Remember that teachers' time is short, and they may schedule the interview during their planning period, their lunch break, or a time they otherwise use to catch up on student work or arrange the next activity. Never keep a teacher beyond the agreed-upon time. Ask questions clearly. Rather than asking teachers to explain something further that you did not quite follow, ask them to provide an example. Most of all, skip irrelevant questions or topics, and questions on which the teacher has already provided information. For example, suppose a teacher reports that a student experiences significant difficulty reading most simple words with accuracy and has significant gaps in their early literacy skills. In that case, questions regarding the student's reading comprehension are irrelevant. Interviewers are also encouraged to move along if time is short and gather information only on the most critical areas.

2. The interview is the first step and is not meant to tell you everything. Keep in mind that this is the first activity in the assessment process—it is meant to set the stage
for the assessment, to develop initial hypotheses, and to indicate the skills and behaviors to assess directly. Your direct assessments will obtain the most precise information on students’ skill difficulties. The information obtained in the teacher interview does not need to be exhaustive. Filling in gaps with a brief follow-up conversation or email is also appropriate.
Teacher-Completed Rating Scales

As an adjunct to the teacher interview, users may consider having teachers complete a rating scale on the student's learning-related skills and behaviors. Rating scales can be useful for gathering additional information on students' behaviors that facilitate or support learning and achievement, such as effort, task persistence, following directions, and impulse control. A rating scale might be considered to supplement data provided by the teacher interview form and can be particularly helpful if time does not permit a full interview.

DuPaul et al. (1991) developed the Academic Performance Rating Scale (APRS), designed to provide teacher-based ratings of student academic performance in classroom settings in grades 1–6. A total score captures students' overall academic performance (including behaviors supportive of learning, such as attention and task completion), and a factor analysis of the scale supported the measurement of three subscales: Academic Success, Impulse Control, and Academic Productivity (as judged by the teacher). Additional analyses by DuPaul et al. (1991) observed high reliability of the scale. Considering the APRS in an academic assessment context, the Academic Success subscale reflects the student's work quality, work accuracy, and ability to learn and recall new information. The Impulse Control subscale involves the teacher's perception of how carefully the student begins and completes their work. Finally, the Academic Productivity subscale measures the amount of work the student completes, their accuracy in following instructions, and their ability to work independently. Overall, the APRS offers data in several areas that may be useful to an academic assessment.

Another rating scale measure that may be useful for understanding a student's learning-related skills and behaviors is the Academic Competence Evaluation Scales (ACES; DiPerna & Elliott, 2000). The ACES asks specific questions about a student's motivation, engagement, study skills, and interpersonal behavior across academic areas. Specific questions are also asked about reading, mathematics, and critical thinking skills. Versions of the ACES for teachers, parents, and students have been developed. In addition, the measure covers the entire K–12 grade period, along with versions that have been developed for preschool- and college-age students.

Another measure that can provide data on students' behaviors that impact academic achievement is the Social Skills Improvement System Rating Scales (SSIS; Gresham & Elliott, 2008). There is a good deal of overlap between a student's ability to successfully navigate a social setting (like a classroom) and skills related to learning. Forms are available for teachers, parents, and students to complete in English or Spanish. The SSIS provides standard scores and percentile ranks (based on a nationally representative sample) on the following domains: communication, cooperation, assertion, responsibility, empathy, engagement, and self-control. Additionally, the SSIS assesses students' competence in reading, mathematics, and motivation to learn, as well as problem behaviors that can compete with academic engagement such as hyperactivity, inattention, externalizing behaviors, internalizing behaviors, bullying, and behaviors characteristic of autism spectrum disorder.

Questions sometimes arise about whether information obtained from interviews and rating scales is redundant. Research has supported the strong relationship between the
use of these two methods (e.g., McConaughy & Achenbach, 1989, 1996), and others have noted the complementary rather than redundant nature of interviews and rating scales (DuPaul et al., 1991; Elliott et al., 1993). Some rating scales, such as the ACES or SSIS, provide more specific data on sets of learning-related skills and behaviors that may be overlooked in a typical interview. Thus, interviews and rating scales can complement each other and may provide a fuller picture of a student’s academic functioning. At the same time, interviewing may offer the assessor an opportunity to delve into details in areas not explicitly covered by a rating scale. Data from the interview and rating scales are an important first step in the assessment process. However, they represent indirect data sources because they are based on an individual’s perception of the student’s skills and behaviors (Shapiro & Kratochwill, 2000). Indirect reports are subject to inference and potential bias. Academic assessment requires the confirmation of key variables through direct observation and direct assessment.
DIRECT OBSERVATION

Direct observation data provide quantitative measurements of the student's behavior in instructional situations. One objective of collecting data through direct observation is to verify the information obtained through the teacher report. For example, data collected on the levels of on-task and disruptive behavior during different instructional groupings may confirm the teacher's report that a student is substantially less engaged when working independently than when part of small-group instruction.

The second objective of direct observation is to provide data on instructional variables and student–teacher interactions under natural conditions of instruction. It is important that the assessment involve evaluation of not only individual student behaviors that may be related to effective development of academic skills, but also the types of student–student and student–teacher interactions that may be related to academic performance. These interactions include the frequency and nature of the teacher's feedback for academic responding (which consists of both affirmations that responses are correct and error correction for incorrect responses); praise and positive recognition of appropriate behavior, engagement, and effort; and the nature and consistency of behavioral corrections when they are needed.

Finally, given the extensive literature on the role of academic engaged time in academic performance (e.g., Greenwood, 1991), the direct observation of academic skills should include estimates of engaged time. This would include data obtained for on-task behavior or opportunities to respond (e.g., Greenwood, Delquadri, et al., 1984; Greenwood, 1996).

In combination with direct observation, it is also useful to examine the products of the student's academic performance while the direct observations are conducted. Reviewing worksheets, a writing sample, or other things the student completed during the observation can allow a more accurate interpretation of the observational data. For example, data collected through direct observation showed that a student had a high level of on-task behavior during an independent work activity, but examination of the worksheet produced during the observation revealed that the student completed only a small number of items correctly and spent most of the time doodling on a portion of the page. Thus, this may be a student who has learned to appear engaged, but in fact is often off-task and struggling academically.
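Engaged time and on-task behavior from such observations are typically summarized as the percentage of observed intervals in which the behavior was scored. As a purely illustrative aid, and not a procedure prescribed in this text, the brief Python sketch below shows one way interval-recording data might be tallied into those percentages; the interval codes and their values are invented for the example.

# Hypothetical illustration only: summarizing interval-recording data from a
# classroom observation. The interval codes below are invented for the example.
from collections import Counter

# Code assigned to each 15-second observation interval:
# "A" = actively engaged, "P" = passively engaged, "O" = off-task.
intervals = ["A", "A", "P", "O", "A", "P", "P", "A", "O", "A",
             "A", "P", "A", "O", "P", "A", "A", "P", "A", "A"]

counts = Counter(intervals)
total = len(intervals)

active_pct = 100 * counts["A"] / total     # percent of intervals actively engaged
passive_pct = 100 * counts["P"] / total    # percent of intervals passively engaged
on_task_pct = active_pct + passive_pct     # engaged (on-task) = active + passive
off_task_pct = 100 * counts["O"] / total   # percent of intervals off-task

print(f"Active engagement:  {active_pct:.0f}% of intervals")
print(f"Passive engagement: {passive_pct:.0f}% of intervals")
print(f"Total on-task:      {on_task_pct:.0f}% of intervals")
print(f"Off-task:           {off_task_pct:.0f}% of intervals")

With these invented data, the student would be scored as actively engaged in 55% of intervals, passively engaged in 30%, and off-task in 15% of intervals, the kind of summary that can then be compared against the teacher's report.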
Getting Ready to Observe

Before one begins the observation process, it is helpful to determine the classroom rules, expectations, and anticipated routine during the observation period. These rules may affect how certain behaviors are defined, what is expected of students, and when a student's behavior is consistent or inconsistent with the teacher's expectations. For instance, some teachers may expect that students must always be seated in their chairs during instruction or activities; therefore, any occasion on which a student is out of their chair during a lesson would be an instance of not following classroom expectations. Conversely, other teachers (e.g., in early elementary grades) may allow students to kneel on their chairs, lean against the desk or table, half-sit, and so on, as long as they are attending or completing an assigned task. Therefore, these behaviors should not be treated as "out of seat" provided the student is on-task and following directions. Sometimes the same teacher's expectations may vary across different instructional activities. Information on the teacher's rules and expectations can be obtained during the initial interview, by looking to see whether they are posted in the classroom, and by paying attention to the behaviors teachers praise versus those they discourage during the lesson. Understanding what behaviors are expected and permissible improves one's interpretation of the direct observation data.

It is also useful to determine the teacher's planned schedule of activities during the observation period. Knowing what is supposed to occur should help the observer identify the times when the targeted student should be observed. It may also be helpful to sketch out a seating chart depicting where the target student sits in relation to their peers. Components of the physical classroom structure can be noted, such as the use of learning centers, location of the teacher's desk, student seating arrangements during different activities, and places where educational materials are stored. These aspects may be useful in later interpreting the quantitative data.

Upon arriving at the classroom, the observer should find an unobtrusive location that provides a clear view of the referred student. Being close enough to hear the student can be helpful but is not always necessary. Overall, the observer should be careful not to be disruptive or so prominent that they end up being distracting to students or the teacher. When possible and agreeable to the teacher, it can be helpful for the observer to arrive at the classroom before the students return from another location (e.g., library, cafeteria, recess). This way the observer can get seated, settled, and gather information on the room setup before students arrive.

All materials should be ready before the observation process begins. During the teacher interview, the observer should have identified information to be verified while observing. Specifically, this includes the actual time allotted for instruction, contingencies for accuracy or work completion, instructional arrangements described by the teacher, and so forth. The observer may want to jot down these items at the bottom of the observation form to serve as reminders.
When Should Observations Be Conducted?

Observations should be conducted primarily during activities related to the child's referred problems. This is one reason why it is helpful to conduct the teacher interview before the direct observations. During the interview, the observer should determine the most important instructional periods for observation. For example, if reading is the primary area of concern and the teacher expresses dissatisfaction with the child's performance during large-group and independent seatwork activities in reading, it is important to observe
the child during both of these situations. If the teacher reports that the student is having difficulties in more than one academic area, observations may be needed in each of those instructional periods. Of course, evaluators' time is short, and typical caseloads do not allow for multiple, extended observations of an individual student. The evaluator must therefore prioritize direct observations for the times in which observation data will be most useful for the evaluation. Forming an initial hypothesis following the teacher interview can guide the evaluator in determining what information will be most important to collect. A second consideration is to observe at least one type of setting or activity in which the teacher indicates the student's behavior or performance is stronger. Data from these settings provide a comparison to the more problematic situations. Comparison data are particularly beneficial when the teacher reports discrepancies in the student's behaviors during different activities or instructional arrangements. For example, if a teacher states in the interview that the referred student is disruptive and fails to complete work accurately during all independent seatwork activities (regardless of academic subject area), but is usually engaged and cooperative during teacher-led group activities, then the observation should be planned to sample the student's behavior during independent work, as well as at least one teacher-led group activity. The purpose of these two observations would be to (1) confirm the teacher's report of the discrepancy, and (2) contrast the differences in instruction, task demands, teacher behaviors, and peer behaviors between the two instructional settings. These differences can result in different instructional "environments" across situations that affect the behavior of the target student. Any comparison observations conducted in settings where the student is more successful can be briefer and less extensive than the observations targeted in the more problematic settings, and treated as "samples" for comparison. It is also a good idea to let the teacher know beforehand that you plan to conduct a brief observation in a situation in which the student is more successful.
How Often Should Observations Be Conducted?

The question of how often and how long observations should be conducted is usually answered with "it depends." Ideally, sufficient observation data should be collected to provide an accurate sample of the referred child's behavior and the instructional environment, and to gather information that is useful for refining the initial hypothesis and guiding the next steps of the assessment. The teacher interview may provide some guidance for deciding observational frequency if the interviewer asks questions concerning the variability of student behavior. Students whose behavior is variable across class settings, academic subjects, or days may require more observations to determine what variables might contribute to the variability in the student's behavior. Time constraints also matter—the number of observations that can be conducted will be constrained by the amount of time available to whoever is conducting the observation. A "best-guess" recommendation is to observe for at least one class period in which the primary problems exist, a sample of a period or activity in which performance is stronger, and then portions of the more problematic settings or other situations on another day, if needed. It is best to observe the student in the different instructional arrangements used in the classroom (as indicated by the teacher) so that the student's behavior can be compared across different conditions. Spreading the observations across 2 or more days may be necessary if a student's behavior is highly variable from day to day or was atypical on the first day of observation. It is important to recognize that the process of conducting observations may alter a student's behavior in the classroom, which is referred to as reactivity. Anyone who has conducted direct observations has had teachers tell them when an observation is finished:
"I can't believe how good Kevin was today. He's never like this." These types of reports are helpful in determining whether the observational data collected are an interpretable sample of the student's usual behavior. Following an observation, the observer should ask the teacher whether the student's behavior during the observation was typical. If the teacher suggests that the student's behavior was atypical, an additional observation under the same conditions is recommended. The primary objective of direct observation is to obtain a picture of the student's typical behavior. It should be kept in mind that reactivity to observation is often a temporary phenomenon. Students quickly habituate to observers in classrooms and act in ways consistent with other periods of the instructional process. The observer is cautioned, however, to not reveal the purpose of the observation to the target student or their peers. The target student's identity should be protected. Sometimes, teachers may ask how you (the observer) want to be introduced to the class or how they should respond if students ask who you are. We have found that it is best for the teacher to introduce you with something like: "This is _______, who is here to watch what we do in reading" or a similar statement. The teacher could also indicate that you are there to observe their teaching.
Observation of Comparison Children

We recommend comparing the behavior of the referred student to peers within the classroom. Peer-comparison data can be very useful in interpreting the observations of the target student because they offer information about the degree of discrepancy between the target student's behavior and typical levels of behavior for other students in the classroom. In other words, peer-comparison data provide a miniature "local norm." It is noted that no names or identifiable information are collected about peers, only behavior data demonstrated by a randomly sampled group of peers in the classroom that will be aggregated across students for comparison purposes.

To select a group of students for collecting peer-comparison data, we recommend randomly selecting five to seven students who are visible to the observer from all parts of the room or seating arrangement. The students observed as peer comparisons should be students involved in the same activity as the target student because this allows the observer to compare the behavior of the target student to that of their peers under the same instructional conditions. For example, if the target student is part of a small group working with the teacher while other students are working independently at their desks, the students selected for peer comparison should be part of the small group with the target student. This also means that if the target student is the only one involved in a particular activity (e.g., the teacher is working individually with the target student while all other students work independently), collecting peer-comparison data is not useful.

The observer can systematically include peer-comparison students in the observation process in different ways. One method is to mark off a subset of intervals (e.g., every fifth interval) during which only the peer-comparison students are observed (the use of interval-based recording methods is explained later in this chapter). For example, if interval 5 is selected as a comparison interval, the observer completes the observation of the referred student during intervals 1–4, and then switches to the comparison students for interval 5. If the observer uses this method, then a different peer-comparison student is observed at each comparison interval (e.g., intervals 5, 10, 15). Another way to observe peers is to alternate between the referred student and the peer-comparison students each interval. For example, the target student would be observed in interval 1, a comparison student in interval 2, the target student in interval
3, a different comparison student in interval 4, and so on. Still another way is to observe simultaneously both the referred child and randomly selected peers during each interval. This procedure obviously requires a modification of the observation form that allows recording both referred and comparison children simultaneously. Although this last method can result in extensive comparison data, it may also result in less than accurate data collection. Regardless of the method used to collect the peer-comparison data, the intervals in which peer data are collected should be aggregated as if a single individual were being observed. By aggregating these intervals, a peer norm is provided for comparison against the behavior of the referred student. This also mitigates concerns regarding the observation of peers who are not the target of the assessment. Because the data are aggregated across the group of peer-comparison students, data for individual comparison students are never reported.
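For evaluators who tally observation data electronically, the logic of reserving every fifth interval for a peer-comparison student and then aggregating those intervals into a single local norm can be sketched in a few lines of code. The Python sketch below is illustrative only: the function names, the 15-interval record, and the engagement values are invented for the example and are not part of any published observation code.

```python
# Illustrative sketch: scheduling peer-comparison intervals and aggregating them.
# All interval data below are hypothetical examples, not real observation results.

def is_peer_interval(interval_number, every=5):
    """Return True if this interval is reserved for a peer-comparison student."""
    return interval_number % every == 0

# Hypothetical record of 15 intervals: (who was observed, whether they were engaged).
observations = {
    1: ("target", True), 2: ("target", True), 3: ("target", False),
    4: ("target", True), 5: ("peer", True),
    6: ("target", False), 7: ("target", True), 8: ("target", True),
    9: ("target", False), 10: ("peer", True),
    11: ("target", True), 12: ("target", True), 13: ("target", True),
    14: ("target", False), 15: ("peer", False),
}

def percent_engaged(records, who):
    """Aggregate intervals for one source ('target' or 'peer') into a single percentage."""
    relevant = [engaged for source, engaged in records.values() if source == who]
    return 100 * sum(relevant) / len(relevant)

print(f"Target student engaged: {percent_engaged(observations, 'target'):.1f}% of intervals")
print(f"Peer comparison (aggregated): {percent_engaged(observations, 'peer'):.1f}% of intervals")
```

Note that the peer intervals are pooled into one percentage, which mirrors the recommendation above to report only an aggregated comparison rather than data for any individual peer.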
Using Existing Observational Codes

Procedures for collecting data systematically through direct observation have been well articulated in the literature (see Daly & Murdoch, 2000; Hintze & Shapiro, 1997; Hintze, Volpe, et al., 2002; Shapiro, 1987a; Shapiro & Clemens, 2005; Skinner et al., 2000; Volpe et al., 2005). After the target behaviors are identified and operationally defined, a procedure appropriate for the data collection process is chosen. The method may involve a simple collection of frequency counts of behaviors, or it may involve a more complex system based on a time-sampling procedure. Although systematic observation systems could be individually tailored to each specific situation, doing so is unnecessarily time-consuming and can result in data that cannot be compared or aggregated across situations. Instead, data should be collected on the same set of target behaviors, in the same manner, across settings or activities. Sometimes these behaviors may be individualized for the target student and the referral concern, but the key aspect is that the same behaviors are recorded with the same methods across each setting. This results in observation data that are easier to collect and, most importantly, can be aggregated and compared across days, settings, or activities. These factors support using a standardized observation system.

Many observational systems have been developed. Some of these systems have been used in investigations and were found to be particularly valuable for use in school settings. Volpe et al. (2005) provided a list of seven codes found in the literature that have been used for purposes of assessing school-based behaviors (see Table 3.2). Alessi and Kaye (1983) described a code designed primarily to assess types of on- and off-task behavior, as well as teacher and peer response to these behaviors. Although not described in any specific research studies, the code included excellent training materials and appears potentially useful for in-class observation of student behavior. Saudargas and Creed (1980) developed the State–Event Classroom Observation System (SECOS), designed specifically for school psychologists. Unlike the Alessi and Kaye (1983) code, the SECOS offers the opportunity to assess at least 15 different student behaviors and 6 teacher behaviors. On the SECOS, behaviors are divided into states (those behaviors that are continuous and recorded as present or absent) and events (those behaviors that are discrete and are counted on an occurrence basis). It also allows for behavioral categories to be added that previously were not defined by the code. Furthermore, the system has been used in at least two studies in which certain behavioral categories appear to be significant discriminators of students with challenging behaviors (e.g., Saudargas & Lentz, 1986; Slate & Saudargas, 1986).
TABLE 3.2. Characteristics of Reviewed Systematic Observation Codes

ADHD School Observation Code (SOC; Gadow et al., 1996). Available from www.checkmateplus.com
Behavior categories: 1. Interference; 2. Motor movement; 3. Noncompliance; 4. Nonphysical aggression; 5. Off-task
Recording method(s): partial interval; 15-second intervals
Computerized? No
Training requirements: Moderate
Typical length of observation: 15 minutes

Academic Engaged Time Code of the SSBD (AET; Walker & Severson, 1990). Available from www.sopriswest.com
Behavior categories: Academic engaged time
Recording method(s): duration
Computerized? No
Training requirements: Low
Typical length of observation: 15 minutes

Behavioral Observation of Students in Schools (BOSS; Shapiro, 2004). Computer version available from www.pearsonassessments.com
Behavior categories: 1. Active engaged time; 2. Passive engaged time; 3. Off-task motor; 4. Off-task verbal; 5. Off-task passive; 6. Teacher-directed instruction
Recording method(s): momentary time sample; partial interval; 15-second intervals
Computerized? Palm
Training requirements: Moderate
Typical length of observation: 15 minutes

Classroom Observation Code (COC; Abikoff & Gittelman, 1985). The reference provides a detailed description of the code and the observation protocol.
Behavior categories: 1. Interference; 2. Minor motor movement; 3. Gross motor standing; 4. Gross motor vigorous; 5. Physical aggression; 6. Verbal aggression; 7. Solicitation of teacher; 8. Off-task; 9. Noncompliance; 10. Out-of-chair behavior; 11. Absence of behavior
Recording method(s): partial interval; whole interval
Computerized? No
Training requirements: High
Typical length of observation: 32 minutes

Direct Observation Form (DOF; Achenbach, 1986). Available from www.aseba.org
Behavior categories: On-task; scales: 1. Withdrawn–Inattentive; 2. Nervous–Obsessive; 3. Depressed; 4. Hyperactive; 5. Attention Demanding; 6. Aggressive
Recording method(s): predominant activity sampling; 4-point Likert scale for 97 problem items
Computerized? No
Training requirements: Low
Typical length of observation: 10 minutes

BASC-2 Portable Observation Program (POP; Reynolds & Kamphaus, 2004). Available from www.pearsonassessments.com
Behavior categories: Adaptive behaviors: 1. Response to teacher/lesson; 2. Peer interaction; 3. Works on school subjects; 4. Transition movement. Problem behaviors: 1. Inappropriate movement; 2. Inattention; 3. Inappropriate vocalization; 4. Somatization; 5. Repetitive motor movements; 6. Aggression; 7. Self-injurious behavior; 8. Inappropriate sexual behavior; 9. Bowel/bladder problems
Recording method(s): momentary time sample; 30-second intervals; 3-point Likert scale for 65 behavior items—some items permit scoring of whether the behavior was disruptive to the class
Computerized? Windows; Palm
Training requirements: Unclear
Typical length of observation: 15 minutes

State–Event Classroom Observation System (SECOS; Saudargas, 1997). Available from [email protected]
Behavior categories: States: 1. School work; 2. Looking around; 3. Other activity; 4. Social interaction with child; 5. Social interaction with teacher; 6. Out of seat. Events: 1. Out of seat; 2. Approach child; 3. Other child approach; 4. Raise hand; 5. Calling out to teacher; 6. Teacher approach
Recording method(s): momentary time sample; frequency recording; 15-second intervals
Computerized? No
Training requirements: Moderate
Typical length of observation: 20 minutes

Note. Training requirements: low, up to 10 hours; moderate, between 11 and 25 hours; high, >25 hours. Adapted from Volpe et al. (2005, pp. 457–459). Copyright 2005 by the National Association of School Psychologists. Reprinted by permission.
Although normative data have been collected on the SECOS, the use of such data to interpret observational codes presents a dilemma. On the one hand, the observer may want to know how the level of performance for the targeted student compares to peers observed in other schools and classrooms under similar conditions. On the other hand, such information may be irrelevant since a local context is needed to accurately interpret the data. Given this dilemma, the collection of data on peers from the same classroom as the target student is important. However, establishing whether the target student’s behavior is within the level expected in similar-age peers on a larger scale can offer helpful feedback to teachers and evaluators. Haile-Griffey et al. (1993) completed SECOS observations on 486 children across grades 1–5 in general education classrooms from one school district in eastern Tennessee. Children were all engaged in independent seatwork, while their teacher worked at their desk or conducted a small-group activity in which the targeted student did not participate. These data allow an evaluator to compare the obtained rates of behavior on the SECOS for the particular targeted student against a large, normative database suggesting typical levels of performance. It is important to remember, however, that these normative data can only be used for comparison when a target student is observed under conditions of independent seatwork. The age of these normative data is also problematic. Greenwood and colleagues (Carta et al., 1985, 1987; Greenwood, Carta, et al., 1991, 1993, 1994; Greenwood et al., 1985) developed a series of computer-based observational measures designed to assess the nature of the academic environment surrounding student classroom performance. Their measures, the Ecobehavioral Assessment Systems Software (E-BASS; Greenwood et al., 1994), contain codes that are used for assessing general education student environments (Code for Instructional Structure and Student Academic Response [CISSAR]), students with disabilities in general education settings (MS-CISSAR), and students in preschool settings [Ecobehavioral System for Complex Assessments of Preschool Environments (ESCAPE)]. Each of the codes includes measures that examine the domains of teachers, students, and the classroom ecology. Each code includes over 50 individual categories grouped into the three domains. The measure uses 20-second momentary time sampling for each of the categories, but its complexity requires extensive training. Table 3.3 provides a listing of the CISSAR categories and codes. Similar sets of codes are used for the MS-CISSAR and ESCAPE. As one can see, the code is comprehensive and offers substantially detailed information about teacher and student behavior. Greenwood, Delquadri, et al. (1984) make clear that the concept of opportunities to respond is not identical to the more typical observational category of on-task behavior common to many direct observational data collection systems. The key difference is the active responding involved in opportunities to respond, compared to the more passive response of on-task behavior. The entire measure uses software that resides on a laptop, which offers quick analysis and feedback once data are collected. The measure has been extensively used in research (e.g., Carta et al., 1990; Greenwood, Delquadri, et al., 1985; Kamps et al., 1991); however, the code’s complexity makes it less useful for everyday use by practitioners.
TABLE 3.3. CISSAR Categories, Descriptions, and Codes

Ecological categories

Activity (12 codes). Subject of instruction. Codes: Reading, mathematics, spelling, handwriting, language, science, social studies, arts/crafts, free time, business management, transition, can't tell

Task (8 codes). Curriculum materials or the stimuli set by the teacher to occasion responding. Codes: Readers, workbook, worksheet, paper/pencil, listen to lecture, other media, teacher–student discussion, fetch/put away

Structure (3 codes). Grouping and peer proximity during instruction. Codes: Entire group, small group, individual

Teacher position (6 codes). Teacher's position relative to student observed. Codes: In front, among students, out of room, at desk, side, behind

Teacher behavior (5 codes). Teacher's behavior relative to student observed. Codes: Teaching, no response, approval, disapproval, other talk

Student behavior categories

Academic response (7 codes). Specific, active response. Codes: Writing, reading aloud, reading silently, asking questions, answering questions, academic talk, academic game play

Task management (5 codes). Prerequisite or enabling response. Codes: Attention, raise hand, look for materials, move, play appropriately

Competing (inappropriate) responses (7 codes). Responses that compete or are incompatible with academic or task management behavior. Codes: Disrupt, look around, inappropriate (locale, task, play), talk nonacademic, self-stimulation

Total codes: 53

Note. From Greenwood, Dinwiddie, et al. (1984, p. 524). Copyright 1984 by the Society for the Experimental Analysis of Behavior, Inc. Reprinted by permission.

Observation Codes for Instructional Quality

Other observational tools have been developed to gather information on the nature and quality of teachers' instruction, including opportunities to respond (OTR) and the extent to which instruction is explicit, engaging, and effective. The Instructional Content Emphasis—Revised (ICE-R; Edmonds & Briggs, 2003) is used to collect data on teachers' reading instruction and includes ratings for (1) the types of reading instruction and targeted skills, (2) time allocated to instructional components, (3) student grouping
patterns, (4) level of student engagement (for students on a group basis; not individual students), and (5) overall instructional quality. The technical adequacy of the ICE-R has been established across several studies (e.g., Ciullo et al., 2019; McKenna & Ciullo, 2016; Swanson & Vaughn, 2010). Another observation tool for capturing instructional quality is the Classroom Observation of Student–Teacher Interactions (COSTI; Smolkowski & Gunn, 2012). The COSTI can be used for observing any academic area. It focuses on several key features of explicit instruction, providing data on (1) the frequency of teachers’ explicit modeling and demonstration of skills, (2) rate of independent practice opportunities provided to students (i.e., OTR), (3) student errors, and (4) error correction and academic feedback provided to students based on their responses. As such, the COSTI captures data on the cyclical sequence of teacher–student interactions (i.e., teacher model–student response–teacher
feedback or error correction) that is characteristic of explicit instruction (Smolkowski & Gunn, 2012). The technical adequacy and utility of the COSTI have been demonstrated in instruction in reading (Nelson-Walker et al., 2013; Smolkowski & Gunn, 2012) and mathematics (Doabler et al., 2015, 2021). Instructional observation tools like the ICE-R and COSTI are designed to focus on the teacher’s instruction. Although they provide data on student OTR and engagement, these data are collected at a group level, not individual students. Nevertheless, these tools may be useful for practitioners who suspect that a major reason for a student’s academic difficulties is inadequate instruction. Data from these types of tools would complement direct observation of the student’s behaviors collected in the same situations. Additionally, the ICE-R or COSTI may be particularly helpful for school leadership or coaching teams that evaluate the overall quality of core instruction, which might be done as part of MTSS implementation or other system-wide improvement efforts to improve teaching.
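Because tools like the COSTI summarize instruction as counts of discrete events (models, opportunities to respond, errors, corrections), their core arithmetic is simply converting tallies to rates. The sketch below illustrates that conversion under assumed values; the event names, counts, observation length, and the derived correction ratio are hypothetical and are not taken from the COSTI scoring procedures.

```python
# Hypothetical tallies from a single observed lesson (values invented for illustration).
observation_minutes = 20
event_counts = {
    "teacher_models": 12,          # explicit modeling/demonstrations
    "practice_opportunities": 46,  # independent opportunities to respond (OTR)
    "student_errors": 9,
    "error_corrections": 8,        # corrective feedback following errors
}

# Convert raw counts to rates per minute so lessons of different lengths can be compared.
for event, count in event_counts.items():
    print(f"{event}: {count} events ({count / observation_minutes:.1f} per minute)")

# A simple derived index: the proportion of student errors that received corrective feedback.
correction_ratio = event_counts["error_corrections"] / event_counts["student_errors"]
print(f"Errors followed by correction: {correction_ratio:.0%}")
```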
Behavioral Observation of Students in Schools

Seeking a code that is practitioner-friendly but targets behaviors known to be strongly related to outcomes for students' academic achievement, Shapiro (1996a, 2003a) developed the Behavioral Observation of Students in Schools (BOSS), based in part on some of the codes described above. The BOSS offers practitioners a simple observational tool designed to assess key components of student academic functioning, including the student's overall on- and off-task behavior, whether students are actively or passively interacting with their academic tasks, and the nature of off-task behavior types. A description of the code is provided here, with a detailed description in the workbook accompanying this text.

The BOSS divides on-task behavior into two types: active and passive engaged time (see Appendix 3B). Three subtypes of off-task behavior are observed: verbal, motor, and passive. In addition, data are collected on the teacher's instruction. The code is also arranged to collect data on peer-comparison students. Identifying information is added at the top of the data sheet, using appropriate consideration of student privacy and confidentiality. In the space provided for the academic subject, the observer should write in the exact activity observed. For example, during reading, students may be involved in small-group work with the teacher, using a workbook, or engaged in silent reading. Indicating the specific activity (rather than simply noting "reading") allows for subsequent comparisons of behavior across different instructional activities and arrangements. To the right of this line are a series of numbers with abbreviated codes. These represent the four most common instructional situations:

1. ISW:TPsnt—Individual seatwork, teacher present: The target student is engaged in an individual seatwork activity, while the teacher is circulating around the room checking work or working with individual students. The teacher may also be seated at their desk.

2. ISW:TSmGp—Individual seatwork, teacher working with a small group of students: The target student is engaged in an individual seatwork activity, while the teacher is working with a small group of students that does not include the target student.

3. SmGp:Tled—Small group led by the teacher: The target student is part of the group of students being taught by the teacher.

4. LgGp:Tled—Large group led by the teacher: The target student is part of a large group of students, defined as at least half the class.
Occasionally, the observer will encounter a situation that is not covered by any of these four options. One such common situation occurs when learning centers or cooperative learning are being used. This different situation should simply be noted on the form. If the instructional arrangement changes during the observation, this change should be noted by circling the appropriate interval number where the setting changed and recording the nature of the changed instructional environment.

Before beginning the observation, the observer will need to obtain some type of timing device to keep track of the intervals. It is strongly recommended that a stopwatch not be used. Conducting these types of observations requires careful and constant attention to the occurrence of behavior. Using a stopwatch requires the observer to look down frequently at the watch and then back up at the student. Instead of a stopwatch, the observer should use some kind of cueing device—for example, a device that makes an audible sound at the set number of seconds (used with an earphone), or that vibrates at specified intervals. Some of the best timers for direct observations are interval timer applications for smartphones, and many free options are available in both the Apple App Store and the Google Play Store. The key is to find a so-called "looping" timer app that (1) allows the user to specify the length of the intervals (e.g., 15 seconds, as used in the BOSS), (2) uses your phone's vibrate feature, and (3) continuously repeats (i.e., "loops") the timer at equal intervals, signaling at the end of each interval. These looping interval timers are commonly used in cross training and other interval-based exercise routines. Readers can search the app store using "interval timer" or "looping timer" as the keywords.

Typically, observations are conducted for a period of not less than 15 minutes at any one time. Some observations may last up to 30 minutes. If 15-second intervals are used, at least two data sheets (60 intervals) will be needed to conduct the observation. It is suggested that the observer develop sets of recording sheets with intervals numbered 1–120 to permit the collection of data for up to 30 minutes at a time.

The next part of the form lists the behaviors to be observed down the left-hand column. Intervals are listed across the top row, with every fifth interval depicted in gray, indicating that these are peer-comparison intervals. Data on active and passive engaged time are collected using momentary time sampling (i.e., the behavior is recorded if it is occurring at the moment the interval starts). Off-task behaviors are collected using a partial-interval recording system (i.e., the behavior is recorded if it occurs at any point during the interval). Teacher-directed instruction (TDI) is sampled once every five intervals using a partial-interval recording system. Each behavioral category and recording method are carefully defined through examples of occurrence and nonoccurrence in the BOSS manual, which is provided in the Academic Skills Problems Fifth Edition Workbook that accompanies this volume. For example, both active engaged time (AET) and passive engaged time (PET) require that the student first be "on-task," defined as attending to their work or assigned task. If the on-task student is actively writing, raising their hand, reading aloud, answering questions, talking to others (peers or teacher) concerning academics, or finding their page in a book, an occurrence of AET would be scored.
Similarly, if the student is on-task but their behavior is passive, such as reading silently, listening to the teacher or a peer, looking at the blackboard during instruction, or looking at academic materials, an occurrence of PET is scored. As each 15-second interval begins, the student is scored for the presence or absence of AET or PET at the instant the interval is cued (momentary time sampling). During the remainder of the interval, any off-task behaviors (motor, verbal, or passive) are each scored once as soon as they occur at any point during that interval (partial-interval time sampling). The BOSS requires only that the behavior has occurred, not a count of its
frequency. In other words, if during an interval a student gets out of their seat and talks to a peer, a mark would be placed in the Off-Task Motor (OFT-M) and Off-Task Verbal (OFT-V) categories. However, if the student in the same interval then talked to a second peer, only a single mark, indicating that the behavior had occurred, would be scored for that interval. Therefore, momentary time-sampling and partial-interval time-sampling methods provide an estimate of the occurrence of the behaviors, not a precise frequency count. The benefit of these methods is that they allow the observer to collect data on multiple behaviors at the same time, which would be impossible if every instance of a behavior had to be counted. Every fifth interval, the observer using the BOSS randomly selects a different nontarget student in the class to observe. Also in that interval, any TDI is marked. Thus, when the observation is completed, the BOSS provides data on the targeted student, aggregated peer-comparison data, and an estimate of the time the teacher spent instructing the class.
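To make the two recording rules concrete, the following sketch scores a handful of intervals the way the text describes: engagement is judged only at the instant each interval begins (momentary time sampling), while each off-task category is marked if it occurred at any point in the interval (partial interval). The data structure, interval values, and labels here are invented for illustration and are not drawn from the BOSS materials.

```python
# Hypothetical interval records (values invented for illustration).
# "engaged_at_start" reflects momentary time sampling at the start of the interval;
# the off-task flags reflect partial-interval recording (marked once if the behavior
# occurred at any point during the interval).
intervals = [
    {"engaged_at_start": "AET", "OFT-M": False, "OFT-V": False, "OFT-P": False},
    {"engaged_at_start": "PET", "OFT-M": False, "OFT-V": True,  "OFT-P": False},
    {"engaged_at_start": None,  "OFT-M": True,  "OFT-V": True,  "OFT-P": False},
    {"engaged_at_start": "PET", "OFT-M": False, "OFT-V": False, "OFT-P": False},
    {"engaged_at_start": None,  "OFT-M": False, "OFT-V": False, "OFT-P": True},
]

def percent(count, total):
    return 100 * count / total

n = len(intervals)
aet = sum(1 for i in intervals if i["engaged_at_start"] == "AET")
pet = sum(1 for i in intervals if i["engaged_at_start"] == "PET")
print(f"AET: {percent(aet, n):.1f}% of intervals")
print(f"PET: {percent(pet, n):.1f}% of intervals")
for category in ("OFT-M", "OFT-V", "OFT-P"):
    occurrences = sum(1 for i in intervals if i[category])
    print(f"{category}: {percent(occurrences, n):.1f}% of intervals")
```

As in the scoring description above, the resulting percentages are estimates of how often each behavior occurred across intervals, not frequency counts of every instance.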
Interpreting Data from the BOSS

Data from the BOSS offer information about the frequency, intensity, consistency, and nature of a student's on-task and off-task behavior. Although less detailed than the SECOS or E-BASS, the measure provides important insights into the level of engagement that a student is exhibiting in their classroom. The BOSS does not provide extensive information on the student–student or student–teacher contact patterns, academic ecology, or teachers' instruction as would be captured by measures such as the E-BASS or instructional observation tools such as the ICE-R or COSTI. It has been our clinical experience over many years that the BOSS offers the essential data needed to more clearly understand the aspects of the educational environment that may be impacting student performance. Additional data can be provided by other tools, as needed.

Data from the BOSS can provide a fuller understanding of the nature of academic engagement among students with academic or behavior difficulties. For example, in a study of academic skills problems of children with attention-deficit/hyperactivity disorder (ADHD), Junod et al. (2006) and DuPaul et al. (2006) used the BOSS to examine differences in the nature of academic engagement and off-task behavior. Their study compared 92 students between first and fourth grades with confirmed diagnoses of ADHD, along with 63 control students matched to the students with ADHD based on gender and grade. Each student was observed using the BOSS for 15 minutes during a period of independent seatwork in both reading and math instruction. Students with ADHD had significantly lower rates of passive academic engagement and higher rates of off-task behavior than normal controls. Of particular importance was that differences on the BOSS code categories discriminated among students with and without ADHD. For example, Junod et al. (2006) found that students with ADHD demonstrated significantly lower rates of AET than randomly selected peers, which is notable because providing opportunities for active engagement is one of the most important instructional considerations for students with ADHD. They also observed that OFT-M was a particularly powerful discriminator among students with ADHD and controls. Overall, the studies revealed that the BOSS is sensitive to specific forms of engagement and off-task behavior, which can play an important role in assessment for students whose classroom behavior affects their achievement.

Data collected from direct observations can be organized in specific ways for interpretation. Although data can be aggregated across observations, it is also important to separately summarize results collected in different academic subjects, activities, or instructional groupings, which allows behavior to be compared across different conditions. For
example, the rates of the target student's engagement and off-task behaviors could be compared across large-group teacher-led instruction, small-group teacher-led instruction, and an independent computer activity. This may also include summarizing results from observations that occur on different days, even if the same instructional activity is observed. Thus, if independent seatwork is observed during reading on Monday for 15 minutes and on Tuesday for 20 minutes, each of these sets of observations is scored independently. They also can be aggregated if behavior was similar in both observations. If observations are conducted of the same academic subject in the same day, but during different instructional settings (e.g., large group led by the teacher [LgGp:Tled] vs. individual seatwork, teacher present [ISW:TPsnt]), these sets of observations should also be scored separately. The observer must be careful to check whether the instructional setting changes during an observation. For example, if during a 30-minute observation the observer finds during interval 45 that the setting goes from ISW:TPsnt to SmGp:Tled, the data scored during intervals 1–45 should be treated separately from those scored on intervals 45–120.

Differences across activities or instructional groupings may reveal that a student experiences the most difficulty under only certain types of instructional conditions. Table 3.4 shows the outcomes of BOSS observations conducted with Mario (the child described in Figure 3.3) during two different types of writing assignments. The data indicated that his behavior was much more disruptive (much higher motor and OFT-V behavior) relative to peers during an independent writing activity compared to a whole-class instructional period. Mario's level of engagement, both active and passive, was also significantly below his peers during the independent writing activity. It was important to note that even under whole-class instruction when Mario showed overall engagement equal to that of his peers (i.e., the combination of active and passive engagement), Mario's level of active engagement was only one-fourth that of his peers. Given the importance of active engagement for students who struggle academically, interventions targeted at increasing Mario's level of active engagement appear to be warranted.

Although data on each of the specific off-task behaviors (OFT-M, OFT-V, OFT-P) may be clinically useful, these data can also be aggregated into a total off-task score. Some studies have found that this approach is more straightforward and still conveys important information about the intensity of the student's off-task behaviors (e.g., DuPaul et al., 2004). Users may also wish to report data on the individual off-task behaviors, as well as the aggregate total off-task.

TABLE 3.4. Comparison of BOSS Observations in Whole-Class and Independent Writing Assignments for Mario

Behavior | Whole-class instruction: Mario (48 intervals) | Whole-class instruction: Peers (12 intervals) | Independent writing: Mario (48 intervals) | Independent writing: Peers (12 intervals)
Active Engaged Time | 04.17% | 16.67% | 14.58% | 41.67%
Passive Engaged Time | 87.50% | 75.00% | 08.33% | 33.33%
Off-Task Motor | 00.00% | 04.17% | 60.42% | 25.00%
Off-Task Verbal | 02.08% | 00.00% | 16.83% | 04.17%
Off-Task Passive | 02.08% | 00.00% | 02.08% | 04.17%

Teacher-directed Instruction (12 intervals): whole-class instruction, 83.33%; independent writing, 58.33%.
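If a total off-task score is wanted in addition to the separate categories, one reasonable way to compute it is to count an interval as off-task when any off-task category was marked during it, so that an interval with multiple marks is not double-counted. The sketch below illustrates this approach with invented interval records; evaluators should defer to the scoring rules of whatever observation code or manual they are using.

```python
# Hypothetical partial-interval records for one observation (values invented for illustration).
# Each entry lists which off-task categories, if any, were marked during that interval.
intervals = [
    set(), {"OFT-V"}, {"OFT-M", "OFT-V"}, set(), {"OFT-P"},
    set(), {"OFT-M"}, set(), {"OFT-M", "OFT-P"}, set(),
]

total = len(intervals)
# An interval counts once toward the aggregate even if several off-task categories were marked in it.
any_off_task = sum(1 for marks in intervals if marks)
print(f"Total off-task: {100 * any_off_task / total:.1f}% of intervals")
```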
Similar aggregation of behaviors can be carried out with the separate engagement behaviors. In many situations, the most important behaviors measured by the BOSS are AET and PET. Together, these behaviors represent time on-task or academic engagement. Assessors can add these two categories together if they want to report the overall level of on-task behavior observed for the student, and this technique has been used in some research studies with the BOSS (e.g., Steiner, Frenette, et al., 2014; Steiner, Sheldrick, et al., 2014). When aggregated, the data represent an estimate of the percentage of time that the student was attentive and engaged with the assigned activity. However, also considering the relative distribution between active and passive engagement can be a meaningful way of discussing the data. Given the established links between a student's level of active engagement and academic outcomes (e.g., Greenwood, 1991; Greenwood, Dinwiddie, et al., 1984; Greenwood, Horton, et al., 2002), the degree to which a student shows levels of active engagement that approach or exceed passive engagement would be viewed as evidence of a strong academic instructional program. Of course, such interpretations must be tied to the nature of the activity observed during the instruction. Greater rates of active engagement do not necessarily mean that instruction is always more effective than situations in which passive engagement is expected. For example, suppose a teacher is teaching a large group and using a group question-and-answer format. In that case, the level of active engagement is likely to be lower than if a more cooperative grouping method were being used, but instruction may be effective in both situations. At the same time, if, during a cooperative group lesson, a referred student has a very low level of active engagement but a high level of passive engagement, the contrast suggests the possibility that the student is not significantly contributing to the group responses. Overall, the relative proportion of active engagement, passive engagement, and total engaged time should be interpreted considering the individual target student, their unique needs, and specific instructional context.

Understanding why a student may have lower than expected levels of AET and PET is also facilitated by noting what the student was doing instead of working. For example, a high level of Passive Off-Task (OFT-P) behavior during an independent seatwork activity indicates the student was not paying attention to the assignment, but the interpretation of the observation is augmented by noting that the student was more interested in what the teacher was doing in small-group instruction with other students. Alternatively, if the student shows high levels of OFT-V, it suggests that the student's interactions with peers may interfere with assignment completion. By anecdotally noting the quality of these interactions (e.g., joking around vs. teasing each other) and the content (e.g., discussing the assignment, even though peer interactions were not permitted), the observer can determine the possible reasons for the high levels of peer interaction.

The level of TDI in the classroom indicates the degree to which the teacher actively teaches the students during the observation. These data play a role in determining whether the frequency of teacher–student interaction is sufficient to expect students to maintain effective levels of AET and PET.
For example, as seen in Table 3.4, the TDI under whole-class instruction was much higher than under independent work conditions. One possibility that may need to be investigated further for our example student, Mario, is that he is more responsive and engaged during teacher-led direct instruction. Of course, conclusions about which events in the classroom affect student performance should consider the goals of the instructional activities observed (i.e., there are some activities in which direct instruction is less relevant or not feasible), and conclusions should not be based on teacher behaviors alone. Other information collected during the interview and observed during the systematic data collection may be involved in the student's behavior during instruction, such as contingencies for work completion, the frequency and immediacy of
teachers’ corrective and affirmative feedback, and students’ awareness of the rules and expectations for a given activity. Relevant factors should be noted during the observation and considered during the analysis of the observational data. When performing calculations on the BOSS, the observer must separate the intervals marked for the referred student and those marked for the comparison students. Once the protocol is scored for the target student, similar data should be calculated for the peer-comparison observations. As noted previously, data for all peer-comparison intervals should be aggregated across students for each behavior. We remind readers to never report identifying information or individual data for peer-comparison students. Examining the work products (e.g., worksheets, workbook pages, writing assignments, math sheets) that the student produced during the observation can also help interpret the BOSS data. For example, if during the work session the student is found to have a 40% level of overall engagement (AET + PET) but completes the work with 95% accuracy, it would suggest problems with the behavior expectations of the activity or competing behaviors that interfere with the student’s engagement, as opposed to issues with the skill involved in the assigned task. On the other hand, if the student is found to complete 50% of the work with low levels of accuracy, then this may imply that the student lacks sufficient skills to complete the assignment accurately. Furthermore, examination of work products may show that the student completes only 50% of the assignment but with 100% accuracy. This may suggest either that the student has a problem with rate of performance (accurate but too slow) or that the student lacks sufficient motivation (in which case the contingencies for reinforcement need to be examined).
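The reasoning described above, in which the engagement estimate is weighed against the completeness and accuracy of the work produced, can be expressed as a set of simple decision rules. The sketch below is only a rough illustration of that logic; the cutoffs (60% engagement, 80% accuracy and completion) are arbitrary placeholders chosen for the example, not values recommended by the BOSS or by this text.

```python
# Rough illustration of pairing engagement data with a permanent-product review.
# Thresholds are arbitrary placeholders for the example, not recommended cutoffs.
def interpret(engagement_pct, work_completed_pct, accuracy_pct):
    engaged = engagement_pct >= 60
    accurate = accuracy_pct >= 80
    completed = work_completed_pct >= 80

    if not engaged and accurate:
        return ("Low engagement but accurate work: consider behavior expectations or "
                "competing behaviors rather than a skill deficit.")
    if not accurate:
        return "Low accuracy: the student may lack sufficient skills for the assigned task."
    if accurate and not completed:
        return ("Accurate but incomplete work: consider rate of performance (accurate but "
                "slow) or motivation and reinforcement contingencies.")
    return "Engagement, completion, and accuracy all appear adequate in this sample."

# Hypothetical cases mirroring the scenarios described in the text.
print(interpret(engagement_pct=40, work_completed_pct=90, accuracy_pct=95))
print(interpret(engagement_pct=70, work_completed_pct=50, accuracy_pct=100))
print(interpret(engagement_pct=70, work_completed_pct=50, accuracy_pct=40))
```

In practice, of course, these judgments are hypotheses to be refined with the rest of the assessment rather than conclusions drawn from fixed thresholds.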
BOSS Adaptations

BOSS-Modified

The structure of the BOSS is useful in allowing for multiple behaviors to be observed and recorded within each interval. As described earlier, the specific forms of engagement and specific types of off-task behaviors can be of high utility in assessments of students with behaviors that interfere with learning. Over the years, after conducting hundreds of direct observations with the BOSS and similar variations, training students and research staff to use it, and talking with school psychologists, I (Clemens) have found some aspects that can be simplified and streamlined. These experiences have resulted in the BOSS-Modified (BOSS-M).

There is a case to be made for coding overall engagement, as opposed to separately coding active and passive engagement (i.e., AET and PET, respectively). First, it should be noted that the primary utility for coding AET and PET separately is to capture the opportunities for active engagement in the environment. However, there are occasions where differences in AET and PET across settings may be difficult to interpret; low AET demonstrated by the student does not necessarily mean there were low opportunities for active engagement. Second, differences in AET and PET demonstrated by the student may matter less than their overall level of engagement. Although active engagement is certainly important for learning and maintaining students' attention, passive engagement is still valuable and may be the only option in some situations. Therefore, there are times when simply coding engagement, whether active or passive, is sufficient for answering questions regarding the on-task behavior for a student across instructional situations. This is how Steiner and colleagues aggregated the data in a series of studies that used the BOSS in observing students with ADHD (Steiner, Frenette, et al., 2014; Steiner, Sheldrick, et al., 2014). For these reasons, overall engagement is coded on the
BOSS-M using "On-Task" behavior, which is defined as "Student behavior that meets expectations for the situation (e.g., engaged with assigned task, attending to teacher or relevant work, waiting appropriately)." AET and PET are subsumed under On-Task and are not coded separately. An additional limitation of the standard BOSS pertains to coding off-task behaviors. Sometimes, the resulting data on the off-task behaviors simply reflect the inverse of engagement. Rather than collect data on off-task behavior in which the interpretation is inversely redundant with engagement, it can be helpful to collect data on disruptive behavior. Disruptive behaviors interfere with learning for the target student and potentially with the teacher's instruction. Disruptive behavior on the BOSS-Modified is defined as a "Student action that interrupts classroom activity, interferes with instruction, or distracts other students (e.g., out of seat, playing with objects which distracts others, acting aggressively, talking/yelling about things that are unrelated to classroom instruction)." It complements the measurement of academic engagement and makes the observation data more comprehensive by capturing behaviors that are more impactful to the learning environment than just off-task behavior. Intervention studies have indicated that challenging behaviors are one of the primary reasons for students' inadequate responsiveness to intervention (Al Otaiba & Fuchs, 2002; Nelson et al., 2003; Torgesen et al., 1999). For an individual student, disruptive behaviors negatively affect their learning and social relationships, which poses risks to their academic growth and social-emotional development. In addition, disruptive behaviors negatively affect the learning environment; they often require teachers to stop instruction to manage the behavior and interrupt other students' engagement. Thus, measuring disruptive behaviors helps capture the nature and severity of the target student's difficulties and effects on the instructional setting. Studies of the technical properties of the BOSS-Modified are ongoing, so users are cautioned to view this as a pilot version. The BOSS-Modified is included in Appendix 3C and the Academic Skills Problems Fifth Edition Workbook that accompanies this text. It represents a simple adjustment to the standard BOSS. On-Task is coded using momentary time sampling and Disruptive Behavior is coded using partial-interval time sampling. It uses the same 15-second interval structure and maintains options for collecting peer-comparison data, instructional grouping, activities, and TDI.
BOSS Early Education

Hojnoski and colleagues adapted the BOSS for use in PreK and early childhood education settings (Hojnoski et al., 2020). The BOSS Early Education (BOSS-EE) utilizes the same 15-second interval-based recording method as the standard BOSS, but includes adaptations and refinements to the observed behaviors that make them appropriate for observing young children. Coding of active and passive engagement is maintained; however, the BOSS-EE definition for active engagement includes motor and verbal behaviors that characterize many learning activities in PreK settings. An additional adaptation includes coding "interfering" behaviors rather than the three off-task behaviors. Interfering behaviors are those outside the assigned activity's requirements and are distracting, disruptive, or potentially harmful to the target child or others in the setting. Preliminary evidence for the reliability and validity of the BOSS-EE behavior definitions and observer data has been established (Hojnoski et al., 2020), and previous studies found that 15-second intervals (as used on the standard BOSS) strike an appropriate balance between accuracy of behavior estimates and feasibility for regular use (Wood et al., 2016; Zakszeski et al., 2017).
Direct Behavior Ratings

Although direct observations offer significant advantages for evaluating behavior in the academic environment, they are time-intensive and difficult to repeat. It can be very difficult for school staff with a large caseload to schedule time for direct observations, especially multiple observations of the same student. In these situations, users can consider direct behavior ratings (DBRs; Chafouleas, 2011; Chafouleas et al., 2009). By merging direct observation and rating scale formats, DBRs are flexible and efficient tools that reduce resource limitations involved in conducting repeated direct observations. DBRs draw on the validity of single-item scales, such as the pain scale (i.e., when a health care worker asks you to rate your current level of pain on a scale of 0–10), which has been used extensively in medicine and has established validity (Ferreira-Valente et al., 2011). DBRs simply involve having an observer (often a teacher) rate, on a scale of 0–10, the extent to which a behavior occurred during a specified time frame. DBRs can be used for a single behavior, can be adapted to include behaviors specific to a particular student, or can be used with a set of keystone behaviors. A standard DBR form includes three keystone behaviors that encompass most behaviors that impact student learning and the educational environment: academically engaged (i.e., on-task), respectful (i.e., compliant, follows directions), and disruptive (externalizing behavior). Each is rated on a scale of 0–10. Extensive development and evaluation work found that DBRs demonstrate strong interrater reliability and criterion-related validity with systematic direct observation (Chafouleas, 2011; Christ et al., 2011; Riley-Tillman et al., 2008). For example, Riley-Tillman et al. (2008) found that DBRs collected by teachers on two behaviors, on-task and disruptive behaviors, demonstrated strong agreement and high correlations with systematic direct observation data of the same behaviors by trained observers. Evidence also indicates that DBRs are sensitive to behavior change over time (Chafouleas et al., 2012), making them useful for monitoring progress in classroom behavior. Ratings for a single student can be completed in a matter of seconds.

There are several advantages to including DBRs in an assessment. The most obvious advantages are their efficiency and the potential for practitioners to use them in conjunction with direct observation. DBRs can effectively complement direct observations in several ways. First, they can be used following the teacher interview (but before direct observations) to (1) verify information from the interview, and (2) help identify specific class periods in which direct observation data will be most beneficial (e.g., class periods in which engagement is lowest, and times when engagement is strongest). Second, DBRs can be used as a data source when a student's behavior is infrequent and "difficult to catch," or when it tends to occur at times of day when the observer cannot be present. Third, DBRs can be used after a direct observation to (1) corroborate the data collected, (2) provide comparison data from another class period that was not observed, or (3) collect follow-up data if the student's behavior was atypical during the direct observation. Fourth, DBRs provide an excellent source of progress monitoring data to evaluate changes in student behavior after an intervention has been implemented.
DBR materials can be obtained from the authors’ website at dbr.education.uconn.edu.
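Because a DBR is simply a 0–10 rating tied to a date and an activity, even a basic spreadsheet or short script can store and summarize the ratings over time. The sketch below uses the three keystone behaviors named above, but the dates, ratings, and the idea of averaging across days are illustrative assumptions rather than prescribed DBR procedures.

```python
# Illustrative DBR log: one 0-10 rating per keystone behavior per day (hypothetical values).
dbr_log = [
    {"date": "2024-03-04", "academically_engaged": 4, "respectful": 7, "disruptive": 6},
    {"date": "2024-03-05", "academically_engaged": 5, "respectful": 8, "disruptive": 5},
    {"date": "2024-03-06", "academically_engaged": 7, "respectful": 8, "disruptive": 3},
]

def mean_rating(log, behavior):
    """Average a single keystone behavior's ratings across the logged days."""
    return sum(entry[behavior] for entry in log) / len(log)

for behavior in ("academically_engaged", "respectful", "disruptive"):
    print(f"{behavior}: mean rating {mean_rating(dbr_log, behavior):.1f} "
          f"across {len(dbr_log)} days")
```

A simple summary like this can sit alongside direct observation data when monitoring whether classroom behavior is changing after an intervention is introduced.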
STUDENT INTERVIEW

The student's perspective adds to one's understanding of the academic environment. Knowing whether students understand the expectations for tasks and assignments, if they know how to access help, and their knowledge of classroom rules are all helpful
to accurately interpreting the outcomes of direct observations. Students can provide the evaluator with an idea of their perceptions of their relative academic strengths and difficulties, possible sources of confusion in academic instruction, and how malleable they believe their academic skills to be (i.e., the extent to which they believe they can improve). This information can inform the development of intervention strategies. To learn more about student perspectives of the academic environment, an interview with the student is conducted in conjunction with completing a systematic direct observation. Although many student interview formats are available, the one developed by Shapiro (1996a, 1996b, 2004) may be beneficial (see Figure 3.4). The interview, which is specific to a single area of academic performance, is conducted typically after the student has completed an academic task (preferably one they completed independently). It is recommended that the interview focus on the academic skill area of concern; however, it can also be useful to gather the student’s perspectives on a skill area that is a relative strength. Suggested questions to guide the interview are provided at the bottom of the form including questions in reference to the specific work assignment and more general questions on overall academic skills. The interviewer then uses the information to complete the scale at the top. Student perspectives can offer important insights when designing intervention programs. For example, Figure 3.5 provides the outcome of an interview conducted with Mario following a writing assignment. Mario indicated that he understands the teacher’s expectations, feels he can do the assignments, likes the subject, and only complains about the amount of time given. He indicated that he likes writing (although not the cursive form), likes to write his own stories, and enjoys working with others when he is struggling. He also believes he can get better in writing with effort and support. A comparison to Figure 3.3 shows that Mario’s teacher had a very different perspective on his writing performance. His teacher notes that he struggles greatly in the subject, does not really appear to like writing, and has difficulty with all components of the process. In designing an intervention in writing, Mario’s belief that he can get better at writing is a good indication, and intervention can aim to bolster these feelings of self-efficacy. While it is important to consider the student’s perspectives on their academic skills and classroom functioning, evaluators should appropriately balance the information obtained in the student interview with information from the rest of the assessment. This is true of any single data source, but it is perhaps more important to consider with information from the student interview. Some students can be rather inaccurate evaluators of their academic skills; their perceptions may dramatically over- or underestimate their skills in a particular area. Over- or underestimation of one’s skills is more common among younger students, and these estimates can also vary based on the peer group to which the student is comparing themself (Renick & Harter, 1989). A student’s perception of their academic skills that departs significantly from the data in the assessment can help an evaluator design interventions that might take advantage of a student’s high self-efficacy, or foster beliefs they can improve through effort and persistence.
PERMANENT PRODUCT REVIEW The final step in analyzing the instructional environment involves a review of permanent products (i.e., recent work the student completed). In almost every classroom, students produce materials such as worksheets, compositions, tests, quizzes, and reports. These materials can assist the evaluator in learning more about a student’s strengths and weaknesses and their academic performance under the naturally occurring contingencies of the classroom.
Student Interview Form

Student name: _______   Subject: _______   Date: _______

STUDENT-REPORTED BEHAVIOR
(   ) None completed for this area

Understands expectations of teacher: Yes / No / Not sure
Understands assignments: Yes / No / Not sure
Feels they can do the assignments: Yes / No / Not sure
Likes the subject: Yes / No / Not sure
Feels they are given enough time to complete assignments: Yes / No / Not sure
Feels like they are called on to participate in discussions: Yes / No / Not sure
Feels like they can improve in [referred skill area] with effort and support: Yes / No / Not sure

General comments:

Questions used to guide interview:
Do you think you are pretty good in _______?
If you had to pick one thing about _______ you liked, what would it be?
If you had to pick one thing about _______ you don't like, what would it be?
What do you do when you are unable to solve a problem or answer a question with your assignment in _______?
Tell me if you think this statement is True or False about you: "I believe I can get better in [skill area] if I work hard and someone teaches me."
Do you enjoy working with other students when you are having trouble with your assignment in _______?
Does the teacher call on you too often? Not often enough? In _______?

FIGURE 3.4. Student Interview Form.
Student Interview Form

Student name: Mario   Subject: Writing   Date: 4/10/21

STUDENT-REPORTED BEHAVIOR
(   ) None completed for this area

Understands expectations of teacher: Yes / No / Not sure
Understands assignments: Yes / No / Not sure
Feels they can do the assignments: Yes / No / Not sure
Likes the subject: Yes / No / Not sure
Feels they are given enough time to complete assignments: Yes / No / Not sure
Feels like they are called on to participate in discussions: Yes / No / Not sure
Feels like they can improve in [referred skill area] with effort and support: Yes / No / Not sure

General comments: Not enough time to do writing because he does not understand what he needs to do. In writing a letter, takes long time to figure out what to write. Can't do neat cursive since he just learned to do it.

Questions used to guide interview:
Do you think you are pretty good in writing?
   Maybe, sometimes cursive is not too good.
If you had to pick one thing about writing you liked, what would it be?
   Do my own stories.
If you had to pick one thing about writing you don't like, what would it be?
   Writing stories in class.
What do you do when you are unable to solve a problem or answer a question with your assignment in writing?
   Ask a friend.
Tell me if you think this statement is True or False about you: "I believe I can get better in [skill area] if I work hard and someone teaches me."
   True
Do you enjoy working with other students when you are having trouble with your assignment in writing?
   Yes.
Does the teacher call on you too often? Not often enough?
   A little too much.

FIGURE 3.5. Completed Student Interview Form for Mario.
Review of student work products can be particularly informative in three skill areas: spelling, writing, and mathematics. More information on interpreting student’s work in these areas is provided in Chapter 4. In short, reviewing students’ spelling provides insight into their understanding of how letters and letter combinations connect to sounds, as well as their knowledge of irregular spelling patterns and whole-word spellings (i.e., orthographic knowledge). In writing, a review of students’ compositions offers information on a range of skills including transcription (i.e., handwriting and spelling), overall amount of writing produced, grammar and syntax, organization, and cohesion. For example, Figure 3.6 shows the results of a first-grade student’s written-language assignment. The student shows some letter reversals (backward p in play) as well as poor letter formation (e.g., lowercase d). Knowledge of irregular endings (such as y in funny and play) is not evident. However, the student does understand the concept of punctuation (notice the use of a comma after TV). A writing example from Mario’s assessment is shown in Figure 3.7. The writing occurred as part of an in-class assignment. Mario’s writing sample shows his difficulties in many aspects of mechanics, including poor spelling and lack of punctuation. His story also lacks creativity, detail, and depth. The third area in which permanent product review is helpful is mathematics. Reviewing the student’s recent work can help identify skill gaps, misunderstandings, or lack of attention to details such as operation signs. It can also reveal situations in which a student’s problem-solving strategies are not flexibly applied to similar problem types and other situations when skill generalization is not evident. A permanent product review is useful when combined with systematic direct observation. For example, a systematic observation using the BOSS during which a student is asked to complete a page of mathematics problems may show that the student is actively engaged for over 90% of the intervals. However, an examination of the worksheet
FIGURE 3.6. Example of written-language assignment used for permanent product review. Upper panel: “Watch TV, and go outside and play”; lower panel: “When I make a funny face.”
FIGURE 3.7. Written-language sample for Mario.
produced during that observation reveals that the student completed only 3 out of 20 problems. Thus, although the student is highly engaged, the rate of work completion is so slow that the student is likely to be scoring poorly on mathematics tests. In summary, the combination of permanent product reviews, along with other data collected, can be a helpful method for evaluators to better understand the nature of the classroom demands and their influence on student performance.
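A minimal sketch of the comparison described in the example above is shown below (Python). The interval counts are hypothetical values chosen to match the example of a student engaged for over 90% of intervals who completed only 3 of 20 problems.

```python
# Minimal sketch (hypothetical numbers): contrasting observed engagement with
# work completion from a permanent product, as in the example above.

engaged_intervals, observed_intervals = 55, 60   # from a BOSS observation
problems_completed, problems_assigned = 3, 20    # from the math worksheet

engagement_pct = engaged_intervals / observed_intervals * 100
completion_pct = problems_completed / problems_assigned * 100

print(f"Engagement: {engagement_pct:.0f}% of intervals")   # about 92%
print(f"Completion: {completion_pct:.0f}% of problems")    # 15%

# High engagement paired with very low completion suggests a slow work rate
# (or skill difficulty) rather than off-task behavior as the primary concern.
```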
HYPOTHESIS FORMATION AND REFINEMENT This chapter described the initial steps in conducting an academic assessment. The teacher interview, student interview, direct observations, and review of permanent products combine to identify the problem and assess the instructional environment. The teacher interview allows the evaluator to form an initial hypothesis regarding the causal and maintaining factors of the student’s academic difficulty. This hypothesis is reevaluated and refined based on the findings from the direct observation, student interview, and review of permanent products. The hypothesis then guides the direct assessment of academic skills, any additional assessment activities, and ultimately intervention development. A framework to support the development of a hypothesis statement is provided at the end of the Teacher Interview Form (see Step 1, Form 1, in the Academic Skills Problems Fifth Edition Workbook). This framework simply provides a guide to getting started. One’s hypothesis need not follow this format exactly. Readers will note that the hypothesis statement identifies the primary area of academic difficulty, the suspected reason(s) for the difficulty in terms of inadequate or underdeveloped skills, difficulties with behavior or learning-related skills that may be related to the problem, and instructional or classroom environment factors that may be contributing. These factors can then be formulated into a hypothesis statement that will guide the assessment process going forward. The hypothesis will be refined and revised as additional data are gathered. The next portion of the assessment, directly evaluating the student’s academic skills, is described in Chapter 4.
APPENDIX 3A
Teacher Interview Form for Identification of Academic Difficulties

Student: _______   Teacher: _______   School: _______   Grade: _______
Interviewer: _______   Date of interview: _______

Suggested introductory language: The purpose of this interview is to gather some basic information on [student's] areas of academic difficulty and functioning in the classroom. This information will be highly useful in guiding my assessment. Some of the questions will involve the academic curricula, materials, and instructional groupings or routines that you use. These questions are meant to better understand the student's difficulties in the current context, and are not meant to evaluate any aspects of your teaching.

1. General Information and Referral Concerns
What is the primary academic area of concern (reading, math, or written expression)?
Are there any additional areas of difficulty (reading, math, or written expression)?
Areas of relative strength for the student (i.e., skill areas that are strongest for this student specifically):
2. Primary Area of Concern
2A. What specific aspects of [primary area of difficulty—reading, math, or written expression] are problematic?
2B. Intervention and Support Strategies
Have they received any supplementary support or intervention in this area?
What kind of strategies have been tried, and to what extent were they successful?
2C. Curriculum and Instruction in Area of Primary Concern
Title of curriculum or series used in this area:
Are there other instructional materials used in addition or in place of the curriculum?
At this point in the school year, what types of skills are students expected to demonstrate in this area?
What time do you typically teach this subject?
3. Secondary Area of Concern (if applicable)
3A. What specific aspects of [secondary area of difficulty—reading, math, or writing] are problematic?
3B. Intervention and Support Strategies
Have they received any supplementary support or intervention in this area?
What kind of strategies have been tried, and to what extent were they successful?
3C. Curriculum and Instruction in Secondary Area of Concern
Title of curriculum or series:
Are there other instructional materials used in addition or in place of the curriculum?
At this point in the school year, what skills are students expected to demonstrate in this area?
What time do you typically teach this subject?

4. Behavior
Next I'd like to ask about [student's] behavior and learning-related skills during academic instruction and activities. On a scale of 1 to 5, with 1 being "never" and 5 being "always," please indicate how often the student demonstrates the behavior during academic instruction and activities.

a. Stays engaged (on-task) during teacher-led large group instruction: 1  2  3  4  5
b. Stays engaged (on-task) during teacher-led small group instruction: 1  2  3  4  5
c. Stays engaged (on-task) during partner work or independent work: 1  2  3  4  5
d. Follows directions: 1  2  3  4  5
e. Shows effort and persistence, even when work is difficult: 1  2  3  4  5
f. Asks for help when needed: 1  2  3  4  5
g. Completes tests or classwork in allotted time: 1  2  3  4  5
h. Completes homework on time: 1  2  3  4  5
i. Engages in behaviors that disrupt instruction or peers' learning: 1  2  3  4  5

Is [student's] behavior more problematic in some academic subjects or activities than in others?
Additional information on the student’s behavior or social skills that either facilitate or interfere with their learning or the classroom environment (follow up on items rated as problematic above)
5. Reading
This space is available to note if the student demonstrates difficulties in reading. If there are no indicated problems with reading, this section should be skipped.

Next I'd like to ask about [student's] skills in some specific areas related to reading, and whether they are below expectations, meeting expectations, or above expectations in each area at this time of the year.

Phonological and Phonemic Awareness: Able to identify sounds in words, rhyme, blend, segment, etc.
Alphabet Knowledge and Letter Recognition: Able to identify printed letters, able to correctly associate printed letters (and letter combinations) with sounds
Word Reading/Decoding: Reads words accurately; able to decode (i.e., sound out) unfamiliar words; reads grade-appropriate words with ease and automaticity
Reading Fluency: Able to read text smoothly, accurately, with expression
Reading Comprehension: Understands what is read; able to answer both literal and inferential questions from a passage; comprehends both narrative and expository texts
Vocabulary: Has age-appropriate knowledge of word meanings and definitions

If Word Reading/Decoding skills are a concern, what types of words does the student find challenging?
What types of words is the student more successful at reading?
How would you describe this student's listening (oral) comprehension skills—can they understand your directions, and understand stories or answer questions correctly after listening?
How is instructional time in reading typically divided between large group instruction, small group instruction, and partner or independent work?
Does this student also have difficulty with spelling?
6. Mathematics
This space is available to note if the student demonstrates difficulties with specific mathematics skill areas. If there are no previously indicated problems with mathematics, this section should be skipped.

Next I'd like to ask about [student's] skills in some specific areas related to math, and whether they are below expectations, meeting expectations, or above expectations in each area at this time of the year.

Early Numerical Competencies (Number Sense, Early Numeracy): Age/grade-appropriate skills and understanding in counting, number recognition, quantity discrimination, cardinality
Addition and Subtraction Math Facts: Grade-appropriate accuracy and fluency with addition and subtraction math facts within 20
Multidigit Addition and Subtraction Operations: Grade-appropriate skills in applying procedures/algorithms for accurately solving addition and subtraction problems
Multiplication and Division Math Facts: Grade-appropriate accuracy and fluency with multiplication and division math facts within 100
Multidigit Multiplication and Division Operations: Grade-appropriate skills in applying procedures/algorithms for accurately solving multiplication and division problems
Fractions, Decimals, Percent: Grade-appropriate understanding and skills in rational numbers including comparing magnitude, accurately completing operations, converting, etc.
Word Problem Solving Skills: Able to solve grade-appropriate word problems
Geometry and Measurement: Conceptual knowledge and ability to solve grade-appropriate geometry and measurement problems
Pre-Algebra and Algebra: Conceptual knowledge and ability to solve grade-appropriate pre-algebra and algebra operations

How is instructional time in math typically divided between large group instruction, small group instruction, and partner or independent work?
7. Writing This space is available to note if the student demonstrates difficulties with specific writing skill areas (Note: if a student has reading difficulties, it is very possible they have difficulties in writing as well).
Next I'd like to ask about [student's] skills in some specific areas related to writing, and whether they are above expectations, below expectations, or meeting expectations in this area at this time of the year.

Handwriting
Typing/Keyboarding (if applicable)
Spelling
Capitalization and/or punctuation
Grammar and syntax
Planning and Formulating Ideas Before Writing
Organization and Coherence
Story/passage length
Editing and Revising

What types of writing assignments are given at this time of year, and what types of skills are students expected to demonstrate?
Does the student have difficulty with low motivation to write, and/or self-regulation skills that affect their writing output and quality?

8. School MTSS/RTI Model
Information on the schoolwide multi-tiered system of support (MTSS) or response to intervention (RTI) model in academics and/or behavior, if one exists, can be obtained below. For efficiency, this information might be better obtained outside of the teacher interview.

What does the model look like: Grade levels covered, skill areas targeted, etc.
What Tier 2 interventions are available? Is there a Tier 3, and what does that entail?
How are students identified for Tier 2 or Tier 3 interventions (e.g., universal screening)?
How often is progress monitored for students receiving Tier 2 or Tier 3 interventions, and what measure(s) are used?
Who determines when students move between tiers or when interventions are adjusted, and on what basis?
9. Preliminary Hypothesis Formation (to be completed after the interview)
Primary area of difficulty:
Suspected skill deficits that are the reason for the difficulty:
Difficulties with behaviors or learning-related skills that may be contributing to the problem:
Possible environmental and instructional factors contributing to the problem:
Relative strengths (academic or social/behavioral) that may mitigate the problem:
Preliminary Hypothesis Statement Framework. This is meant as a guide to assist hypothesis writing. It will be refined and revised across the subsequent assessment. Separate hypotheses can be written for secondary areas of difficulty.
_______'s difficulties in [reading/mathematics/writing] are due to inadequate or underdeveloped skills in _______. These difficulties appear [or do not appear] to be related to the student's behaviors or learning-related skills, which may include _______. The student's difficulties appear [or do not appear] to be related to instructional or classroom environment factors, which may include _______.

Compared to their area(s) of difficulty, the student demonstrates relative strengths in _______.

(The framework is repeated on the form so that separate statements can be written for up to three areas of difficulty.)
APPENDIX 3B
Behavioral Observation of Students in Schools (BOSS)

Child Observed: _______   Academic Subject: _______   Date: _______   Observer: _______
Setting (circle one): ISW:TPsnt   ISW:TSmGp   SmGp:TPsnt   LgGp:TPsnt   Other: _______
Time of Observation: _______   Interval Length: _______

[The form provides a recording grid for 60 observation intervals. For each interval there are columns for AET and PET, scored by momentary time sample, and for OFT-M, OFT-V, OFT-P, and TDI, scored by partial interval. Every fifth interval (marked with an asterisk) is used for peer comparison. Summary rows labeled S, P, and T follow each block of 15 intervals.]

Summary:
Target Student: Total Intervals Observed; S and % for AET, PET, OFT-M, OFT-V, and OFT-P
Peer Comparison: S and % for AET, PET, OFT-M, OFT-V, and OFT-P
Teacher: S TDI, % TDI, Total Intervals Observed
APPENDIX 3C
Behavioral Observation of Students in Schools—Modified

Student: _______   Date: _______   School: _______   Teacher: _______
Time Start: _______   Time End: _______   Subject Observed and Activity: _______   Grouping/Format: _______

On-Task: Momentary time sample (score at start of interval). Student behavior that meets expectations for the situation (e.g., engaged with assigned task, attending to teacher or relevant work, waiting appropriately).
Disruptive: Partial interval (score any time during interval). Student action that interrupts classroom activity, interferes with instruction, or distracts other students (e.g., out of seat, playing with objects that distract others, acting aggressively, talking/yelling about things that are unrelated to classroom instruction).
Every fifth interval: Observe comparison peer (optional).

[The form provides a recording grid of 100 fifteen-second intervals, arranged in rows of 20, with a row for On-Task and a row for Disruptive beneath each set of intervals.]

Summary:
Target Student: # Intervals Occurred, # Intervals Observed, and Percentage = (occurred / observed) × 100, computed separately for On-Task and Disruptive
Peer Comparisons: On-Task and Disruptive
Comments:
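A minimal sketch of the percentage calculation used on the BOSS-Modified form is shown below (Python). The interval records are hypothetical; On-Task is scored by momentary time sample and Disruptive by partial interval, as described on the form.

```python
# Minimal sketch: computing summary percentages for the BOSS-Modified form.

def interval_percentage(occurred, observed):
    """Percentage = (intervals occurred / intervals observed) x 100."""
    return 0.0 if observed == 0 else occurred / observed * 100

# One True/False record per 15-second interval scored for the target student
# (hypothetical data; a real observation would include many more intervals).
on_task =    [True, True, False, True, True, False, True, True]
disruptive = [False, False, True, False, False, True, False, False]

print(f"On-Task:    {interval_percentage(sum(on_task), len(on_task)):.0f}%")
print(f"Disruptive: {interval_percentage(sum(disruptive), len(disruptive)):.0f}%")
```

The same calculation applies to the peer comparison intervals, which allows the target student's percentages to be judged against classroom norms collected during the same observation.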
CHAPTER 4
Step 2: Direct Assessment of Academic Skills and Assessing Instructional Placement
After the teacher interview, applicable rating scales, direct observations, student interview, and examination of permanent products have been completed, the evaluator is now ready to directly evaluate the student's academic skills. This step involves (1) directly assessing academic skills that, based on information collected in the previous step, are suspected to be reasons for the student's academic difficulties, and (2) assessing the student's placement in the curriculum of instruction. In many cases, this step is where the most important information will be obtained for identifying an initial intervention.

Shapiro's model of academic skills assessment emphasizes direct measurement of academic skills that are relevant to the student's area of difficulty and, as a result, serve as potential targets of intervention. These skills can be assessed using available tools that provide the most valuable and interpretable information for understanding the nature of the problem and what to do about it. Our use of the term direct assessment refers to measures that provide the evaluator with clear, unambiguous data that require minimal inference for understanding the student's performance on skills known to be important for the academic domain and that directly translate to intervention recommendations.

Knowing what skills to assess is based on understanding the keystone skills that underlie development and difficulties in reading, mathematics, and writing. Detailed discussions of these critical skills are provided in Chapter 2. When keystone skills are missing or underdeveloped, the acquisition of more complex skills "downstream" is made much more difficult. Identifying these skill gaps directly informs what interventions should target. We will refer again to the keystone models of reading, mathematics, and writing to indicate good starting points and a general roadmap for assessment.

This chapter is divided into three sections; each describes a general process and considerations for assessing reading, mathematics, and writing (spelling is embedded within reading and writing given its integral role in both areas). Each section describes the assessment of academic skills, which always focuses on critical, relevant, and practically
important skills believed to be the cause of the problem and potential targets for intervention. Skills are not assessed simply because they are interesting. Although some measures may be used more than others, there is no standard battery of tests that are universally administered to all students. The skills assessed and the measures used should be considered for each individual case. We describe what skills to consider and provide examples of measures available for evaluating those skills. Both curriculum-based measures and standardized, norm-referenced measures are provided as examples. It matters less that measures are curriculum-based or standardized; what matters most is that the selected measures offer information valuable for designing interventions. When used strategically, they can complement each other and provide a fuller picture of a student's academic difficulties than if only one type was used exclusively.

However, we continually stress the need for parsimony in assessment. Although the examiner should strive for a comprehensive understanding of the student's academic skill difficulties, assessment should be limited to what is necessary to develop a data-based, targeted intervention plan. A more focused, parsimonious assessment makes the interpretation of data easier and helps get intervention implemented faster. We also stress that this stage is still the beginning of the assessment process. In Shapiro's model, assessment is merely a vehicle to arrive at an intervention, and the intervention itself is viewed as a form of assessment to validate the hypothesis about the causes of the student's academic difficulty. The intervention is subsequently adjusted when progress monitoring data indicate that changes are needed to improve student progress.

Across this chapter, and other portions of this text, we frequently describe the use of CBM tools. The reason is that, among the measurement models that originated within the curriculum-based domain, the CBM framework has the most substantial research base documenting its technical properties and utility. The development of CBM began with an earlier effort referred to as data-based program modification (Deno & Mirkin, 1977). This program described a methodology for special educators and interventionists that used skill probes taken directly from the students' curriculum, and repeatedly administered over time, as a strategy for determining student progress in academic subjects. CBM has always been based on a general outcomes measurement (GOM) perspective for monitoring student progress (L. S. Fuchs & Deno, 1991). In contrast to a specific-subskill mastery approach, a GOM approach involves the measurement of skills that themselves require proficiency with multiple subskills and are thus highly indicative of students' achievement and progress within the overall academic domain. An important objective of initial development work with CBM involved the identification of general outcome indicator skills that demonstrated strong technical adequacy in terms of their reliability and validity, and thus held high utility for monitoring progress in reading, mathematics, spelling, and written expression (Deno, Marston, Mirkin, et al., 1982; Deno, Mirkin, Lowry, et al., 1980; Deno, Mirkin, & Chiang, 1982).
Although CBM measures were developed for monitoring students’ progress during intervention, the GOM approach to creating CBM measures made them useful for other purposes (Deno, 2003), such as universal screening tools for identifying students at risk for academic difficulties or low performance on high-stakes tests, and identifying students’ relative standing when comparing their achievement to normative data. Student performance on CBM measures can also provide information about their level of proficiency and related difficulties in the academic domain, thus providing evaluators with a source of data when used alongside other measures in an academic assessment. Additional discussion is provided in Chapter 7, where we describe the use of CBM in its originally intended role: progress monitoring.
The GOM approach used in the creation of CBM measures also allows them to be more easily standardized, and thus able to be used across students and occasions. Today, most commercially available measures developed under the CBM framework, like those available through Acadience, AIMSweb, DIBELS, easyCBM, and FastBridge, are standardized because they all have established rules and procedures for maintaining consistency of administration and scoring. Similarly, the norm-referenced term no longer distinguishes CBM measures from other types because most of the main vendors of CBM tools now maintain robust normative datasets on their measures. This normative information can be quite useful for understanding the severity of a student's difficulty, setting goals, and understanding other aspects of their academic difficulties. Nevertheless, despite the fact that standardized and norm-referenced don't differentiate CBM tools any longer, these terms are still readily understood in the field. Therefore, across the chapter, we refer to standardized measures as those that are commercially published as a "kit" and have been commonly associated with the term over the years, such as the Woodcock–Johnson Tests of Achievement. We refer to CBM measures as those developed under the CBM framework, designed to be indicators of achievement and useful for monitoring progress over time.
MAJOR OBJECTIVES IN DIRECTLY ASSESSING ACADEMIC SKILLS

Conducting a direct evaluation of academic skills has several objectives, all of which help inform recommendations for intervention:

1. To identify the student's relative strengths and weaknesses in the academic skill domain. There is a particular emphasis on identifying skill gaps, lack of automaticity, or inadequate development in foundational skills that directly affect subsequent, more complex skills. Identifying where the student's difficulties appear to originate highlights targets for intervention.
2. To determine the severity of the student's difficulties, which informs the intensity, amount, and nature of the resources that should be allocated to intervention.
3. To determine the student's instructional level and the appropriateness of their placement in the curriculum of instruction, which indicates the level and nature of material that may be most beneficial for intervention.
4. To establish baseline levels (i.e., present levels of performance) that can serve as comparison points for monitoring progress once an intervention is implemented.
DIRECT ASSESSMENT OF READING SKILLS The approach to assessing reading skills described in this section utilizes the keystone model of reading discussed in detail in Chapter 2 (see Figure 2.3). To recap, reading comprehension is the goal and the very reason we read. Foundational skills interact and coalesce to ultimately make reading comprehension possible, but problems with any of the foundational skills can cause problems for everything else that happens after it. For example, difficulties with phonemic awareness impede learning to connect letters to sounds, in turn, difficulties with letter–sound correspondence and segmenting and blending sounds impair the acquisition of reading and spelling words. Difficulty with reading words accurately affects one’s ability to read text with efficiency, thereby limiting
one’s ability to understand what is read. These difficulties with foundational skills have downstream effects; word reading development is limited or severely impaired, and in turn, these difficulties impede the ability to read text with fluency and understand it. For a student with adequate skills in reading words and text, but inadequate vocabulary knowledge or other problems with language in general, reading comprehension difficulties can be explained by the student’s difficulties in processing the language represented by the print. Thus, the goal of an assessment of reading difficulties is to identify the point at which things have broken down. In this section, we describe a model of reading assessment aimed at identifying the cause of a student’s reading difficulty that is situated in a reading development perspective. Figure 4.1 provides a roadmap for this process, in which a suggested sequence of assessment is overlaid on the keystone model of reading. Readers will notice that the model for reading assessment provided in this section does not start at the beginning with phonemic awareness and work up, nor does it suggest administering a set of measures just because they are contained in a test battery. Rather, we recommend a more strategic approach that starts with assessing oral reading and using the results to guide the next steps.
Assessing Oral Reading as a Starting Point For most referrals of students in grades 1 through 12 with reading difficulties, an excellent place to begin an assessment of a student’s reading skills is to start by measuring the student’s oral reading. Based on the findings, this allows the examiner to work backward or forward to determine the source(s) of the student’s reading problem. When reading connected text is inaccurate and dysfluent, the evaluator can work backward to determine if difficulties with foundational skills that make reading efficiency possible are the source of the problem. Or, if text-reading accuracy and efficiency appear adequate, the
evaluator can work forward to examine language, background knowledge, and other text-processing skills involved in comprehension.

FIGURE 4.1. Suggested approach to assessing reading skills difficulties, overlaid on the keystone model of reading. (For students in grades 1–12, start by assessing oral reading; for kindergartners, start with early literacy skills. If accuracy or efficiency is a problem, assess word reading and work backward to find the source of the problem, considering language as well; if not, assess vocabulary, listening comprehension, and reading comprehension.)

The exception to this recommendation is when the student is in the very basic stages of reading development, such as kindergarten or early first grade. For these students, assessment can begin with early literacy skills, described later.

Recalling our discussion from Chapter 2, reading efficiency (often referred to as reading fluency) refers to accurate, effortless reading of connected text. In this edition of the text, I (Clemens) have tried to minimize the use of "fluency" in reference to the measurement of reading skills to try to reduce the problematic conflation of fluency with "speed." Reading rate (i.e., words read correctly per minute) is one of the primary ways that oral reading is measured, but rather than think about oral reading in terms of speed, it is more productive to consider it an index of how efficiently a student orchestrates the multiple skill and knowledge sources needed to read connected text with ease. Skillful oral reading is a product of mastery in basic skills that allow printed words to be connected to pronunciations and meanings effortlessly. This automaticity facilitates reading comprehension by freeing cognitive resources so that they can be devoted to connecting and integrating ideas in the text with knowledge (recall the "learning to drive" analogy). Reading connected text is enhanced by reading comprehension, which aids word recognition and makes proper inflection possible. Rather than a single discrete skill, skilled, efficient oral reading represents the successful orchestration of multiple interacting skills. For these reasons, "reading fluency" is used sparingly across the chapter in favor of "reading efficiency" or simply "oral reading."

These are also the reasons why numerous studies have demonstrated robust correlations between oral reading measures and other measures of overall reading proficiency (which includes comprehension), concurrently and on a predictive basis, across grade levels (see reviews by L. S. Fuchs, D. Fuchs, Hosp, et al., 2001; Reschly et al., 2009; Shinn et al., 1992). It is also why the measurement of oral reading became the most studied and relied-upon CBM tool, with studies demonstrating the efficiency, reliability, flexibility, and utility of the metric for monitoring reading progress (Wayman et al., 2007).

Measuring oral reading is an efficient but powerful window into students' overall reading skills, from late first grade and beyond. It says a lot about a student's reading skills in a short time. Most notably, measuring oral reading can indicate whether word reading skills are a problem. Although reading fluency is a product of multiple things, studies indicate that the primary driver is accuracy and automaticity in reading individual words (Jenkins et al., 2003; Eason et al., 2013). For example, a fourth-grade student who reads text slowly and laboriously, and makes frequent errors, immediately indicates problems with word-level skills. On the other hand, a fourth grader with reading accuracy over 98% and reading rates within the average range indicates that reading comprehension difficulties are due to something other than basic word or text reading skills (possibly inadequate language or background knowledge, or problems with attention). Thus, problems observed in oral reading are primarily a symptom of inaccurate and inefficient skills in word recognition.
Evaluating oral reading can help an evaluator quickly refine a hypothesis about the causes of a student's reading difficulties.
Measuring Oral Reading

Table 4.1 includes examples of published tools for measuring oral reading. The CBM measures listed are also available for monitoring progress, thus providing a seamless connection between assessment and monitoring progress during the intervention.
TABLE 4.1. Examples of Measures for Assessing Oral Reading

Provider/Test | Subtest name/measure | Available for progress monitoring?

CBM and benchmark assessment providers
Acadience | Oral Reading Fluency | Yes
AIMSweb | CBMreading | Yes
easyCBM | Passage Reading Fluency | Yes
DIBELS | Oral Reading Fluency | Yes
FastBridge | CBMreading | Yes
iReady | Passage Reading Fluency | Yes

Norm-referenced, standardized
Feifer Assessment of Reading | Reading Fluency | No
Gray Oral Reading Tests (5th ed.) | Oral Reading Rate | No
Wechsler Individual Achievement Test (4th ed.) | Oral Reading Fluency | No
Woodcock–Johnson IV Tests of Achievement | Oral Reading Fluency | No
Woodcock Reading Mastery Test (3rd ed.) | Oral Reading Fluency | No
Note. This table provides a sample of the measures available for assessing oral reading. It is not a complete list, and there are other measures that, due to space constraints, could not be included.
Using an oral reading subtest from a standardized test battery may be advantageous if other subtests from the same battery are administered. A disadvantage is that these subtests cannot be used for progress monitoring, so a passage set from a CBM publisher will need to be obtained for monitoring progress in oral reading.

Originally, CBM measures were developed by each user by sampling passages of text from a student's reading curriculum. A significant step in the evolution of reading CBM occurred when researchers investigated whether it was necessary that reading passages be derived directly from the curriculum of instruction, or if a generic set of passages independent of the curriculum demonstrated equivalent reliability and validity. Sampling directly from a curriculum takes time and can result in significant variability in the difficulty of passages. In reviewing extensive work on this issue, L. S. Fuchs and Deno (1994) concluded it is not necessary for CBM oral reading measures to be derived directly from the material of instruction and that standardized passage sets are equally valid indicators of reading proficiency as those derived directly from the curriculum of instruction. They noted several benefits to using a standard set of passages, including greater ability to control for passage difficulty, and more applicability across students and curricula. However, they also noted three critical considerations: (1) the probes in a passage set should be as equivalent as possible, (2) the measures should reflect valid indices of the overall academic objectives, and (3) the measures should allow for both quantitative summaries of achievement and progress (i.e., scores can be tallied and tracked over time to indicate when instruction should be adjusted) and qualitative information that can be gleaned to provide information on how instruction may need to be adjusted, when necessary. Similar advantages to generic passages were observed by Powell-Smith and Bradley-Klug (2001). Results from these studies suggested that it is important to use passages that are controlled for grade-level readability. The practice of sampling from the students' reading
series is often not advised, given the effort required and the high variability in the resulting passages (Ardoin et al., 2005; Betts et al., 2009; Christ & Ardoin, 2009; Francis et al., 2008). It is also no longer necessary when several high-quality reading passage sets have been developed and are available to educators, in some cases free of cost, through assessment suites such as Acadience, AIMSweb, easyCBM, DIBELS, FastBridge, and iReady. However, there may be select situations in which either standardized passages are not available or passages drawn directly from the student's reading curriculum are preferred. For this reason, the procedures for constructing oral reading probes (as included in previous editions of this text) are included in Appendix 4A at the end of this chapter.

Oral reading measures are administered by having the student read aloud while the examiner monitors the time and records reading errors. Measures are commonly scored in terms of the number of words read correctly in an allotted time (e.g., 1 minute), or the total time required to read the passage so that the number of words read correctly per minute is determined. Correctly read words include words pronounced correctly for the context (e.g., pronouncing read as /red/ in "The girl read the book" would be correct, whereas pronouncing read as /reed/ would be incorrect), reading abbreviations as they would be pronounced in conversation (e.g., reading "Dr." as "doctor" and not "D-R"), and so on. Errors include mispronunciations (that were not a result of differences in regional dialect, the child's accent, or speech/articulation issues), skipped words, transposed words (e.g., reading "the sat cat" instead of "the cat sat"), and instances in which the student hesitates or struggles with a word for longer than a prespecified amount of time (most measures allow 3 seconds per word). Feedback is not given for correctly read words, and errors are not corrected because doing so could invalidate the use of the measure in the future. However, on most measures the examiner is instructed to provide the student with any word over which they hesitate beyond the prespecified amount of time. Each measure may have specific scoring rules, and as with any measure, users should always be well-versed in the administration and scoring directions before administering them.
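As a concrete illustration of the metrics described above, the following minimal sketch (Python, with hypothetical numbers) computes words correct per minute (WCPM) and accuracy from a timed reading; the scoring rules of the specific measure being used always take precedence.

```python
# Minimal sketch: computing words correct per minute (WCPM) and accuracy from
# an oral reading administration. Numbers are hypothetical.

def oral_reading_summary(words_attempted, errors, seconds):
    """Return (WCPM, percent accuracy) for one timed passage reading."""
    words_correct = words_attempted - errors
    wcpm = words_correct / (seconds / 60)
    accuracy = words_correct / words_attempted * 100
    return round(wcpm), round(accuracy, 1)

# Example: student attempted 83 words in a 1-minute timing with 9 errors.
wcpm, accuracy = oral_reading_summary(words_attempted=83, errors=9, seconds=60)
print(f"WCPM: {wcpm}, Accuracy: {accuracy}%")   # WCPM: 74, Accuracy: 89.2%
```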
Appropriately Interpreting Results of Oral Reading Measures

Scores from oral reading measures are typically summarized in words read correctly per minute. Normative data are available for all of the measures reported in Table 4.1, which allow raw scores to be converted to percentile scores based on the student's grade and the time of year the data were collected. Although perspectives vary regarding what percentile level constitutes "difficulties," some consider the 25th percentile as the lower bound of the average range, and thus performance below the 25th percentile constitutes low achievement or difficulty status (e.g., Suggate, 2016). However, scores not far above the 25th percentile (e.g., 25th to 35th percentiles) should still be considered carefully and may reflect low performance. Some measures may include cut scores that indicate the degree of risk reflected by the student's scores, such as the risk categories provided by Acadience and DIBELS.

Normative growth in oral reading is displayed in Figure 4.2, which is based on data compiled by Hasbrouck and Tindal (2017). They aggregated data from over 2 million students nationwide between 2010 and 2014. In Figure 4.2, scores at each grade level reflect spring (year-end) data, thereby providing an index for where students finish each school year. Readers will note that, as in so many aspects of human growth and development, oral reading growth is more rapid initially and levels off over time.

FIGURE 4.2. Normative growth in oral reading (words correct per minute) by percentile level (10th, 25th, 50th, 75th, and 90th percentiles), spring scores for Grades 1–6, based on data aggregated by Hasbrouck and Tindal (2017) with over 2 million students nationwide. The data were reproduced in graphed form with permission from the authors.
The leveling off of oral reading rates as students get older is a reflection of the fact that reading needs only to be fast enough to not impede comprehension. Readers do not continuously increase in rate as reading skills improve, in the same way that people do not continuously increase in speaking rate as language skills improve. Once reading is efficient enough, additional gains in reading rate are not necessary and do not result in further improvements in comprehension.

The student's reading accuracy (i.e., the percentage of words read correctly) also informs interpretation of oral reading data. Reading errors are a hallmark of reading difficulties and will often be a reason for poor text reading. Reading accuracy below 90% should be considered problematic, as 90% accuracy means that the student is making 1 error for every 10 words, on average. This is not a hard-and-fast rule, however; a student may achieve something closer to 95% accuracy and still warrant concern. Struggling to pronounce words, and mispronunciations of words, are the most obvious and common types of errors, but evaluators should also look for students who frequently skip words, skip entire lines, reverse words (e.g., saying "the dog red" when the text reads "the red dog"), or need to frequently self-correct. Any of these issues point to the need to consider word reading skills as a cause of the student's reading difficulties.

Noting qualitative features of oral reading provides insight into a student's comprehension and language processing while reading. Appropriate pauses and inflection based on punctuation (e.g., rising pitch for questions) reflect the student's understanding of how punctuation conveys meaning and emphasis in language. Reading with expression and prosody (i.e., changes in tone, timing, stress, and intonation in reading consistent
with the action, dialogue, or events in the text) is primarily the result of reading comprehension because it involves recognizing characters' intent and emotional states and the meaning and importance of events in the passage.
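The interpretive guidelines above (the 25th percentile as a rough lower bound of the average range, and accuracy below 90% as a cause for concern) can be summarized in a simple decision sketch like the one below (Python). The thresholds are illustrative heuristics, not fixed rules; the risk categories published by the measure's vendor, or local norms, should take precedence.

```python
# Minimal sketch: applying the interpretive heuristics described above to an
# oral reading result. Thresholds are guidelines, not fixed cut points.

def flag_oral_reading(wcpm_percentile, accuracy_pct):
    """Return a list of concerns suggested by the percentile and accuracy."""
    flags = []
    if wcpm_percentile < 25:
        flags.append("oral reading rate below the 25th percentile")
    elif wcpm_percentile < 35:
        flags.append("oral reading rate in the low average range; monitor")
    if accuracy_pct < 90:
        flags.append("accuracy below 90% (frequent word-reading errors)")
    return flags or ["no concerns flagged on these indices"]

# Example using the hypothetical result from the earlier sketch.
print(flag_oral_reading(wcpm_percentile=18, accuracy_pct=89.2))
```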
Misinterpretations of Oral Reading The way that oral reading is typically scored—words read correctly per minute (WCPM)— leaves it prone to misinterpretation. WCPM is easily measured and highly observable. It is indeed a valid and reliable indicator of overall reading proficiency, but also one that is deceptively simple. Its simplicity betrays the complexity of the overall skill it represents, and problems arise when reading rate becomes the sole focus of intervention (something that early CBM researchers never intended). For a proficient reader, fluent reading is usually brisk and quick, but that should not be conflated with “fast.” As Perfetti (2007) noted, the key aspect in reading fluency is efficiency, meaning that the processes of reading text have become efficient and automatic. Efficient reading requires little conscious effort because word spellings are automatically linked to pronunciations and meanings. This resource efficiency often translates to a higher rate. Similar to other behaviors that develop through practice, such as typing, playing piano, doing math problems, or turning a double play in baseball, the rate at which one can perform the behavior usually increases as one becomes more proficient with the skills involved. In reading, it is important to remember that fluent reading reflects reading ease. Reading speed can thus be viewed as a “symptom” of how well learned and well orchestrated the subskill processes are that underlie it. But rather than reading with speed, fluent reading is more accurately viewed as reading with ease. The problem is that when reading fluency is erroneously viewed as only about “speed,” a student’s rate of reading text is more likely to be viewed as the primary assessment target, and consequently, interventions are more likely to be geared toward treating a symptom, not the cause of the reading problem. For example, some perspectives have viewed reading speed as the target behavior that should be changed. These perspectives have assumed that reading rate changes immediately in response to adjusting an intervention or environmental contingencies, like how some behaviors (e.g., calling out in class, on-task) can change directly following manipulations to antecedent or consequence conditions. These perspectives are represented in assessment procedures referred to as brief experimental analysis (BEA; e.g., Daly et al., 1999). Similar to a functional analysis of behavior (Iwata et al., 1990), BEA involves the systematic implementation of different conditions in a single-case research design. The student’s immediate change in reading is measured in response to each condition. To be clear, experimental analysis types of procedures have been implemented to specifically discriminate skill from performance deficits (i.e., “can’t do vs. won’t do”), which we discuss more specifically at the end of this chapter. BEA has also been applied, albeit less often, to mathematics (McKevett & Codding, 2021), early literacy (e.g., Wagner et al., 2017), and writing (e.g., Parker et al., 2012). We focus on how BEA has been used to identify interventions specifically for reading fluency, as it has been applied most frequently (for reviews, see Burns et al., 2017; Burns & Wagner, 2008). 
In BEA applications to reading fluency, the conditions typically include repeated reading, listening passage preview, phrase drill or other error correction procedures, goal setting, and in some cases, a reward condition in which the student is awarded tangible or edible reinforcement for reaching a prespecified reading rate goal. After systematically presenting and repeating each condition, the condition associated with the
highest oral reading scores is typically viewed as the ideal intervention for that particular student. The data-based, hypothesis-driven approach to BEA is laudable. However, there are several problems with BEA for reading from theoretical and empirical perspectives.

First, BEA studies tend to view reading speed as the sole variable of interest and fail to consider that skilled oral reading is the result of proficiency with multiple subskills. Consequently, the strategies commonly tested (e.g., passage preview, repeated reading, reward conditions) are all aimed at increasing rate and not determining (or addressing) the reasons why a student demonstrates inaccurate or inefficient oral reading.

Second, BEA mostly assumes that reading rate is a discrete behavior that can be changed immediately. It assumes that, given the right intervention, a struggling reader can immediately become a better reader than they were 1 minute ago, and thus the key is finding a "trick" or previously unidentified intervention that will instantaneously improve their reading skills. What appear to be within-session gains in reading are used as indices for finding this unique intervention. The assumption represents an oversight of how reading proficiency develops, and what reading efficiency really means. As we have discussed at length (see Chapter 2), skilled oral reading is a multiply determined process that takes years to develop, driven in large part by the gradual accumulation of linkages between spellings and pronunciations (i.e., orthographic mapping) that makes automatic word recognition possible. Reading efficiency in particular is more difficult to change than other skills, such as pseudoword decoding (e.g., Flynn et al., 2012; Scammacca et al., 2007; Torgesen et al., 2001; Wanzek et al., 2013), because oral reading is a reflection of overall reading proficiency. Improving reading skills can require considerable intervention intensity and time (e.g., O'Connor et al., 2010). Unlike a behavior such as calling out, off-task, or aggression that can change very quickly by adjusting an antecedent or consequence to the behavior, oral reading should not be expected to improve immediately.

Third, BEA studies tend to ignore the role of reading comprehension in reading fluency. Reading efficiency makes comprehension possible by removing effortful word recognition as a barrier, but this does not mean that additional increases in reading speed lead to better comprehension. Reading only needs to be fast enough to not impede understanding. In other words, once reading is efficient enough, further gains in reading rate do not offer additional advantages to comprehension. For some students, improvements in fluency have no additional benefit for overall reading comprehension (O'Connor, 2018). Additionally, although reading fluency is primarily influenced by word-level efficiency, reading comprehension can ease word recognition processes (Jenkins et al., 2003), but trying to read too fast can disrupt comprehension (Sabatini et al., 2014). Reading rate can also serve as a compensatory mechanism: Readers tend to slow down when they do not understand what they are reading (Walczyk et al., 2007; Wallot et al., 2014).

Fourth, BEA studies in reading have rarely considered the variability that occurs in measuring oral reading.
Scores for the same student can vary considerably from passage to passage (Ardoin & Christ, 2009), and this variability affects the confidence of decisions regarding a best-performing intervention in a BEA (Mercer, Harpole, et al., 2012). Burns et al. (2017), reviewing research on BEA, observed that it was common for reading fluency scores between two or more conditions in a BEA to fall within the standard error of measurement. This prompts questions regarding the fundamental decision framework for BEA. Even if one accepts the notion that oral reading skills can be changed immediately, if reading fluency scores observed in two or more conditions fall within the margin
of error associated with the measures used to determine a best-performing intervention, the student's scores cannot be reliably assumed to be different even if a score in one condition appears to be higher than in another (a brief numerical illustration follows this discussion).

Fifth, BEA has been proposed as a way to help differentiate skill deficits from performance deficits (i.e., "can't do vs. won't do") by including a condition that incentivizes performance. This may have merit in academic areas in which performance is contingent on sustained effort and a willingness to complete problems, as in mathematics (e.g., Codding et al., 2009) or writing (e.g., Parker et al., 2012), where it is possible for students to understand what to do and how to do it, but choose not to. Performance deficits can certainly exist in areas that involve reading, such as refusal to complete assignments or weak effort attending to text when comprehension is being evaluated. We discuss this point further at the end of the chapter. But reading efficiency is different; it involves recognizing words accurately with virtually no effort, to the point that it becomes almost like a reflex and outside of one's control. It is difficult to conceive of a performance deficit (i.e., "won't do") in oral reading; essentially, this would be a situation in which a student is able to read words with ease and efficiency but chooses to read slowly, laboriously, and with errors.

In summary, BEA for oral reading and other assessment approaches focused on reading speed rest on the problematic interpretation that a measurement metric (reading rate) is itself the skill that needs to be improved. Kamhi (2014) posited that, in education, notions of performance and learning are often conflated. In BEA, within-session performance is incorrectly assumed to be the same as actual reading improvement, and the resulting interventions are aimed at treating a symptom and not the problem. Rather, skilled oral reading is the outcome of a successful orchestration of the skills that undergird and influence it. Problems with oral reading mean that something is not right with the skills that make it possible; thus, intervention should target the source of the problem rather than its symptom. The measurement of oral reading is then used to help determine whether those interventions are successful over time.
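To make the measurement-error point above concrete, the following is a minimal sketch, not drawn from the text, of comparing two BEA condition scores against the standard error of measurement (SEM). The SEM value and the words-correct-per-minute scores are hypothetical, and the 1.96 multiplier simply yields an approximate 95% confidence interval.

def confidence_interval(score: float, sem: float, z: float = 1.96) -> tuple:
    """Approximate confidence interval around an observed score, given the SEM."""
    return (score - z * sem, score + z * sem)

def reliably_different(score_a: float, score_b: float, sem: float) -> bool:
    """Treat two scores as different only if their intervals do not overlap."""
    low_a, high_a = confidence_interval(score_a, sem)
    low_b, high_b = confidence_interval(score_b, sem)
    return high_a < low_b or high_b < low_a

# Hypothetical words-correct-per-minute scores from two BEA conditions
repeated_reading = 62
listening_preview = 68
sem_wcpm = 8  # hypothetical SEM for the passage set

print(confidence_interval(repeated_reading, sem_wcpm))                    # roughly (46.3, 77.7)
print(reliably_different(repeated_reading, listening_preview, sem_wcpm))  # False

Under these hypothetical numbers, the 6-point advantage for listening passage preview falls well within measurement error, which is the situation Burns et al. (2017) observed to be common across BEA conditions.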
Summary: Interpreting Oral Reading Measures For a small number of students, poor oral reading may represent a rate deficit, meaning that they read words very accurately but, either through lack of practice and exposure or an underlying disability, struggle to read text efficiently. However, a more frequent occurrence is that problems with oral reading are the product of difficulties at the word level (Fletcher et al., 2019), the most common form of reading difficulty. Therefore, problems with oral reading, marked by slow, laborious reading, frequent mispronunciations, skipped words, and skipped lines, signal the presence of word-level reading difficulties. As illustrated in Figure 4.1 earlier in the chapter, the evaluator then assesses word reading, working backward to determine the source of the problem.
Assessing Word Reading Difficulties Word reading difficulties (i.e., problems reading words accurately and efficiently) are the primary and most common cause of chronic reading failure (Adams, 1994; Perfetti, 1985; Share & Stanovich, 1995) and the most frequent reason that students experience difficulties with reading comprehension (Ritchey et al., 2015; Shankweiler et al., 1999). The discussion that follows includes any students with word reading difficulties. This
might be students who have been identified with a specific learning disability (SLD) in basic reading. Some schools and districts use the term dyslexia to identify students with this skill profile, but it should be noted that in most of the research community, "dyslexia" is synonymous with word-level reading disability or SLD in basic reading. Our discussion of word-level difficulties also includes students who struggle in this area but have not been formally identified with SLD or dyslexia. Ultimately, labels and diagnostic categories mean very little for reading intervention. The skill profiles, root causes of the reading difficulties, and most importantly, the best instructional and intervention approaches rarely differ based on whether a student has been formally diagnosed.

As noted in the previous section, oral reading measures will provide the most direct indication of a need to examine word reading more specifically. Preliminary information indicating word-level difficulties may also be evident from the teacher interview. The Teacher Interview Form asks specific questions regarding the student's skills in reading words with accuracy and efficiency, as well as their skills in reading different types of words. Information pertaining to word reading accuracy and efficiency may also be present in the referral documents or previously collected measures, such as screening or benchmark tests. Referral concerns may also include difficulties retaining word reading skills targeted in instruction.

Word reading difficulties are also something the evaluator should rule out even when students are referred for "reading comprehension" difficulties. It is possible that teachers of students in middle or later elementary grades (and beyond) may not recognize that a student has underlying word reading difficulties. Evidence suggests that teachers tend to overestimate students' word and text reading fluency, especially when the students have reading comprehension difficulties (Hamilton & Shinn, 2003; Meisinger et al., 2009), because comprehension problems are often most salient in instructional contexts for older students. It is also possible that some students can get by in early grades with some basic decoding skills, but their unresolved word reading difficulties emerge in later elementary or middle-school grades, where complex and multisyllabic words are more common in assigned texts. Thus, even for students referred for difficulties in reading comprehension, their accuracy and efficiency in word and text reading should be considered.
Assessing Word Reading Skills: Measures and Important Considerations Examples of CBM and standardized measures that can be used for evaluating students’ word reading skills are listed in Table 4.2. The table also indicates whether they measure reading real words or pseudowords, and whether the measures are timed. There are several relevant considerations for selecting measures, as follows. THE IMPORTANCE OF MEASURING DECONTEXTUALIZED WORD READING
One important consideration in the assessment of word reading skills is evaluating students' skill in reading words out of context, meaning that words are read outside of connected text (i.e., words in isolation). Measures that have students read words in list form, or one at a time, are good ways of measuring decontextualized word reading. Connected text provides students with comprehension information that can aid their reading of less familiar words, prompt self-corrections, or confirm a tentative pronunciation. When reading words outside of connected text, students must rely completely on their word reading skills (i.e., the combination of their phonemic awareness,
TABLE 4.2. Examples of Measures for Assessing Word Reading Skills

CBM vendors and measures
  Acadience: Nonsense Word Fluency (pseudowords; timed)
  AIMSweb Plus: Letter–Word–Sound Fluency (timed)
  AIMSweb Plus: Word Reading Fluency (real words; timed)
  DIBELS: Nonsense Word Fluency (pseudowords; timed)
  DIBELS: Word Reading Fluency (real words; timed)
  easyCBM: Word Reading Fluency (real words; timed)
  FastBridge: Decodable Word Reading (real words; timed)
  FastBridge: Nonsense Word Reading (pseudowords; timed)
  FastBridge: Sight Word Reading (real words; timed)
  Fuchs Research Group: Word Identification Fluency (real words; timed)
  iReady: Pseudoword Decoding Fluency (pseudowords; timed)
  iReady: Word Recognition Fluency (real words; timed)

Norm-referenced, standardized tests
  Diagnostic Assessments of Reading, 2nd ed. (untimed)
  Feifer Assessment of Reading (timed and untimed)
  Kaufman Test of Educational Achievement, 3rd ed. (real words and pseudowords; timed and untimed)
  Phonological Awareness Test 2, Norm Update (untimed)
  Process Assessment of the Learner, 2nd ed. (untimed)
  Test of Word Reading Efficiency, 2nd ed. (real words and pseudowords; timed)
  Wechsler Individual Achievement Test, 4th ed. (timed and untimed)
  Wide Range Achievement Test, 5th ed. (untimed)
  Woodcock–Johnson IV Tests of Achievement (timed and untimed)
  Woodcock Reading Mastery Test, 3rd ed. (real words and pseudowords; untimed)
  Word Identification and Spelling Test (real words and pseudowords; untimed)
Note. "Timed" = measure is timed and typically scored in terms of words correct per minute, thus providing an index of accuracy and efficiency; "Untimed" = measure is not timed, thus providing an index of accuracy. This table provides a sample of the measures available for assessing word reading skills. It is not a complete list, and there are other measures that, due to space constraints, could not be included.
letter–sound knowledge, and orthographic knowledge) to pronounce words correctly. Therefore, measures of decontextualized word reading provide a purer form of word reading assessment. TIMED VERSUS UNTIMED WORD READING MEASURES
The second important consideration in evaluating word reading skills is whether the measure is timed or not. Some measures, like Word Reading Fluency from easyCBM or the Test of Word Reading Efficiency (Second Edition; TOWRE-2), involve timing students
while they read words, and are scored in terms of the number of words the student reads correctly within the time limit. Other measures, like the Word Identification or Word Attack subtests from the Woodcock–Johnson tests, are untimed. Information about a student's word reading skills can be gleaned from both timed and untimed measures. Both timed and untimed measures provide data on word reading accuracy because even timed measures like the TOWRE-2 are scored based on the number of words read correctly in the allotted time. Untimed measures allow the examiner to observe the student's word reading skills at a slower pace, which can help reveal skill gaps and error patterns that may have been missed in a timed assessment.

There are considerable advantages to timed measures of word reading. Word reading proficiency is defined not just by accuracy, but by efficiency. Skilled readers can look at a word and read it with near immediacy. Recalling our discussion of word reading development, the critical aspect that helps make text reading efficiency and reading comprehension possible is the ability to read individual words with ease. Likewise, a lack of word reading efficiency is a hallmark feature and unifying characteristic of word-level reading difficulties and disability (i.e., dyslexia) across written languages (Ziegler et al., 2003). Because word reading efficiency is so essential for the development of reading proficiency, it is important that any assessment for students with word reading difficulties include, at a minimum, a timed measure of word reading that is scored in terms of the number of words read correctly within a time limit. REAL WORDS VERSUS PSEUDOWORDS
The third important consideration when assessing word reading skills is whether to use measures that include real words or pseudowords. Pseudoword reading measures, such as Nonsense Word Fluency from the DIBELS and Word Attack from the Woodcock–Johnson tests, include nonsense words that are wholly "decodable" and can be read using knowledge of letter–sound correspondence and common spelling patterns. Real word reading measures typically include common or high-frequency words, which may include both phonetically regular and irregular words. On some measures, words increase in difficulty to include complex, multisyllabic words common in academic texts. On both types, the words in the lists are generally meant to reflect a sample of words representative of spelling patterns and words that students will typically encounter in printed English.

There are advantages to using both real word and pseudoword measures in a reading assessment. Their results can complement each other. Measures that use real words provide the examiner with information on how well students have acquired (or are in the process of acquiring) the orthography of printed English. They assess the extent to which students can read words they have seen before, as well as words they have not encountered but that possess spelling patterns similar to those of words they have learned. On some word reading measures, the words increase in difficulty across the list, which provides an opportunity to examine the student's skills in reading longer, multisyllabic words. Pseudoword reading measures, on the other hand, evaluate students' skills in decoding words that they have never seen before. Specifically, they test skills in phonemic decoding because students must rely on nothing but their knowledge of letter–sound correspondence, letter combinations (e.g., ea, ch, oo), and knowledge of spelling "rules" and patterns (e.g., silent-e, spelling patterns such as -ing, -tion) to pronounce them correctly. Readers must be able to apply this knowledge to unfamiliar words, a skill that increases in importance across grades as the texts they read become increasingly complex.
ASSESSING WORD READING WITH CBM VERSUS NORM-REFERENCED, STANDARDIZED TESTS
As noted in Table 4.2, both CBM and standardized measures are available for assessing word reading. Although useful information can be obtained from both types, standardized measures are sometimes better for an initial assessment because they usually include a broader set of words and spelling patterns. CBM measures, on the other hand, tend to be more controlled in terms of the words contained in the measures, which makes them better suited for monitoring progress. Standardized and CBM measures of word reading can both be used in an assessment.

An additional point is important regarding Nonsense Word Fluency measures. Earlier DIBELS versions (e.g., DIBELS, Sixth Edition) and those currently offered by Acadience include only vowel–consonant (VC) and consonant–vowel–consonant (CVC) words and are often scored in terms of the number of letter sounds students name correctly either in isolation or as part of the whole word. They also offer separate metrics for scoring the number of words read as whole units. These factors make some versions of Nonsense Word Fluency good for assessing and monitoring progress for students at basic stages of reading development, but less informative for students beyond a basic level of word reading. The more recent version of Nonsense Word Fluency from the DIBELS Eighth Edition includes a greater range of spelling patterns, with words up to six letters in length, and accounts for the frequency with which letter combinations occur in English (i.e., more frequently occurring letter combinations, like sh, appear more often). SUMMARY: TYPES OF WORD READING MEASURES
In summary, the following aspects should be considered when assessing word reading skills: (1) Measuring word reading out of context (i.e., reading words in lists or in isolation) is key for evaluating "pure" word reading skills; (2) both timed and untimed measures are useful, but timed measures provide critical information on students' word reading efficiency, which is a hallmark of skilled reading and whose absence is a consistent characteristic of reading difficulties; (3) measures that include real words and those that include decodable pseudowords are both useful in an assessment of reading skills; and (4) standardized measures usually offer a more comprehensive view of word reading than CBM tools, whereas CBM measures are best suited for monitoring progress (although useful information can be obtained from both types used in conjunction).
Interpreting Results from Word Reading Measures Students' difficulties with word reading may be revealed by below-average scores (i.e., below the 25th percentile or in that proximity) that, depending on the measure, may be due to poor accuracy (i.e., multiple errors), low efficiency (i.e., low rate or "fluency"), or in most cases, both poor accuracy and efficiency. When interpreting outcomes from word reading measures, it is important to consider the discussion in Chapter 2 about the development of word reading proficiency. Children learn to map (i.e., link) word spellings to pronunciations first by relying on individual letter sounds, followed by increasingly larger units of letters that eventually include whole words. This process allows them to first read words with accuracy, and then, through practice, they are able to read words quickly and with minimal conscious effort. This process is made possible by learning foundational skills in phonemic awareness and letter–sound correspondence, and lots
of practice reading words with feedback. Lack of efficiency in reading words means that students have not unitized spelling patterns and made the spelling–sound connections necessary to read words efficiently. Therefore, when difficulties with word reading are identified, the next step is to determine why. More specifically, the evaluator should work backward to identify gaps in foundational skills that are involved in word reading development. IDENTIFYING STRENGTHS AND WEAKNESSES IN WORD READING
Students with word reading difficulties can vary considerably in their relative strengths and weaknesses, and there may be word types and spelling patterns in which students are relatively stronger in reading compared to others. For example, a student's reading might be fairly accurate with short words that are mostly phonetically regular and follow a VC, CVC, CCVC, or CVCC pattern, but their accuracy diminishes considerably when words become more complex. As another example, a student may be relatively strong in reading words with consonant blends (i.e., a letter combination in which each sound can be heard), such as st, str, cl, sl, but has not learned certain digraphs (a letter combination that makes one sound), such as ch, sh, th, oo, ea, and ou. Determining the word types and spelling patterns in which the student is stronger and weaker provides information for designing an intervention. Information on strengths and weaknesses with word types can be obtained (1) through error analyses of the results of the word reading measures administered (a brief sketch of such an error tally appears at the end of this subsection), and (2) by administering reading diagnostic measures such as phonics surveys or decoding inventories. Several phonics surveys and decoding inventories exist, and examples are provided in Table 4.3. These measures generally involve having a student read lists of words that are organized and scored in a way that reveals the word types and spelling patterns in which the student is stronger and weaker. Several of these measures also include subtests for evaluating students' letter–name and sound knowledge. Results of the diagnostic assessments using phonics or decoding inventories contribute to the assessment; they can help confirm or refine the hypothesis statement about the nature of the student's reading difficulties, and most importantly, they contribute
TABLE 4.3. Examples of Word Reading/Decoding Inventories (measure: publisher/source)
  CORE Phonics Survey: Consortium on Reading Excellence (2008), Teaching Reading Sourcebook
  Diagnostic Assessments of Reading–2: PRO-ED (proedinc.com)
  Early Names Test: Mather et al. (2006)
  Informal Decoding Inventory: Walpole et al. (2011)
  Names Test: Cunningham (1990)
  Quick Phonics Screener: Hasbrouck & Parker (2001)
Note. This table provides a sample of the measures available for assessing decoding. It is not a complete list; numerous decoding inventories have been offered over the years, and there are likely others that exist in reading assessment batteries and intervention programs.
valuable information to designing intervention. However, phonics and decoding inventories often do not provide all the information needed for assessing word reading skill difficulties. As illustrated in Figure 4.1, in some cases, it is important to assess the student’s early literacy skills to determine the root of the problem.
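As referenced above, the following is a minimal sketch, not drawn from the text, of how responses from a word reading measure might be tallied by spelling pattern to surface relative strengths and weaknesses. The pattern categories and classification rules are simplified assumptions for illustration; in practice the categories would match those targeted by the phonics survey or decoding inventory actually used.

from collections import defaultdict

DIGRAPHS = ("ch", "sh", "th", "ck", "ea", "oo", "ai", "ay", "ee", "oa")

def pattern_of(word: str) -> str:
    """Assign a word to one coarse, illustrative pattern category."""
    w = word.lower()
    if any(d in w for d in DIGRAPHS):
        return "digraph"
    if w.endswith("e") and len(w) >= 4:
        return "silent-e"
    if len(w) <= 4:
        return "short/CVC-type"
    return "multisyllabic/other"

def tally_by_pattern(responses):
    """responses: iterable of (word, read_correctly). Returns {pattern: (correct, attempted)}."""
    counts = defaultdict(lambda: [0, 0])
    for word, correct in responses:
        category = pattern_of(word)
        counts[category][1] += 1
        counts[category][0] += int(correct)
    return {category: (c, n) for category, (c, n) in counts.items()}

# Hypothetical responses from a word list
responses = [("cat", True), ("ship", False), ("lake", False),
             ("chop", False), ("mud", True), ("basket", False)]
for category, (correct, attempted) in tally_by_pattern(responses).items():
    print(category + ": " + str(correct) + "/" + str(attempted) + " correct")

A tally like this simply organizes the same error-analysis judgments an evaluator would otherwise make by hand on the protocol.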
Assessing Early Literacy Skills: The Foundations of Word Reading When students exhibit word reading difficulties, the two primary skill areas to assess in early literacy are alphabetic knowledge and phonemic awareness. As discussed in detail in Chapter 2, their interaction forms the foundation for word reading development. Deficits in either (or both) skills may be a root cause of word reading difficulties.
Assessing Alphabetic Knowledge Alphabetic knowledge refers to students' skills in identifying printed letters by name and, most importantly, associating printed letters with sounds. Because letter–sound correspondence represents the most essential building block of reading development, alphabetic knowledge is arguably the most important skill area to assess for any student who experiences difficulty in reading words. Often, difficulties in reading words can be traced back to a lack of accuracy and fluency in connecting printed letters and letter combinations to sounds. For younger students or those at more basic skill levels, this may involve a lack of knowledge of single letter sounds. For older students, inadequate alphabetic knowledge may manifest as inaccurate and dysfluent knowledge of letter combinations and spelling patterns, such as vowel and consonant digraphs (i.e., a letter combination that makes one sound, like ea, ee, ou, th, sh, ch, ck), and pronunciations of larger spelling units, such as syllables, affixes, and morphemes (e.g., pre-, dis-, -ent, -ing, -tion, -ly). Examples of tests and subtests of alphabetic knowledge are provided in Table 4.4. As with other skills, several considerations are relevant for selecting measures of letter names and letter sounds. ASSESSING LETTER NAMES VERSUS LETTER SOUNDS
Table 4.4 lists several measures, some of which measure letter–name knowledge, some letter–sound knowledge, and others both. Knowledge of letter–sound correspondence is one of the most important skills for reading development, and difficulties associating printed letters with sounds are often a primary variable in explaining why students experience difficulties learning to read words. Hence, assessing letter–sound knowledge is the more important of the two and should be included in an assessment for students with word reading difficulties. However, knowledge of letter names can still provide some useful information in addition to assessing letter sounds. Measuring letter–name knowledge is relevant when assessing younger children (i.e., kindergarten and early first grade) who are having difficulties in acquiring basic word reading skills. Recalling our discussion from Chapter 2, learning letter names helps facilitate a finer-grained sensitivity to the phonemic structure of language (Foulin, 2005), knowing letter names makes learning letter sounds easier because most letters provide clues to their sounds (McBride-Chang, 1999; Treiman et al., 1998), and instruction that pairs letter names with sounds results in superior letter–sound knowledge over targeting letter sounds alone (Piasta et al., 2010).
TABLE 4.4. Examples of Measures for Assessing Alphabetic Knowledge

CBM providers and measures
  AIMSweb: Letter Naming Fluency (letter names; timed)
  AIMSweb: Letter Sound Fluency (letter sounds; timed)
  AIMSweb: Letter Word Sound Fluency (letter sounds; timed)
  easyCBM: Letter Names (letter names; timed)
  easyCBM: Letter Sounds (letter sounds; timed)
  FastBridge: Letter Names (letter names; timed)
  FastBridge: Letter Sounds (letter sounds; timed)
  iReady: Letter Naming Fluency (letter names; timed)
  iReady: Letter Sound Fluency (letter sounds; timed)

Norm-referenced, standardized tests
  Diagnostic Assessments of Reading–2 (untimed)
  Kaufman Test of Educational Achievement, 3rd ed. (KTEA-3) (timed)
  Phonological and Print Awareness Scale (PPA Scale) (untimed)
  Phonological Awareness Test 2 Norm Update (PAT-2:NU) (untimed)
  Test of Phonological Awareness, 2nd ed. (TOPA-2) (untimed)
  Test of Preschool Early Literacy (TOPEL) (letter names and letter sounds; untimed)
  Wechsler Individual Achievement Test, 4th ed. (letter names and letter sounds; untimed)
  Woodcock Reading Mastery Test, 3rd ed. (WRMT-3) (untimed)
  Word Identification and Spelling Test (WIST) (untimed)
Note. This table provides a sample of the measures available for assessing knowledge of letter names and sounds. It is not a complete list, and there are other measures that, due to space constraints, could not be included.
In short, letter–sound knowledge should usually be a part of an assessment for students with word reading difficulties. Measures of letter–name knowledge may provide additional information for designing intervention for beginning readers, but are usually not informative for students beyond the initial stages of reading. TIMED VERSUS UNTIMED MEASURES OF LETTER NAMES AND LETTER SOUNDS
As indicated in Table 4.4, some measures of letter–name and –sound knowledge are untimed, while others are timed. Untimed measures provide data on students’ accuracy only. Timed measures provide data on students’ accuracy and fluency with the skills, which are indicative of how well the skills are learned and how efficiently students can recall the letter name or sound from memory. This efficiency is important because it reflects automaticity with recall. Similar to the ways that fluent reading makes reading comprehension possible, efficiently recalling alphabetic information (especially letter sounds) may make it easier to learn to read words (Clemens, Lee, et al., 2020; Ritchey & Speece, 2006). Thus, timed measures of alphabetic knowledge are arguably more important for assessing students with word reading difficulties, and measures of letter–sound
fluency may be particularly important. Specifically which letter names or letter sounds the student does or does not know can be determined through diagnostic measures, which are described next. DIAGNOSTIC LETTER–NAME AND LETTER–SOUND ASSESSMENT
Recommendations for intervention are enhanced when an evaluation can identify what letters or letter combinations the student has not yet learned. This may require the use of diagnostic measures, for several reasons. First, some tests of letter–name and letter–sound knowledge or fluency do not include all letters of the alphabet. Second, on fluency-based measures, students may only see a small portion of the content within the time limit (especially if their fluency is low), thus providing data only on a limited set of items. Third, measures of letter sounds rarely include letter combinations, such as digraphs, that are important for readers to know. As in most situations in our assessment process, deciding when to use a diagnostic measure depends on what information the evaluator needs to design instruction. If the previous assessment data gave no indication that the student's letter–sound knowledge was incomplete, there is little reason for administering a diagnostic measure of letter–sound knowledge. On the other hand, when alphabetic knowledge does appear to be a problem, especially when gaps in a student's knowledge of letter–sound correspondence may be a cause of their word reading difficulties, a diagnostic measure may be useful for designing the intervention. If a published diagnostic measure of alphabetic knowledge is not available, one can be constructed very simply. A letter knowledge task can be created by listing the letters of the alphabet in random order and asking the student to identify each letter by name (when assessing letter–name knowledge) or by sound (when assessing letter–sound knowledge). When assessing letter–sound knowledge, it is best to score a letter as correct if students identify its most common sound. Additionally, letter–sound diagnostic measures can include common digraphs (i.e., a two-letter combination that makes one sound, such as ee, sh, or th). Letters and their most common sounds, as well as the most common consonant and vowel digraphs, are listed in Table 4.5. For younger students, it is helpful to test individual letter sounds separately from letter combinations (a brief sketch of such a self-constructed probe follows).
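The following is a minimal sketch, not a published measure, of how such a self-constructed letter–sound probe might be assembled: all 26 letters in random order, with a set of common digraphs presented as a separate section. The particular digraph list and the fixed random seed are arbitrary choices for illustration.

import random
import string

COMMON_DIGRAPHS = ["sh", "th", "ch", "wh", "ck", "ee", "ea", "ai", "oa", "oo"]

def build_letter_sound_probe(seed: int = 1) -> dict:
    """Return randomized single-letter and digraph item lists for a diagnostic probe."""
    rng = random.Random(seed)
    letters = list(string.ascii_lowercase)
    rng.shuffle(letters)
    digraphs = COMMON_DIGRAPHS[:]
    rng.shuffle(digraphs)
    return {"single letters": letters, "digraphs": digraphs}

if __name__ == "__main__":
    probe = build_letter_sound_probe()
    for section, items in probe.items():
        # An item is scored correct if the student gives the most common sound
        print()
        print(section.upper())
        print("  ".join(items))

The same randomized lists can be reprinted as an examiner recording sheet with a column for marking each item correct or incorrect.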
DISTINGUISHING ALPHABETIC FLUENCY MEASURES FROM RAPID AUTOMATIZED NAMING

It is important to differentiate measures of alphabetic fluency from measures of rapid automatized naming (RAN). Performance on RAN measures, especially those that include letters or numbers, is strongly correlated with reading skills (especially reading fluency; for reviews, see Araújo et al., 2015; Norton & Wolf, 2012), and overlaps highly with the skills assessed by the alphabetic knowledge and fluency measures described in this section. But they are not the same. RAN measures are designed to measure how quickly a student can label a small set of randomly ordered but familiar letters or numbers (some measures also include objects, shapes, or colors for students who have not learned letters or numbers yet). RAN letters measures often consist of four to five letters that the student is expected to know, which are repeated randomly and organized in a list. The student is asked to name as many letters as possible within the time limit. The number of responses is tallied, which often does not involve recording incorrect responses. As such, these measures are aimed at evaluating a student's speed in recalling familiar symbols in sequence, but are not designed to provide information on what letters the student knows or does not know. In other words, RAN measures only assess recall and are not intended to provide information about a student's mastery or skill gaps in alphabetic knowledge. Another important difference is that RAN is highly difficult to improve through intervention, and any gains in RAN are unlikely to be associated with overall reading improvement (de Jong & Vrielink, 2004; Norton & Wolf, 2012). In contrast, the learning of letter names and sounds is malleable through instruction (e.g., DuBois et al., 2014), and gains are associated with improved reading, especially when letter–sound instruction is combined with phonological awareness instruction (Bus & van IJzendoorn, 1999; Elbro & Petersen, 2004; Hulme et al., 2012; Schneider et al., 2000). Thus, measures of alphabetic knowledge contribute significantly to planning interventions, whereas RAN measures provide little information for planning intervention over and above the data already provided by measures of alphabetic knowledge, phonemic awareness, word reading, and oral reading.

TABLE 4.5. The Most Common Sounds Associated with Each Letter and Letter Digraphs
Single letters (as in . . . ): a (apple), b (bat), c (cat), d (dip), e (egg), f (fit), g (get), h (hat), i (it), j (jet), k (kick), l (let), m (met), n (nap), o (top), p (pat), q (quick), r (rat), s (sip), t (tap), u (up), v (view), w (wig), x (fox), y (yes), z (zip)
Consonant digraphs (as in . . . ): sh (shut), th (with), ch (chat), wh (what), ph (phone), wr (wrist), ss (miss), ck (pack), tch (fetch), kn (know)
Vowel digraphs (as in . . . ): ai (rain), ay (play), ee (meet), ea (neat OR head), ie (piece OR tie), oa (boat), oe (foe), ue (glue), ui (suit), oo (foot)
Assessing Phonological and Phonemic Awareness Results of phonemic awareness assessments can provide insight into students’ reading difficulties. Namely, below-average skills, particularly in phoneme segmenting or blending, reveal that the student is having difficulty processing sounds in words at a specific phoneme level. They are less able to “get inside” a word to process it in terms of its individual phonemes (recall our analogy in Chapter 2 with regard to listening to classical music).
This is a critical factor because (1) the ability to isolate individual phonemes is thought to facilitate pairing that sound with a printed code (i.e., either an individual letter or group of letters); and (2) children blend and segment phonemes as they learn to read and spell words. Thus, problems in phonemic awareness are a contributing factor to the student’s difficulties in developing proficiency in reading words. In conjunction with difficulties in letter–sound correspondence, problems in phonemic processing create barriers to everything on which word reading is built. Examples of measures to evaluate phonological and phonemic awareness are listed in Table 4.6. Before deciding what to use, it is important to investigate what types of
Phoneme deletion (elision) or substitution
Phoneme segmenting
Phoneme blending
Alliteration or initial/ final sound ID
Syllable segmenting/ blending
Rhyming
TABLE 4.6. Example Measures for Assessing Phonological and Phonemic Awareness
CBM vendors and measures Acadience: First Sound Fluency
Acadience: Phoneme Segmentation Fluency
AIMSweb: Initial Sounds
AIMSweb: Phoneme Segmentation Fluency
DIBELS, 8th ed.: Phoneme Segmentation Fluency
easyCBM: Phoneme Segmenting
FastBridge: Onset Sounds
FastBridge: Word Blending FastBridge: Word Rhyming
FastBridge: Word Segmenting
iReady: Phonological Awareness
Norm-referenced, standardized Comprehensive Test of Phonological Processing–2
Feifer Assessment of Reading
Phonological and Print Awareness Scale
Phonological Awareness Test 2 (Norm Update)
Process Assessment of the Learner, 2nd ed.
Test of Phonological Awareness, 2nd ed.
Test of Preschool Early Literacy
a
Wechsler Individual Achievement Test, 4th ed. Woodcock Reading Mastery Test, 3rd ed.
Note. This table provides a sample of the measures available for assessing phonological and phonemic awareness. It is not a complete list, and there are other measures that, due to space constraints, could not be included. aTest includes deletion (elision) of syllables.
160
Academic Skills Problems
skills are measured by each test. Tests of phonological and phonemic awareness do not involve print and are conducted on an auditory/verbal basis (e.g., the examiner provides a word verbally, and the student segments the word into its component phonemes). PHONOLOGICAL VERSUS PHONEMIC AWARENESS
As discussed in Chapter 2, phonological awareness is an umbrella term that encompasses the full range of skills in perceiving and manipulating speech sounds, from basic (rhyming, syllable segmentation) to more sophisticated (phoneme segmenting, phoneme manipulation). Phonemic awareness is more sophisticated and specific, and involves perceiving and manipulating individual phonemes (the smallest speech sounds) through tasks such as phoneme isolation, phoneme blending, and phoneme segmentation. A test or subtest measures phonemic awareness only if it involves tasks in which students must segment, blend, or manipulate individual phonemes in words. If the test involves tasks such as rhyming or segmenting and blending by syllables and other larger word parts, or mixed phonological and phonemic tasks, the test would be considered one of more general phonological awareness. Tests or subtests of more general phonological awareness are appropriate for students at very basic stages of reading development, including PreK and early kindergarten. Tests of phonemic awareness are usually more useful for evaluating reading skills among school-aged students referred for reading difficulties because phoneme-level skills are more involved in reading development.
Most measures developed as part of the CBM framework tend to measure only a single skill, and thus are better suited for monitoring progress in that particular skill if phonological awareness is targeted in the intervention. For identifying skill deficits that contribute to a student's word reading difficulties, it is often important to examine their phonemic awareness skills on a range of tasks. However, some types of tasks are more informative than others. Rhyming represents one of the most basic forms of phonological awareness, and tests typically consist of having students generate a word that rhymes with one spoken by the examiner, or identify a word that rhymes with a target word from a set of choices. Syllable blending and syllable segmentation tasks are a step up in sophistication from rhyming; they ask students to blend syllables spoken by the examiner into a whole word (e.g., the examiner says, "bas . . . ket . . . ball", and the student then responds, "basketball"), or to segment a spoken word into its component syllables. Rhyming and syllable blending/segmenting tasks are considered "phonological" because they involve processing sounds larger than individual phonemes. These skills are often targeted in initial literacy instruction in PreK and early kindergarten. For students in later kindergarten and beyond, these types of tests have limited utility for evaluating reading difficulties. Rhyming skills have been shown to be weak and inconsistent predictors of reading skills (e.g., Muter et al., 1998), and tests that involve processing larger phonological units (e.g., rhyming, onset-rime segmentation) are not as strongly predictive of reading skills, or as discriminative of students with reading difficulties, as tests that require phoneme-level skills (e.g., Melby-Lervåg et al., 2012; Nation & Hulme, 1997). Alliteration and first/last sound identification require attention to phonemes and represent entry-level phonemic awareness tasks. In alliteration tasks, students identify a word or words that begin with the same phoneme as a word spoken by the examiner. In
first sound identification tasks, students say the first phoneme they hear in a word spoken by the examiner. In last sound identification tasks, students say the last sound they hear in a word spoken by the examiner. These skills can typically be expected of students in early to mid-kindergarten who are receiving formal reading instruction. Phoneme segmentation and phoneme blending are two of the most important phonemic awareness skills for reading and spelling words (Muter et al., 1998; Nation & Hulme, 1997; O'Connor, 2011). In phoneme segmentation tasks, students are asked to segment a word spoken by the examiner into its component phonemes (e.g., the examiner says, "bat," and the student: "b . . . ah . . . t"). Phoneme blending represents the reverse; the student says the word formed by the phonemes segmented by the examiner (e.g., the examiner: "b . . . ah . . . t," the student: "bat"). Phoneme segmenting and blending can be expected of students receiving formal reading instruction, usually by the second half of kindergarten and into first grade. The most advanced phonemic awareness tasks include phoneme deletion (sometimes referred to as phoneme elision), phoneme substitution, and phoneme reversal. In phoneme deletion tasks, the student is asked to remove a sound from a word spoken by the examiner to form a new word (e.g., the examiner: "Say bat without /b/," the student: "at"). Sounds that students are asked to delete may occur at the initial, medial, or final positions in spoken words (medial sounds are usually more difficult). In phoneme substitution tasks, the student is asked to change a phoneme in a word to create a new word (e.g., the examiner: "Say bat. Now change /b/ to /k/," the student: "cat"). As in phoneme deletion tasks, substituted phonemes may occur in various positions of words, with medial substitutions tending to be more difficult. Phoneme reversal tasks involve reversing the sounds in a word so that it is said backward (e.g., the examiner: "What word do you get if you say bat backward?" The student: "tab"). IS THERE VALUE IN ASSESSING "ADVANCED" PHONEMIC AWARENESS SKILLS?
Some have suggested that measuring struggling readers’ ability to complete advanced phonemic awareness tasks such as phoneme deletion and phoneme substitution with sounds across initial, medial, and final positions provides important information for understanding reading difficulties and designing interventions (e.g., D. A. Kilpatrick, 2015). Phoneme deletion, substitution, and reversal tasks are more challenging than phoneme segmenting or blending, and some studies have indicated that correlations between performance on such measures and reading skills are slightly stronger than correlations between phoneme segmentation or blending and reading (e.g., Kroese et al., 2000). However, this does not necessarily mean that performance on advanced tasks provides greater insight into students’ phonemic awareness or better informs intervention development compared to measures of phoneme segmentation and blending. Two issues should be considered. First, evidence indicates that performance on advanced phonemic awareness tasks is influenced by much more than phonemic awareness. Several studies have revealed that the ability to complete advanced phonemic tasks like phoneme deletion is largely the result of learning to read and, more importantly, spell (Byrne & Fielding-Barnsley, 1993; Castles et al., 2003; Hogan et al., 2005; Perfetti et al., 1987). Orthographic (i.e., spelling) knowledge appears to be particularly important for completing advanced phonemic awareness tasks, as studies have observed that children and adults tend to mentally invoke the spelling of a word when asked to delete or reverse phonemes (Castles et al., 2003; Wilson et al., 2015). Thus, the ability to delete, substitute, or reverse phonemes in
spoken words is highly influenced by one’s knowledge of how words are spelled and their reading exposure; high scores on such tasks are not necessarily an index of proficiency with phonemic skills that made word reading possible, and low scores are not necessarily an indication that phonemic awareness is a skill area that needs to be targeted. In addition to being influenced by orthographic knowledge, advanced phonemic awareness tasks like phoneme deletion, substitution, and reversal place greater demands on working memory (i.e., simultaneous storage and processing of information), as data indicates that performance on these tasks is more strongly correlated with working memory compared to other phonemic tasks (Breaux, 2020). Second, although experimental studies support the use of phoneme segmentation instruction (especially when integrated with letters) for beginning and struggling readers (Ehri, 2020; Ehri et al., 2021), there is no evidence to date that instruction aimed at reaching high levels of accuracy and fluency with advanced phonemic skills improves word reading or is a necessary part of intervention for struggling readers (Clemens, Solari, et al., 2021). In short, available evidence raises questions regarding the value of assessing advanced phonemic awareness skills for struggling readers. Rather than concluding that a student’s lack of automaticity with phoneme deletion, substitution, or reversal is a unique index of phonemic processing and a cause of their reading difficulties, assessors should recognize that a student’s low performance on such tests is likely a result of their underdeveloped spelling knowledge and insufficient reading instruction or experience. Further undermining the value of assessing advanced phonemic awareness is the lack of a clear connection to intervention, as studies have yet to demonstrate that targeting students’ automaticity with advanced phonemic awareness skills results in improved reading. This may change with future research. SUMMARY: ASSESSING PHONEMIC AWARENESS
For school-age students with reading difficulties, a key to assessing phonemic awareness is selecting a test that requires students to engage in some form of phoneme-level processing. Some tests will provide composite scores that summarize student performance across a range of phonological and phonemic tasks or subtests. Composite scores can be useful for obtaining a more general picture of students’ phonological and phonemic awareness skills, especially when determining the extent and severity of phonological awareness deficits compared to other students in that age or grade. However, phoneme segmentation and phoneme blending are the skills most centrally involved in learning to read and spell (Ehri, 2020; O’Connor, 2011) and are arguably the most useful for assessment. When used alongside assessments of alphabetic knowledge and decoding, phoneme segmentation and blending data can inform the development of an intervention to target the skill deficits responsible for a student’s difficulty reading words.
Assessing Spelling Skills as Part of a Reading Assessment Spelling has been referred to as a “window” into students’ literacy skills, especially for younger or struggling readers (Ouellette & Sénéchal, 2008; Treiman, 1998). Ehri (2000) referred to reading and spelling words as “two sides of a coin.” The reason for this close connection is that both reading (decoding) and spelling (encoding) words rely on a common foundation of phonemic awareness, letter–sound correspondence, and orthographic knowledge. We have discussed the roles of these skills in word reading. They function similarly in spelling; phonemic awareness is used to segment words into phonemes to be
spelled, letter–sound correspondence is used to match segmented phonemes to letters and letter combinations, and orthographic knowledge is used to recall word segments and whole-word spellings. Spelling assessment can provide an evaluator with additional insights into a student's strengths and weaknesses in these areas. Examples of spelling measures are provided in Table 4.7.

TABLE 4.7. Examples of Measures and Test Batteries That Include Subtests for Assessing Spelling Skills
• CBM Spelling
• Feifer Assessment of Writing
• iReady
• Kaufman Test of Educational Achievement, 3rd ed.
• Process Assessment of the Learner, 2nd ed.
• Test of Written Language, 4th ed.
• Test of Written Spelling, 5th ed.
• Wechsler Individual Achievement Test, 4th ed.
• Wide Range Achievement Test, 5th ed.
• Woodcock–Johnson IV Tests of Achievement
• Word Identification and Spelling Test
Note. This table provides a sample of the measures available for assessing spelling. It is not a complete list.

In addition to examining students' performance in spelling relative to peers, insight into students' literacy skill functioning is provided by analyzing students' spelling responses. This requires looking beyond whether words are spelled correctly or incorrectly. To illustrate, Table 4.8 displays the responses of four students after they were asked to spell three words: cat, burn, and car.

TABLE 4.8. An Example of How Students' Spelling Responses Can Provide Insight into Their Early Literacy Skills
Word    Jayla    Josh    Emily    Max
cat     kat      ct      c        ghr5
burn    bern     brn     nn       dsdss
car     cor      krrr    urr      3rt

No student spelled any of the words correctly. However, the four students varied considerably in their approximate spellings of the words, and their responses reveal some important information about their alphabetic and orthographic knowledge. Jayla is clearly demonstrating stronger early literacy skills than the others: she represented all phonemes with either the correct letters or "phonetically plausible" alternatives, indicating that she is developing strong knowledge in the alphabetic principle and phonemic awareness. Her error on burn ("bern"), which is a phonetically irregular word, indicates that she has not added an orthographic representation of burn to memory and is relying on letter–sound correspondence. Josh and Emily show two points further down the continuum; Josh represented initial and final letters and sounds with accuracy but omitted vowels, and Emily represented only the initial or final sounds. They have more skill needs than Jayla; however, both Josh and Emily understand how letters represent specific sounds and can pull some sounds out of a dictated word. Max's responses reflect the lowest level of literacy functioning of the four students; his responses indicate he understands that words are represented with symbols, but he demonstrates no correct representation of sounds, and his inclusion of numbers indicates his alphabetic knowledge is at the very early stages. Thus, combined with reading assessment data, students' spelling responses provide important insight into their knowledge and application of early literacy concepts.

The CBM approach to spelling involves sampling words directly from the instructional curriculum. Examination of the teacher interview data should show whether spelling is being taught from a separate series, from the reading series, or by using teacher-generated word lists. Although probes can be constructed from the same curricular
material in which the student is being instructed, the nature of spelling instruction in schools does not allow for careful control of material across grades. Given that the objective of this part of the assessment process is to find specific areas where the student's skills are deficient, evaluators may wish to use a more standard set of spelling words that is more carefully controlled for word difficulty across the grades. However, there may be situations in which the examiner wishes to evaluate students' spelling with a specific set of words, such as words with particular letter patterns (e.g., vowel or consonant digraphs, blends, silent-e) or word features (e.g., affixes, morphemes). In these instances, examiners can construct their own spelling probes. The process is as follows:

1. The evaluator should select three sets of 20 words taken from the text or material used for instruction.

2. Words should be dictated to the student at the rate of one word every 10 seconds for first or second graders (12 words), and one word every 7 seconds for third graders and up (20 words). These time limits help standardize administration of the measure, allowing for scores to be compared over time.

3. Spelling probes can be scored in terms of the number of words spelled correctly and the total number of correct letter sequences, which provides a more sensitive index of students' spelling responses than simply scoring words as correct or incorrect. The index is particularly useful for monitoring progress given its sensitivity to small changes in spelling improvement. The procedure for scoring correct letter sequences is as follows (and is described in more detail in Chapter 7). First, "phantom" characters are placed before and after the word. Then, each correctly sequenced pair of letters (including the beginning and ending character) is counted. Each word has n + 1 possible letter sequences, where n is the number of letters in the word. For example, the word butter has seven possible letter sequences. (A brief computational sketch of this scoring appears at the end of this discussion of spelling.)

Similar types of spelling assessments can be conducted with students at a very basic reading level. Ritchey's (2008) spelling task uses five three-letter phonetically regular words such as cat, top, and sit. Words are dictated by an examiner on an untimed basis. Students' spelling responses can be scored for total words spelled correctly, correct letter sequences, or correct sounds, another scoring approach Ritchey used that provides credit for a phoneme represented by a correct letter or a phonologically permissible alternative. For example, if a student spelled "kat" for cat, they would receive all three points because k is phonologically permissible in this case. Several studies have observed that spelling measures and spelling scoring indices collected in kindergarten demonstrate moderate to strong relations to student achievement on measures of literacy, including word reading
and standardized tests of spelling (Clemens, Oslund, et al., 2014; Ritchey, 2008; Ritchey et al., 2010). Review of a student's permanent products (i.e., work samples) is another way to gather information on their spelling skills. Recent samples of the student's work can be reviewed, such as spelling tests, writing samples, or other written work, to identify error patterns and relative strengths in students' spelling knowledge.
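As referenced in the scoring steps above, the following is a minimal computational sketch of correct letter sequences under a simplified rule: boundary characters are added to both the target word and the student's response, and each adjacent pair of characters in the response is credited when that pair also appears in the correct spelling. A correctly spelled word therefore earns n + 1 sequences. The hand-scoring conventions described in Chapter 7 handle some edge cases (e.g., insertions and repeated letters) differently; this sketch is only meant to make the n + 1 logic concrete.

def correct_letter_sequences(target: str, response: str) -> tuple:
    """Return (credited sequences, maximum possible sequences) for one spelled word."""
    t = "^" + target.lower() + "$"      # phantom boundary characters
    r = "^" + response.lower() + "$"
    target_pairs = {t[i:i + 2] for i in range(len(t) - 1)}
    credited = sum(1 for i in range(len(r) - 1) if r[i:i + 2] in target_pairs)
    maximum = len(target) + 1           # n + 1 possible sequences
    return min(credited, maximum), maximum

# Hypothetical responses, including the butter example used in the text
for target, response in [("butter", "butter"), ("butter", "buter"), ("cat", "kat")]:
    credited, possible = correct_letter_sequences(target, response)
    print(target, response, credited, "of", possible)   # e.g., butter butter 7 of 7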
Assessing Linguistic Comprehension Linguistic comprehension refers to students' ability to understand oral language and may be referred to as listening comprehension or by similar terms. Language comprehension is central to reading proficiency. As discussed in Chapter 2, language plays a role in reading in multiple ways; phonological awareness is part of the language system, vocabulary knowledge plays a role in word reading acquisition, and language comprehension plays its most central role in reading comprehension. Assessing linguistic comprehension is particularly important for understanding any academic skills difficulties experienced by emergent bilingual students, as well as students from historically marginalized backgrounds who may have had limited exposure to the vocabulary and dialect used in instruction and academic texts. Returning to our roadmap for reading assessment provided in Figure 4.1, assessing language-related skills has its greatest relevance when text reading accuracy and efficiency appear to be adequate, but the student struggles to understand what they read. The following types of assessment, followed by measures of reading comprehension, would be relevant in this situation. However, it also may be important to consider language-related difficulties for students who struggle with word reading, for the reasons noted above. Thus, there are several reasons why an assessment of a student's language skills is useful in an evaluation of their reading difficulties. Examples of tests that measure language skills are reported in Table 4.9. To be most relevant to a reading evaluation, consider tests that measure students' vocabulary knowledge and listening comprehension. As will be seen in later sections of this chapter, these language domains are also useful for assessment in mathematics and writing.
Assessing Vocabulary Knowledge Vocabulary knowledge refers to one's understanding of word meanings. It is one of the most essential aspects of comprehending a language and, consequently, understanding printed text. Vocabulary knowledge also overlaps strongly with background knowledge, another key aspect of reading comprehension. Additionally, vocabulary knowledge plays a role in word reading acquisition. A beginning reader with greater breadth and depth in their vocabulary knowledge is in a better position to link a decoded pronunciation to an existing entry in their oral vocabulary, is more likely to "fix up" a partial pronunciation to match a word in their vocabulary, and is more likely to be able to read that word correctly in the future (Elbro & de Jong, 2017; Kearns & Al Ghanem, 2019; Seidenberg, 2017). Thus, identifying underdeveloped vocabulary knowledge contributes to a reading assessment and to designing an intervention. It is most relevant for students with reading comprehension difficulties and students who are learning English. Deficits in vocabulary knowledge should be viewed as possible targets for intervention. Measures that assess vocabulary knowledge are included in Table 4.9. A few considerations are warranted when selecting a measure.
TABLE 4.9. Examples of Tests for Measuring Language in an Academic Assessment
(Columns in the original table indicate whether each measure assesses vocabulary, listening comprehension, or both.)

CBM vendors and measures
  CBM Vocabulary Matching (vocabulary)
  easyCBM: Vocabulary (vocabulary)
  FastBridge: Sentence Repetition (listening comprehension)

Norm-referenced, standardized
  Kaufman Test of Educational Achievement, 3rd ed. (KTEA-3)
  Oral and Written Language Scales, 2nd ed. (OWLS-II)
  Test of Language Development, 5th ed. (TOLD-5)
  Test of Preschool Early Literacy (TOPEL)
  Test of Reading Comprehension, 4th ed. (TORC-4)
  Test of Written Language, 4th ed. (TOWL-4)
  Wechsler Individual Achievement Test, 4th ed. (WIAT-4)
  Woodcock–Johnson IV Tests of Oral Language
  Woodcock Reading Mastery Test, 3rd ed. (WRMT-3)
Note. This table provides a sample of the measures and batteries available for assessing language skills. It is not a complete list, and there are other measures that, due to space constraints, could not be included.
READING VOCABULARY VERSUS ORAL VOCABULARY MEASURES
Vocabulary measures vary in the way they are administered. Some measures are designed for students to take independently, in which they read target words and identify a correct synonym from a set of answer choices. Therefore, reading is required in these types of measures. Reading is not required on other types of vocabulary tests; in these cases, the examiner provides words orally and the student defines each word (also orally). These differences have implications for assessment. If a student with word reading difficulties is administered a vocabulary measure that requires reading, it may be impossible to determine whether low scores are due to low vocabulary knowledge or because the student was unable to read some of the words or answer choices. Thus, orally administered measures of vocabulary knowledge are preferable for students with word reading difficulties.

ASSESSING CURRICULUM‑SPECIFIC VOCABULARY KNOWLEDGE
The vocabulary measures listed in Table 4.9 are all measures of general vocabulary knowledge, meaning they include a range of vocabulary terms that are important for understanding academic texts and instruction. However, there may be situations in which the evaluator would like to evaluate a student’s knowledge of vocabulary terms relevant to the specific texts or instruction they are provided in the classroom. Vocabulary knowledge is one of the areas in which alignment with the curriculum of instruction is more relevant for assessment, because knowledge of a set of terms depends on having learned those specific terms through instruction or through reading. This may be particularly important when students are referred for difficulties in certain content areas, such as
social studies and science, where content-specific vocabulary strongly reflects students’ background knowledge of a subject area. CBM vocabulary-matching techniques (Espin et al., 2001) have been used successfully to assess students’ knowledge and monitor progress in science (e.g., Borsuk, 2010; Conoyer et al., 2019; Espin et al., 2013; Ford et al., 2018) and social studies (Beyers et al., 2013; Conoyer et al., 2022; Lembke et al., 2017). Measures are created by gathering key vocabulary terms and definitions from the students’ textbooks or other curriculum materials. Text glossaries or unit overviews are often good sources for identifying these terms. To construct a vocabulary-matching measure, a set of terms is listed vertically on the left side of the page, and definitions are listed on the right side of the page in random order. It is recommended that three or four additional definitions be included as distractors (and that students be informed that not all definitions will be used). Students complete the measure by matching each term on the left with its correct definition on the right.
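For readers who prefer to automate this step, the following Python sketch illustrates the construction process just described: terms listed on the left, shuffled definitions (including extra distractors) on the right. The terms, definitions, and distractors shown are hypothetical placeholders; an examiner would substitute vocabulary drawn from the student’s textbook glossary or unit overviews.

```python
import random

def make_vocab_matching_probe(term_definitions, distractor_definitions, seed=None):
    """Build a vocabulary-matching probe: terms on the left, shuffled
    definitions (plus distractors) on the right, lettered for matching."""
    rng = random.Random(seed)
    terms = list(term_definitions.keys())
    definitions = list(term_definitions.values()) + list(distractor_definitions)
    rng.shuffle(definitions)

    lines = ["Directions: Write the letter of the matching definition next to each term.",
             "Note: Not all definitions will be used.", ""]
    for i, term in enumerate(terms, start=1):
        lines.append(f"{i:>2}. {term}")
    lines.append("")
    for letter, definition in zip("ABCDEFGHIJKLMNOPQRSTUVWXYZ", definitions):
        lines.append(f"{letter}. {definition}")
    return "\n".join(lines)

# Hypothetical science unit terms; an examiner would pull these from the
# textbook glossary or unit overview.
unit_terms = {
    "photosynthesis": "process by which plants use sunlight to make food",
    "habitat": "the natural home of a plant or animal",
    "predator": "an animal that hunts other animals for food",
}
distractors = [
    "a tool used to measure temperature",
    "the path a planet takes around the sun",
    "water falling from clouds as rain or snow",
]

print(make_vocab_matching_probe(unit_terms, distractors, seed=1))
```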
ASSESSING LANGUAGE (LISTENING) COMPREHENSION

For students with reading difficulties, there are some situations in which it is important to examine their listening comprehension (i.e., their ability to comprehend oral language). This is particularly relevant when students have been referred for difficulties with reading comprehension. Vocabulary knowledge is certainly a large part of listening comprehension; however, vocabulary knowledge does not tell the whole story. Understanding language also involves familiarity with syntax (i.e., how word order affects meaning); knowing the meaning of phrases, idioms, and expressions (e.g., “on the other hand,” “it’s still up in the air”); understanding how verb tense, possessives, and pronouns change meaning; and other insights that allow us to comprehend oral language. Difficulties with these aspects of oral language are likely to affect reading comprehension as well. However, it is also the case that struggling readers can demonstrate average oral language comprehension but have significant difficulties comprehending what they read. Thus, from the standpoint of intervention development, it is important to determine when reading comprehension difficulties may be due, in part, to underlying difficulties comprehending oral language more generally. Examples of measures that can be used for evaluating listening comprehension are listed in Table 4.9. There is some variation in how the tests measure listening comprehension, but in general, the examiner reads sentences or brief passages to the student, and then asks the student questions to evaluate their understanding. A key factor is that students are not asked to read the content, because the aim of these tests is to measure a student’s ability to comprehend language without the potential barriers of reading. Students with underlying difficulties in oral language are likely to be identified early in their schooling (i.e., PreK, kindergarten) because their language comprehension can be readily observed in daily activities such as conversation or following directions. Early language difficulties are a risk factor for reading difficulties. However, practitioners may still encounter a smaller population of students with “late-emerging” reading difficulties (Catts et al., 2012). In a longitudinal study that followed students from kindergarten through grade 10, Catts and colleagues found that although most students with reading difficulties were identified before grade 4, a sizable portion of reading difficulties emerged in fourth grade or beyond. They found that these students with late-emerging reading difficulties tended to struggle more with reading comprehension, and although word reading difficulties were present, they tended to be less severe than for students identified earlier. A high proportion of students with late-emerging
reading difficulties had a history of oral language difficulties dating back to kindergarten. Catts et al. posited that some students may have been overlooked in early grades, or perhaps demonstrated language or reading difficulties that were not severe enough to be detected earlier on. Rather, their reading difficulties emerged in later elementary grades as texts became more complex and reading comprehension demands increased. These results argue for early language screening to identify students with early language difficulties, but they also indicate that practitioners should be aware of the possibility of late-emerging reading difficulties that are primarily due to issues with language.
Assessing Reading Comprehension Difficulties
By this point, having considered oral reading, word reading skills and the early-literacy subskills that make word reading possible, and language comprehension, the causes of a student’s reading comprehension difficulties are likely going to be fairly clear. Percentage-wise, the majority of problems with reading comprehension can be traced to inaccurate and inefficient word reading skills, which make reading text a slow, arduous, error-filled endeavor, which in turn significantly impedes one’s ability to understand and learn from print. In these cases, skill deficits will be apparent on measures of isolated word reading as well as measures of text reading. Indeed, when word reading is problematic, there is little to be learned from administering measures of reading comprehension because so much of the student’s reading difficulties are explained by their problems with word-level skills. Reading comprehension cannot be expected if students struggle significantly with reading words. There are other cases, albeit a smaller percentage of the time, in which students’ reading comprehension is low despite seemingly adequate word and text reading skills. In many instances, these reading comprehension difficulties may be explained at least partially by inadequate linguistic comprehension and background knowledge, as discussed above. However, there are still cases in which students may demonstrate average performance across all basic reading and language measures but still struggle to understand and learn from text. Examples of tests that can be used to assess reading comprehension are listed in Table 4.10. The response mode involved in each test is reported. As will be discussed later, response mode is one of the factors that influences which reading subskills play a greater role in student performance across different tests. Assessing reading comprehension can add to an evaluation of a student’s reading skills in several situations. However, measuring reading comprehension is not as simple as it sounds, or as straightforward as test developers would have you believe. There are several important factors to consider in assessing reading comprehension difficulties.
Pure Reading Comprehension Difficulties
First, it is important to caution readers that so-called “pure” comprehension difficulties (i.e., problems specific to reading comprehension) may not be as common as some educators believe. Evidence indicates that for most students, even students in secondary grades, low reading comprehension is accompanied by below-average word reading skills, reading fluency, vocabulary and linguistic comprehension, or a combination of factors (Brasseur-Hock et al., 2011; Cirino et al., 2013; Clemens, Simmons, et al., 2017; Hamilton & Shinn, 2003; Spencer et al., 2014). Difficulties in those subskills, especially
word reading, can play a significant role in students’ lack of response to comprehension interventions (Vaughn et al., 2020). Teachers may often refer to “word callers,” students they believe have adequate reading fluency but poor comprehension. However, studies indicate that when teacher-identified word callers are tested, most demonstrate below-average reading efficiency (Hamilton & Shinn, 2003; Meisinger et al., 2009). To be clear, it is entirely possible for reading comprehension difficulties to occur despite adequate word and text-reading skills. However, evaluators should be mindful that low performance on tests of reading comprehension may not necessarily mean a reading comprehension problem exists; a student’s low performance may be due to problems in basic reading skills or language areas that make reading comprehension possible.
TABLE 4.10. Examples of Tests of Reading Comprehension and Their Response Formats (formats include cloze, retell, multiple-choice, open-ended response, and sentence verification)

CBM vendors and measures
• Acadience: Maze
• AIMSweb: Reading Comprehension (computer-administered)
• DIBELS 8th ed.: Maze
• easyCBM: Common Core State Standards Comprehension
• easyCBM: Multiple Choice Comprehension
• FastBridge: CBMcomp
• FastBridge: COMPefficiency (computer-administered)

Norm-referenced, standardized
• Diagnostic Assessment of Reading–2
• Gray Oral Reading Tests, 5th ed. (GORT-5)
• Oral and Written Language Scales, 2nd ed. (OWLS-II)
• Process Assessment of the Learner, 2nd ed. (PAL-2)
• Test of Reading Comprehension, 4th ed. (TORC-4)
• Test of Silent Reading Efficiency & Comprehension (TOSREC)
• Wechsler Individual Achievement Test, 4th ed. (WIAT-4)
• Wide Range Achievement Test, 5th ed. (WRAT-5)
• Woodcock–Johnson IV Tests of Achievement (WJ IV ACH)
• Woodcock Reading Mastery Test, 3rd ed. (WRMT-3)

Note. This table provides a sample of the measures available for assessing reading comprehension. It is not a complete list, and there are other measures that, due to space constraints, could not be included.
Different Tests of Reading Comprehension Measure Different Things
A second factor to be aware of is that tests of so-called “reading comprehension” vary significantly in the skills they measure. Reading comprehension is a complex construct that is the result of interactions among a student’s reading skills, background knowledge, attention and engagement, the text’s characteristics, and situational demands. Tests of reading comprehension often measure only a small aspect of the construct. Tests frequently differ substantially from each other in terms of how they are administered, the types and lengths of the texts students are asked to read, the types of questions students are asked, how students are expected to respond, and other aspects. This variability in test formats and characteristics has implications for how reading comprehension tests are interpreted. Studies indicate that correlations among different tests of reading comprehension are moderate at best, far weaker than would be expected for tests that purport to measure the same thing (Clemens, Hsiao, et al., 2020; Francis et al., 2006; Keenan et al., 2008; Keenan & Meenan, 2014). Furthermore, several studies have found that student performance on different tests of reading comprehension is differentially influenced by component reading and language skills. For example, reading comprehension tests that use cloze tasks (i.e., identifying a word that belongs in a blank space in a sentence or paragraph) appear to be more influenced by word reading skills (Andreassen & Bråten, 2010; Clemens, Hsiao, et al., 2020; Francis et al., 2006; García & Cain, 2014; Keenan et al., 2008; Keenan & Meenan, 2014; Nation & Snowling, 1997; Spear-Swerling, 2004), whereas language skills appear to be more important on longer tests and those that require verbal responses to oral questions (Clemens, Hsiao, et al., 2020; Francis et al., 2006; Keenan et al., 2008). Tests that involve reading informational and expository texts appear to tap background knowledge to a greater extent than tests with narrative passages (Best et al., 2008). Tests with response formats requiring lower-level text processing (e.g., retell, sentence verification, and cloze) are often easier for students than tests that require responses to open-ended questions and multiple-choice questions (Collins et al., 2018). For these reasons, evaluators must be critical of any test that purports to measure reading comprehension and must consider its results carefully. No single test should be considered to adequately measure a student’s reading comprehension skills.
Caveats and Considerations to Assessing Reading Comprehension
Given the issues with measuring reading comprehension, is it still worthwhile to assess it as part of a reading evaluation? If the results will help plan intervention, the answer is “yes,” with the following caveats in mind: 1. Reading comprehension assessment is of little practical use when a student has significant difficulties with word reading skills (and text reading by extension), because comprehension cannot occur if text is not read with sufficient accuracy and efficiency. In these cases, a student’s low reading comprehension is explained at least partially (if not fully) by their difficulties with word and text reading. 2. If reading comprehension is assessed, evaluators should use measures that sufficiently tax reading comprehension and reflect how it would most commonly take place in the student’s natural reading context. This means that the reading comprehension
measures used should have students read passages (i.e., paragraphs or multiparagraph passages), not just single sentences. Measures that rely on single sentences can be too heavily influenced by a decoding error on a single word (Keenan et al., 2008). Additionally, single-sentence measures may not sufficiently tax the language comprehension, inference making, and knowledge integration involved in a typical reading situation. Measures that involve more reading provide a better view of reading comprehension. 3. Whenever possible, use multiple measures of reading comprehension that differ in their response formats and examine convergence in performance across the measures. For example, one measure may involve responding to multiple-choice questions, another may involve responding verbally to open-ended questions, and another might involve responding to cloze items. Using multiple measures with different response formats is one way to help mitigate problems of any one measure not adequately representing the construct, and problems of reading comprehension measures that are differentially influenced by different skills. We acknowledge that administering multiple measures of reading comprehension is not easy and may not be feasible. It certainly violates the notion of parsimony that we advocate in assessment. Therefore, using multiple measures of reading comprehension should be reserved for situations in which understanding reading comprehension skills is of particular interest and importance for developing an intervention. 4. It should always be remembered that reading comprehension is far more than a test score. Similar to how the words correct per minute metric has led to oversimplifications of reading fluency, reducing reading comprehension to a single test score risks similar types of problematic interpretations (Kamhi & Catts, 2017; Wixson, 2017). By design, reading comprehension tests are intentional reductions of a very complex construct, which test publishers must do to offer a measure that someone can feasibly administer and reliably score after merely reading the examiners’ manual. Scrutiny and consideration of what is measured by the test(s) selected, understanding that reading comprehension tests often measure things other than text processing, and considering test results in conjunction with data from the other measures of the reading assessment are important for appropriately understanding a student’s achievement at a given time, and their need for intervention in an area like reading comprehension.
Putting It All Together: A Framework for Interpreting Reading Assessment Results
Even with a good understanding of the keystone skills of reading development and how they interact, it can be challenging to draw conclusions from assessment results, especially when measures of different reading skill areas have been administered. There are some actions evaluators can take to aid interpretation. First, simplify your assessment by limiting it to the skills and measures that, driven by your hypothesis and an understanding of how reading skills develop, are most implicated in the student’s academic difficulties. There is no need to unnecessarily complicate your assessment. Parsimony is key. Use the sequence we suggested in Figure 4.1. Second, interpreting the results across multiple tests is made easier by using a common metric for the results, such as standard scores or percentile ranks. CBM tools do not have standard score options, but all major vendors currently offer percentile conversions for most of their measures. Converting scores on the measures to standard scores or percentiles allows them to be compared to each other and graphed to identify a student’s areas of relative strength and weakness. Graphing the results is helpful when the skills are
arranged hierarchically, from basic and foundational to more sophisticated, in a way that generally reflects reading development. Organizing the test results in this manner allows the clinician to identify areas of relative strength and weakness and, most importantly, the point in the sequence where skills have broken down. For some students, the primary and most important deficits occur in phonemic awareness and alphabetic knowledge, which have cascading negative effects on word reading and the development of more complex skills. For other students, their foundational skills may be adequate, but problems with reading more complex words or limited vocabulary knowledge explain higher-order skill difficulties such as reading comprehension problems. Organizing the results in this way helps one examine the evaluation more holistically, connects the evaluation to evidence of reading development, and allows for clearer identification of primary skill targets for intervention. Caution is advised when interpreting results from individual tests and subtests. Different tests will have different normative groups even when they are considered to be “representative” of the larger population, and the age of the norms can differ as well. Problems can also arise in comparing differences in subtests when the measurement error exceeds the score differences. However, comparing percentile performance across subtests and other measures (such as CBM tools) can be considered when the data are used for intervention development purposes, not for high-stakes decisions. At this stage in the process, the evaluator should revisit the hypothesis generated at the start of the assessment process. With the hypothesis in mind, data from the direct assessment are combined with other sources to determine whether the new information confirms the hypothesis, whether the hypothesis should be revised, or whether additional information is needed. We discuss reevaluations of the hypothesis at the end of this chapter.
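As a simple illustration of this kind of hierarchical interpretation, the sketch below lists percentile scores from foundational to more complex reading skills and flags the first area falling below a cutoff as the likely point of breakdown. The skill ordering, the scores, and the 25th-percentile cutoff are illustrative assumptions for a hypothetical student, not fixed decision rules.

```python
# Percentile scores arranged from foundational to more complex skills
# (the ordering and the below-average cutoff are illustrative assumptions).
skills_in_order = [
    ("Phonemic awareness", 48),
    ("Letter-sound knowledge", 45),
    ("Word reading (real words)", 18),
    ("Nonword decoding", 12),
    ("Oral reading fluency", 10),
    ("Vocabulary / listening comprehension", 52),
    ("Reading comprehension", 16),
]

BELOW_AVERAGE_CUTOFF = 25  # assumed percentile cutoff for "area of concern"

def summarize_profile(skills, cutoff=BELOW_AVERAGE_CUTOFF):
    """Print each skill with a flag and report the first (most foundational)
    point in the sequence where performance breaks down."""
    breakdown = None
    for name, pct in skills:
        flag = "LOW" if pct < cutoff else "ok"
        if flag == "LOW" and breakdown is None:
            breakdown = name
        print(f"{name:<40} {pct:>3}th percentile  [{flag}]")
    if breakdown:
        print(f"\nMost foundational area of concern: {breakdown}")
    else:
        print("\nNo areas fell below the cutoff.")

summarize_profile(skills_in_order)
```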
Common and Less Common Situations in Assessing Reading Skills

Most Common: Word and Text Reading Difficulties
Difficulties reading words and text accurately and efficiently represent the most common type of reading problem. As noted earlier, developing the skills to read words with automaticity represents the most significant challenge for a developing reader. It is the skill area in which reading difficulties and disabilities (e.g., dyslexia) most often occur. Problems with reading comprehension can frequently be traced back to problems reading text with efficiency, which in turn most often stem from problems with word reading. Word reading difficulties may be the result of underdeveloped phonemic awareness and alphabetic knowledge (especially letter–sound correspondence) and insufficient supported practice opportunities, which are necessary for adding word spellings to memory.
Less Common: Word Reading Skills Appear Adequate but Text Reading Is Slow
There may be select situations in which a student demonstrates high accuracy in reading words (e.g., reads approximately 97% or more of words correctly in grade-level text) but reads connected text slowly. These situations are uncommon because so much of reading text with efficiency depends on efficient word recognition skills. Still, it is a possibility. The first thing to determine is whether reading comprehension is an area of difficulty, or perhaps vocabulary and listening comprehension, because readers often slow down when they do not understand what they are reading. If reading comprehension is not a problem, then the evaluator should determine whether their oral reading is really an
issue that warrants additional attention and intervention. Some children just read more slowly than others. As long as they are comprehending and learning from texts they are expected to read, and completing work and tests in the allotted time, then there may be little reason for adding supplemental support or intervention for reading fluency. On the other hand, the student’s grade level should be considered. Text complexity and reading demands increase rapidly across the middle elementary grades. Students in second grade, for example, may demonstrate adequate comprehension despite low fluency, but their difficulties become more apparent when text length and difficulty increase in subsequent grades. For this reason, it may be helpful to consider additional practice reading text as a component of an intervention, and, at the very least, periodically monitor the student over time to identify when reading fluency deficits begin to pose a problem for their work completion, cause difficulty understanding and learning from a text, or both. Regardless, whether or not to consider intervention specific to text-reading efficiency should be contemplated carefully.
Less Common: When Reading Comprehension Appears Stronger than Word Reading or Oral Reading
Another less common situation occurs when a student’s performance on reading comprehension measures is stronger than on measures of word- and text-reading efficiency (e.g., average comprehension but below-average word reading). This situation may seem like one in which intervention is not warranted because comprehension is a goal of reading. In this case it is important to scrutinize the reading comprehension measures that were used and the student’s performance on them. Very often, published measures of reading comprehension do not sufficiently tax reading skills because they involve passages that are shorter or simpler than what students are expected to read in their natural setting, thus allowing students with strong language and background knowledge to correctly answer comprehension questions despite relatively lower word reading skills. This is especially true of reading comprehension tests administered to students in the early elementary grades, where passages may consist of only a sentence or two and lack complexity. In these instances, attend to a student’s below-average word and text reading skills as needed, because their reading difficulties may become more significant as text difficulty increases. Examiners might also consider using a reading comprehension measure that involves reading demands more similar to what the student is expected to read during instruction.
Less Common: Difficulties in Reading Achievement Despite Adequate Reading Skills
It is still possible for students to struggle academically in subjects that involve reading (e.g., English and language arts, social studies and history, or science) despite demonstrating adequate performance in all the skill and knowledge areas that make reading possible. In such situations, the student may be experiencing issues related to attention, motivation, poor study skills, emotional difficulties, stressful situations within or outside of school, or some combination of these factors. Reading anxiety, for example, is experienced by some students, and it affects their achievement (e.g., Ramirez et al., 2019). In these cases, it is likely that difficulties will not be specific to reading and will be associated with low performance in other academic areas. Practitioners may see this situation occur more often as students move through the middle and secondary grades. Observing
this pattern requires attention to the student’s skills and behaviors that support learning, including engagement, motivation, and emotional well-being.
Survey‑Level Assessment and Determining Instructional Level in Reading
One of the goals of a reading assessment is to indicate the approximate grade level of text that will be most effective for instruction. This applies most to the texts used with the student for reading practice, particularly the texts assigned to students to practice skills learned in word reading instruction, improve reading fluency, and build skills in integrating vocabulary knowledge or strategies to improve their reading comprehension. Defined as the instructional level, this is the level of text difficulty at which the student is likely to be challenged but capable of making progress with sufficient instruction and feedback from the teacher or another skilled reader. In contrast, providing texts at a much higher level would result in too many errors and would be too difficult for the student to learn from effectively. Text that is too difficult for instruction is sometimes referred to as being at a frustrational level. Providing text that is too easy (e.g., the student reads all words with 100% accuracy) does not present sufficient challenge to be effective for instruction because there is little to learn. Content that is too easy for instruction has been referred to as being at a mastery level, but a more preferable term is independent, because this level of content is appropriate for the student to read in independent practice opportunities when a teacher is not present to provide support. A survey-level assessment is aimed at determining these levels.
Perspectives on the Importance of Text Difficulty
Perspectives on the appropriate level of text difficulty have been a matter of debate. It is important first to differentiate recommendations on text difficulty made for general education reading instruction from those made for interventions provided to struggling readers. Historically, some scholars emphasized the importance of finding the “just right” level of text for reading skills to improve. Reading programs emerged over the years that stressed the importance of “leveled” texts, which suggested that students could be grouped based on the level of text they read successfully, and that students needed to master reading texts at one level before moving to the next. Although there is agreement that text that is too difficult is not useful for instruction, some have advocated that, to help students advance their reading skills, teachers should aim for texts that are on the more challenging side in terms of complexity, assuming that the teacher is present to support students’ reading (Shanahan, 2021). Overall, contemporary perspectives on text difficulty in general education indicate less rigidity in trying to get text difficulty just right and lean toward using more challenging texts. Slightly different thinking is warranted for students with reading difficulties. Studies have observed that struggling readers experience more growth in reading when texts used for intervention are within their instructional level (Hiebert, 2005; O’Connor et al., 2002). Reading accuracy (i.e., the percentage of words students read accurately in the text) tends to be the primary factor to consider when making decisions about instructional level for struggling readers (Rodgers et al., 2018), not necessarily the complexity of the ideas or subject matter discussed in it. This makes sense, given that word reading is often the primary limiting factor in reading connected text. The level of support available from the teacher also matters, as will be discussed below.
Survey‑Level Assessment
In addition to informing the levels of text appropriate for intervention, a survey-level assessment also helps identify the text level most suitable for progress monitoring. For students with reading difficulties, the text best suited for monitoring progress is usually at the student’s grade level so that progress toward grade-level expectations can be monitored. However, there are situations in which a student’s reading level is so low that monitoring progress with grade-level materials is of little use because error rates are so high. More detail on determining the grade level of passages for monitoring progress is provided in Chapter 7. In short, a survey-level assessment is less about identifying why the student is struggling in reading; the assessment conducted prior to this point is used for that. Rather, the purpose of the survey-level assessment is to identify the level of reading material appropriate for intervention and for monitoring progress. In reading, a survey-level assessment generally consists of administering a series of reading passages, usually beginning at the student’s grade level (although starting below the student’s grade level is advisable when referral information clearly indicates that the student has significant reading difficulties), and then administering passages up or down in grade levels until the evaluator identifies the grade levels of materials at which the student demonstrates “independent” performance (too easy for instruction), “frustrational” performance (too difficult), and “instructional” performance (just right). Criteria for determining these levels have been defined in different ways over the years and have included comparisons to local normative data, comparisons to national normative data, and considering the proportion of words read correctly. Given that reading words accurately is so crucial for reading proficiency, and the most common area in which reading difficulties occur, arguably the most important determinant of what makes reading material effective for intervention is the proportion of words read correctly (Gickling & Armstrong, 1978; Rodgers et al., 2018; Treptow et al., 2007). Comparisons to normative fluency data are a secondary consideration.
Survey‑Level Assessment Using the Curriculum of Instruction versus Generic Passages
Historically (and in previous editions of this text), an examiner conducted a survey-level assessment by creating reading passages from the student’s curriculum of instruction. The creation of one’s own reading passages is time-consuming, and as discussed earlier, evidence that emerged over the years indicated that generically developed passages are just as valid as, and in some cases more reliable than, passages created by a user (L. S. Fuchs & Deno, 1994; Hintze & Christ, 2004). When users have access to publishers of CBM materials, there is no longer a need to create passages directly from the curriculum. Nevertheless, special situations may still exist in which the evaluator wishes to pull passages directly from the curriculum. For this reason, the procedures for deriving passages are provided in Appendix 4A at the end of this chapter. Passages suitable for survey-level assessment are available commercially from several sources, such as Acadience, AIMSweb, DIBELS, easyCBM, FastBridge, and QuickReads (Savvas.com). When using passages from CBM publishers, it is important to use passages allocated for progress monitoring, not passages reserved for universal screening (i.e., so-called “benchmark testing”). Passages reserved for universal screening should only be used for screening purposes; using them outside of that situation can invalidate their use for screening.
CONDUCTING A SURVEY‑LEVEL ASSESSMENT FOR READING
To conduct a survey-level assessment for a student with reading difficulties, the examiner should first gather a set of reading passages. This set should include (1) passages from the student’s current grade level and (2) passages for each of the two to three grade levels below the student’s current grade. For example, for a third grader, the examiner would make passages from the third, second, and first grades available. For older students, additional passages from lower grade levels can be kept on hand if the examiner suspects the student’s reading level may be several grade levels below their current grade. The evaluator should begin with a passage from the student’s current grade level. Grade-level passages may have already been administered as part of the student’s evaluation in earlier steps; however, these data should only be used for the survey-level assessment if the passages came from the same set being used in the survey-level assessment (e.g., DIBELS). Even for a student who you are certain is reading below their grade level, it is still a good idea to start with the student’s grade level and work backward as needed. This takes the guesswork out of where to begin and also provides data on the student’s fluency and accuracy in grade-level text, which is needed anyway for describing the student’s present levels of academic performance. Before beginning the assessment, the evaluator should tell the student that they are going to be asked to read and should do their best. If the evaluator is going to ask comprehension questions for a particular passage, the student should be told before beginning. The evaluator should then place the student version of the first reading probe in front of the student and administer it as indicated in the publisher’s materials. Passages are administered according to the administration and scoring rules associated with each passage set; therefore, examiners should carefully read the instructions beforehand and be ready to administer exactly as the publisher describes. Overall, CBM oral reading probes are scored in a similar manner and involve recording reading errors (by making a slash through any words read in error) and the number of words read correctly in 1 minute. The following scoring rules and error types are generally consistent across passage sets:

1. Correct words. Any word must be pronounced correctly for the context of the passage. Words must be read smoothly as a whole unit to be considered correct. A word that is sounded out is not considered correct unless it is then blended together as a whole unit, but repeated instances of this should be noted because they indicate insecure word reading skills. Words like read and bow must be pronounced correctly for the context of the passage (e.g., if the passage says, “He read the book,” the student’s pronunciation of read must sound like red to be counted as correct). If the student self-corrects an initial mispronunciation within 3 seconds, it is counted as correct. Repetition of words should not be marked as an error, but examiners ought to make a note if this happens repeatedly.

2. Errors. Overall, any word read incorrectly or skipped is scored as an error. This includes the following:
• Omitted words: Skipped words are counted as errors. If the student omits an entire line of text, the evaluator should redirect them to the line as soon as possible and mark one error. If the evaluator cannot redirect the student, the omission should be counted as one error and not as an error for each word missed.
• Substitutions: Saying another word in place of a word in the passage is counted as an error. If the student omits suffixes such as -ed or -s in a word, the word should be marked as an error.
• Mispronunciations: Any word mispronounced or pronounced incorrectly is counted as an error.
• Insertions: If a student adds a word that is not in the passage, this is noted but not scored as an error. For example, if the passage reads, “The dog ran” and the student reads, “The big dog ran,” the insertion of the word big is noted but not scored as an error.
• Hesitations and struggling with a word: If the student hesitates on a word or struggles to pronounce it, the examiner provides them with 3 seconds to respond. If the student has not read the word correctly after 3 seconds, the examiner should supply the word and count it as an error.
At the end of 1 minute, the student is instructed to stop reading and the examiner marks the last complete word the student read before the timer expired. The probe is scored in terms of the number of words read correctly within 1 minute. Accuracy is scored in terms of the number of words read correctly out of the number of words the student encountered (i.e., words read correctly/number of words encountered × 100). Next, the examiner administers a passage from the next lower grade level. When using probes from standard passage sets provided by CBM publishers (e.g., Acadience, AIMSweb, DIBELS, easyCBM, FastBridge), one passage per grade level is sufficient for obtaining reliable estimates of reading fluency (Ardoin et al., 2004). These passage sets have undergone an extensive development and testing process that makes one passage sufficient, especially considering the low stakes of the decisions made based on survey-level assessment (i.e., it is not difficult to adjust up or down in reading content later after the intervention has begun). However, if the examiner believes that one passage per grade level is insufficient for providing a reliable estimate (e.g., the student is easily distracted, their score on a passage seemed atypical), then three passages per grade level can be administered and the median score from the three passages is used.
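The scoring arithmetic described above is easy to verify with a few lines of code. In this sketch, words correct per minute (WCPM) is the count of words read correctly in the 1-minute sample, accuracy is words correct divided by words encountered, and the median is taken when three passages are administered at a grade level; the probe scores shown are made-up values for illustration.

```python
from statistics import median

def accuracy_percent(words_correct, words_encountered):
    """Accuracy = words read correctly / words encountered x 100."""
    return 100.0 * words_correct / words_encountered

# Hypothetical 1-minute probes at one grade level (three passages administered
# because the examiner questioned the reliability of a single passage).
probes = [
    {"words_correct": 52, "errors": 6},
    {"words_correct": 47, "errors": 8},
    {"words_correct": 55, "errors": 5},
]

wcpm_scores = [p["words_correct"] for p in probes]
accuracy_scores = [
    accuracy_percent(p["words_correct"], p["words_correct"] + p["errors"])
    for p in probes
]

print(f"Median WCPM: {median(wcpm_scores)}")
print(f"Median accuracy: {median(accuracy_scores):.1f}%")
```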
INTERPRETING THE DATA TO DETERMINE INSTRUCTIONAL LEVEL

Prior editions of this text indicated that the instructional level represented ORF between the 25th and 75th percentiles. Performance below the 25th percentile was considered frustrational, and performance above the 75th percentile was considered independent (i.e., “mastery”). There are some problems with this perspective. First, no empirical basis exists for this range as a determinant of instructional level. Second, it is possible this set of decision rules will result in identifying text that is not challenging enough for intervention (e.g., if the student’s performance fell at the 65th percentile). Third, although the WCPM score inherently includes accuracy, using a fluency range does not consider the proportion of words that the student read correctly, which from an intervention standpoint is a more important factor. The one way in which normative fluency data can be informative is in evaluating whether the student’s reading rate is above average (i.e., above the 50th percentile), which would suggest that the text is not challenging enough for intervention or that intervention is not necessary. The rationale is that the instructional level should fall somewhere on the lower end of skill performance relative to expectations so that there is room to grow.
Considering reading accuracy (i.e., the percentage of words read correctly in a passage of text) improves practical decisions regarding the level of text to use for a reading intervention. Levels of reading accuracy that reflect frustrational, instructional, and independent reading have been considered and debated for decades. However, research and scholars’ perspectives have generally converged on the following recommendations (Burns, 2007; Gickling & Armstrong, 1978; Morris et al., 2010; Rodgers et al., 2018; Treptow et al., 2007), summarized in Figure 4.3.

1. Frustrational level. In frustrational-level text, reading accuracy is at or below 90%, which translates to one error (or more) for every 10 words, or 10 errors for every 100 words. Rodgers et al. (2018), in their review of reading accuracy studies, observed detrimental effects on students’ reading progress and motivation when accuracy was below 90%. If a reader cannot achieve this level of accuracy even in the simplest texts, intervention should target foundational skills in early literacy and word reading and provide close support to build the student’s ability to read basic texts.

2. Independent level. Text read with greater than 97% accuracy can be considered independent-level. Ninety-eight percent accuracy translates to two errors for every 100 words, an error rate that should in most cases allow students to maintain understanding while they read. This level of accuracy is sufficient for the student to read the material independently (i.e., without supervision by a teacher or skilled reader) or with a peer partner.

3. Instructional level. Given the ranges for frustrational- and independent-level text above, the highest grade-level text in which the student reads with accuracy generally between 91 and 97% would be considered instructional-level (i.e., roughly no more than 1 error in every 10 words, but not near-perfect accuracy). This is text that students can read with a teacher or skilled reader who is there to provide support, instruction, affirmative feedback, and immediate error correction. This level of accuracy provides enough opportunities for growth because errors are learning opportunities. As in strength training, where too little weight does not offer enough resistance to build strength and too much weight cannot be lifted, an instructional level of generally between 91 and 97% reading accuracy can be viewed as a “sweet spot” for reading growth. This is consistent with prior work, give or take a couple of percentage points (Burns, 2007; Gickling & Armstrong, 1978; Morris et al., 2013; Rodgers et al., 2018; Treptow et al., 2007).
FIGURE 4.3. General guidelines on determining instructional-level text in reading. Frustrational level (most difficult text): many words are unreadable, reading is laborious, and accuracy is below 90%; this text should be avoided for instruction or practice. Instructional level: the student’s reading accuracy is between 91 and 97%; this text is good for teaching and reading activities in which the teacher can actively support and correct errors. Independent level (easiest text): the student reads with ease and accuracy is 98% or better; this text is good for partner reading, cooperative work, and independent practice.
We emphasize that determining the instructional level for intervention in reading is an inexact science, because reading accuracy and fluency vary across texts even when passages are well controlled for difficulty. Examiners may need to make their best data-based guess regarding what level of content is appropriate for reading practice. Our recommendations on the reading accuracy ranges noted above should be viewed as approximations. It is also important to consider the student’s tolerance for frustration and self-efficacy in making these decisions. For a student suffering from very low motivation due to persistent difficulty and embarrassment with reading, it may be better to use passages that the student reads more accurately to build confidence, and to introduce more challenge as skills improve. On the other hand, text that is more challenging may be appropriate for a student with good motivation who is less easily frustrated. Sometimes a student may appear to be instructional across several grade levels. When it is difficult to determine the instructional level between two grade levels, select the higher one. For example, a pattern for a fourth-grade student might look like this: grade 4 = frustrational; grade 3 = instructional; grade 2 = instructional; grade 1 = independent. In this case, grade 3 material would be indicated as “instructional” because it is the highest grade level at which the student’s performance is instructional. Grade 1 material could be used for independent reading opportunities. The text used for intervention can always be adjusted up or down, as needed, at a later point. We reiterate that this process is used to determine the grade level of connected text used for instruction and reading practice. This process and the guidelines described above do not apply to the selection of words or programs for word reading instruction, because the majority of those words should be unknown if they are targeted for instruction. Rather, this process is applicable for determining the appropriate difficulty of passages used for practice applying decoding skills to connected text, building text reading efficiency, and opportunities to practice integrating new vocabulary, background knowledge, or strategies to improve text comprehension. Additionally, the process described above is not the same as determining the level of text appropriate for monitoring student progress over time. Although similar, identifying passages for progress monitoring involves other considerations and is described in Chapter 7.
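A minimal sketch of how these survey-level decision rules might be applied is shown below: each grade level’s reading accuracy is classified using the approximate 90% and 97% boundaries, and the highest grade level classified as instructional is selected. The survey results are hypothetical, and the thresholds should be treated as the approximate guidelines described above, adjusted for the student’s frustration tolerance and motivation.

```python
def classify_accuracy(accuracy):
    """Approximate guidelines: <=90% frustrational, 91-97% instructional,
    >97% independent (boundaries are approximate, not exact cutoffs)."""
    if accuracy <= 90:
        return "frustrational"
    elif accuracy <= 97:
        return "instructional"
    return "independent"

def pick_instructional_level(survey_results):
    """survey_results maps grade level -> reading accuracy (%).
    Returns the highest grade level classified as instructional, if any."""
    instructional = [
        grade for grade, acc in survey_results.items()
        if classify_accuracy(acc) == "instructional"
    ]
    return max(instructional) if instructional else None

# Hypothetical survey-level results for a fourth grader.
results = {4: 88, 3: 93, 2: 96, 1: 99}

for grade, acc in sorted(results.items(), reverse=True):
    print(f"Grade {grade}: {acc}% accuracy -> {classify_accuracy(acc)}")

print(f"Recommended instructional-level text: grade {pick_instructional_level(results)}")
```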
DIRECT ASSESSMENT OF MATHEMATICS SKILLS

Assessing students’ mathematics skills is informed by an understanding of the keystone skills of mathematics development described in detail in Chapter 2 (see Figure 2.3). For students referred for mathematics difficulties, the assessment should identify the skill areas that are most problematic but also relevant to the student’s current stage of mathematics development and the expectations for their particular grade. For instance, for a second grader referred for mathematics difficulties, fraction operations are not relevant (although fraction concepts may be taught in early grades, fraction operations are not typically taught until third grade at the earliest), but number combinations, procedural calculations, basic word problems, and basic geometry and measurement are very relevant.
In addition to understanding the keystone skills and their interrelations across mathematics development, understanding what skills are relevant for assessment and intervention is aided by examining mathematics expectations across grade levels. Compared to reading, mathematics development is more dependent on instruction for a longer period of time, and mathematics proficiency in various skills often depends on having been taught those specific skills. Therefore, understanding what is expected of students is helpful for knowing what skills to assess and when. Today, 41 states in the United States use the Common Core State Standards (CCSS), which delineate the academic standards and learning goals for students in kindergarten through grade 12 in mathematics and reading/language arts. States that have not adopted the CCSS often have their own sets of standards that closely resemble the CCSS. Other countries have similar types of standards and learning expectations. These standards (either the CCSS or state-specific standards) define what students should know by the end of each grade level, and therefore are a good resource for assessing mathematics skills. Other sources of information for determining expected mathematics skills across grade levels can be found in the scope and sequence materials of the mathematics curriculum used by the school. Although it is likely that most curricula will be aligned with either the CCSS or state standards (which are often requirements for curriculum adoption), these standards frequently represent year-end objectives. Accessing the scope and sequence for a specific curriculum is helpful for determining what skills can be expected at a given time of the school year. Curriculum-specific objectives are also important for students in private schools and for students being home-schooled. Figure 4.4 provides our suggested approach for planning an assessment of mathematics skills, overlaid on the keystone model discussed in Chapter 2. The general approach taken here focuses primarily on the following skills: (1) fluency with number combinations, (2) procedural calculations, (3) word-problem solving, and (4) when relevant or indicated, early numerical competencies, fractions, and geometry and measurement. We focus on these areas because (1) they are the areas in which problems occur most often for students referred for mathematics difficulties—fluency with number combinations, procedural calculations, and word-problem solving are the most common areas of difficulty (Andersson, 2008; Geary et al., 2012; L. S. Fuchs, D. Fuchs, et al., 2008); and (2) they are particularly important for making higher-order mathematics skills possible. Fluency with number combinations in particular is a core deficit of students with mathematics difficulties and disabilities across grade levels (Andersson, 2008; Geary et al., 2012; Mabbot & Bisanz, 2008), and is critical to the development of more complex forms of mathematics. Hence its central role in our keystone model; unless the examiner has information that suggests otherwise, it is a good place to start the assessment. Readers will see that starting the assessment by assessing fluency with number combinations is akin to beginning a reading assessment by measuring the student’s oral reading—the results say a great deal about the student’s difficulty in the overall academic domain, and how to approach the rest of the assessment.
As noted in Figure 4.4, if difficulties with number combinations are observed, the examiner would work backward to identify problem areas with foundational skills. The referral concern may also include other mathematics skill domains. Problems with number combinations can help explain poor performance in those domains; however, it can also be useful for designing an intervention to assess students’ skills in relevant areas, especially procedural computation and word-problem solving. As in other skill areas, strive for simplicity and parsimony—focus on mathematics skills most relevant to the problem, using tests that most directly measure those skills.
FIGURE 4.4. Suggested approach to assessing mathematics skill difficulties, overlaid on the keystone model of mathematics. For students in grades 1–12, the assessment starts with fluency with number combinations (“math facts,” both accurate and efficient) and the skill area(s) of concern, especially procedural computation and word-problem solving; for kindergartners, it starts with early numerical competencies (counting, numeral identification, quantity, place value). Problems with accuracy or fluency in these foundational skills can be an underlying cause of difficulties in more complex skills (rational numbers, especially fractions; geometry and measurement; algebra; and advanced mathematics), and language (vocabulary and linguistic reasoning skills) contributes across the model.
Assessing Fluency with Number Combinations
As discussed at length in Chapter 2, the term number combinations (i.e., “math facts”) refers to single-digit operations in addition, subtraction, multiplication, and division. The fluency aspect refers to automaticity in retrieving answers from memory. According to the CCSS, fluency with addition and subtraction number combinations within 10 is expected by the end of first grade, and within 20 by the end of second grade. Fluency with multiplication and division number combinations within 100 is expected by the end of third grade. Examples of measures useful for measuring fluency with number combinations are listed in Table 4.11. Obviously, the fluency aspect of the skill means that measures should be timed, and scores should be calculated in terms of the number of correct responses within the time limit. Some measures, such as the Math Facts Fluency subtest from the Woodcock–Johnson IV, start with number combinations in addition and subtraction, and progress to include multiplication and division. Evaluators should assess number combinations for whatever operations are relevant given the student’s grade level: If the student is in second grade or below, the test should include addition and subtraction facts. If the student is in grade 3 or above, the test should include addition, subtraction, and multiplication. For older students, performance with division facts may be of interest as well; although they are not always targeted with the same level of instruction and practice as multiplication facts, division facts can provide information on the extent to which students generalize their knowledge of multiplication. Ideally, a test should provide separate scores for fluency with number combinations in each type of operation (or allow for error analysis) so that the examiner can compare the student’s strengths and weaknesses in fluency across the types of operations, which can help pinpoint intervention recommendations. Number combination fluency measures can also be developed by the examiner (although this is only recommended if no other published measures are available).
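If an examiner does need to create their own probe, a generator along the following lines would suffice. The sketch follows the guidance above on which operations to include by grade band (addition and subtraction through grade 2, multiplication added at grade 3, and division optionally for older students); the item count and layout are arbitrary choices for illustration, not a published format.

```python
import random

def generate_number_combination_probe(grade, n_items=40, include_division=False, seed=None):
    """Generate single-digit 'math fact' items appropriate for the grade level."""
    rng = random.Random(seed)
    operations = ["+", "-"]
    if grade >= 3:
        operations.append("x")
        if include_division:
            operations.append("/")

    items = []
    for _ in range(n_items):
        op = rng.choice(operations)
        a, b = rng.randint(0, 9), rng.randint(0, 9)
        if op == "-":
            a, b = max(a, b), min(a, b)          # keep differences non-negative
        if op == "/":
            b = rng.randint(1, 9)
            a = b * rng.randint(0, 9)            # keep quotients whole numbers
        items.append(f"{a} {op} {b} = ____")
    return items

for item in generate_number_combination_probe(grade=3, n_items=10, seed=2):
    print(item)
```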
TABLE 4.11. Examples of Measures and Tests Available for Assessing Number Combinations Fluency (i.e., “Math Facts”)

CBM/CBA vendors and measures
• AIMSweb Plus: Math Facts Fluency
• FastBridge: CBMmath Automaticity
• Fuchs Research Group: Computation Fluency
• mClass: Math
• SpringMath

Norm-referenced, standardized
• Assessing Student Proficiency in Early Number Sense (ASPENS)
• Feifer Assessment of Mathematics
• Kaufman Test of Educational Achievement, 3rd ed.
• Process Assessment of the Learner 2 Math
• Test of Early Mathematics Ability, 3rd ed.
• Wechsler Individual Achievement Test, 4th ed.
• Woodcock–Johnson IV Tests of Achievement

Note. This table provides a sample of the measures available for assessing number combinations fluency; it is not a complete list.
Computer programs can automatically generate mathematics number combination probes across various operations, such as Mathematics Worksheet Factory, available from Schoolhouse Technologies (www.schoolhousetech.com); Aplus math (www.aplusmath.com); and the mathematics worksheet generator provided by Intervention Central (www.interventioncentral.org). Commercially available measures can be used as well. An advantage to using commercially available measures is that they will often have normative data, allowing for conversions to standard scores and percentiles. Percentile scores are useful for indicating the severity of the student’s difficulties, as well as their relative strengths and weaknesses, in number combination fluency. Below-average scores in number combinations fluency should be considered a concern, a likely explanatory variable for difficulties in more complex operations and skills, and a target for intervention. Additional information useful for designing intervention can be gleaned from examining the student’s performance on the measure. Low-scoring students whose responses are highly accurate (i.e., above 95%) in solving number combinations but very slow in completing problems are students who understand a procedure for determining the answers (e.g., using counting strategies), but have not yet committed them to memory. Intervention should therefore work to improve the student’s automaticity. On the other hand, a low-scoring student who makes a lot of errors or skips problems is likely a student who lacks a fallback strategy for determining the answer. Intervention in this case would first work to establish accuracy by equipping the student with (1) conceptual knowledge of the operations and (2) a reliable counting strategy for solving number combinations (see Chapter 6). Fluency can be targeted as accuracy improves.
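The decision logic in the preceding paragraph can be summarized in a short helper like the sketch below. The 95% accuracy criterion comes from the discussion above; treating performance below the 25th percentile as “below average” is an assumption added here for illustration.

```python
def suggest_fact_fluency_focus(correct, attempted, fluency_percentile,
                               accuracy_threshold=95.0, below_average_percentile=25):
    """Suggest an intervention focus from number-combinations performance:
    accurate-but-slow responding points to automaticity (fluency) practice,
    whereas frequent errors point first to conceptual knowledge and a
    reliable counting strategy."""
    accuracy = 100.0 * correct / attempted if attempted else 0.0
    if fluency_percentile >= below_average_percentile:
        return "Fluency not a primary concern"
    if accuracy >= accuracy_threshold:
        return "Accurate but slow: target automaticity (timed practice with known strategies)"
    return "Many errors or skipped items: build accuracy first (concepts and a counting strategy)"

print(suggest_fact_fluency_focus(correct=28, attempted=29, fluency_percentile=10))
print(suggest_fact_fluency_focus(correct=14, attempted=25, fluency_percentile=8))
```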
Assessing Early Numerical Competencies

Early numerical competencies (Powell & L. S. Fuchs, 2012) refer to a student’s basic understanding and skills with numbers. These skills have also been referred to as number
sense and early numeracy. Skills and concepts in this domain include counting, number identification, number comparison and quantity discrimination, and basic conceptual knowledge of addition and subtraction. We use the term early numerical competencies to refer to this skill domain; it is preferable in some ways to number sense because the skills involve more than a “sense” of numbers, and it is preferable to early numeracy because it refers to the importance of attaining competence in basic number concepts and skills and not just the ability to count. As shown in Figure 4.4, and discussed in detail in Chapter 2, early numerical competencies form the groundwork for the mathematical concepts and skills that follow. Most central are counting skills, understanding relations between numbers, and comparing quantities. These skills provide a basis for learning addition and subtraction operations, recognition of the component nature of numbers (which is foundational to number decomposition and place value), and other key concepts on which mathematical operations are built (Gersten et al., 2012; Jordan et al., 2003).

Whether to assess early numerical competencies depends on the level of mathematics skill development of the referred student. For students in kindergarten and first grade who are referred for mathematics difficulties, it is recommended. For students in grades 2 and 3, it could be helpful if the student demonstrates significant difficulties with number combinations and other mathematics skills. However, several of the tests relevant for early numerical competencies may not have normative data that extend to middle elementary grades, which should be taken into consideration. Regardless of grade level, it is key for the evaluator to consider whether the student’s mathematics difficulties may be due, in part, to inadequate foundational conceptual knowledge and skills with whole numbers. If so, assessing early numerical competencies may be useful for designing the intervention.

Examples of measures for assessing early numerical competencies are listed in Table 4.12. As noted earlier, tests of this sort may go by different names. Because early numerical competencies involve several subskills, many test batteries offer several subtests or individual skill measures. Some measures are designed as screening tools for use in class- or schoolwide screening efforts (i.e., in MTSS models); however, they could be used for individual assessment as well. The specific skills or skill domains tested by each are listed in the table. Readers will note that some include tests of basic number combinations, which are considered by some to be included in the domain of early numerical competencies. Not all skills or subtests contained in batteries of early numerical competencies have extensive research supporting their validity. Some early numerical competencies have more evidence as indices of overall early mathematics knowledge and skills, and are more predictive of subsequent mathematics achievement.
Skills that appear more important to assess include oral counting and cardinality (i.e., the last number counted represents the number in a set), strategic counting (i.e., counting on from a given number, assessed by “missing number” types of measures), comparing numbers in terms of magnitude (i.e., determining which number is greater), number identification, and solving basic addition and subtraction operations (e.g., Chard et al., 2005; Clarke & Shinn, 2004; Geary, 2011; Gersten et al., 2012; Lembke & Foegen, 2009; Schneider, Beeres, et al., 2017). Below-average scores in these early numerical competency areas provide indications that deficits in foundational mathematics knowledge may explain, at least partially, the reasons for the student’s mathematics difficulties. This could make them targets for intervention, in conjunction with intervention in more complex skills that are problematic.
TABLE 4.12. Examples of Measures for Assessing Early Numerical Competency

Source and skills measured:
• Acadience: Quantity discrimination, number identification, next number, missing number
• AIMSweb Plus: Number naming, number comparison
• ASPENS: Number identification, magnitude comparison, missing number, basic facts and base-10
• easyCBM: Numbers and operations (mixed skills for K and 1)
• FastBridge: Subitizing, counting objects, match quantity, number sequence, numeral identification, equal partitioning, verbal addition/subtraction, story problems, place value, composing and decomposing, quantity discrimination
• Feifer Assessment of Mathematics: Oral counting (forward and backward), numeric capacity (memory), sequences/patterns, object counting, subitizing, number (quantity) comparison
• KeyMath-3: Numeration (counting, number identification, magnitude, number understanding), and other subtests that include content applicable to kindergarten and first grade
• mClass Math: Oral counting, quantity discrimination, missing number, next number
• Number Sense Screener (Jordan et al., 2010): Counting, number recognition, number knowledge (next number and quantity discrimination), nonverbal addition/subtraction, addition/subtraction story problems (single score)
• PAL-II Math: Number Sense (counting skills, fact retrieval, and other tests)
• Preschool Early Numeracy Screener (PENS): Counting (one-to-one correspondence, cardinality), number relations (numeral and set comparison), arithmetic operations (story problems, formal addition)
• SpringMath: Various early numerical competency skills
• Test of Early Mathematics Ability, 3rd ed.: Numbering (counting), number comparison, numeral literacy, math facts, calculation skills, understanding of concepts

Note. This table provides a sample of the measures available for assessing early numerical competency. It is not a complete list, and there are other measures that, due to space constraints, could not be included.
Assessing Skills in Procedural Computation

Procedural computation refers to mathematics calculation problems that involve multiple steps or algorithms. This primarily includes multidigit operations in addition, subtraction, multiplication, and division. Understanding of place value and the base-10 system is central to procedural computation, as a stronger understanding of base-10 provides students with more flexibility and insight for solving multidigit computation problems. Fluency with number combinations is also a critical aspect for success, as a multidigit
computation problem involves numerous instances of adding, subtracting, or multiplying two numbers. Students who can immediately recall answers to number combinations therefore have an advantage in procedural computation. According to the CCSS, fluency in using strategies and algorithms for adding and subtracting numbers within 1,000 is expected by the end of third grade, and solving multidigit multiplication and division problems is expected by the end of fourth grade. Given the length and complexity of these types of problems, and the need for effective instruction and practice opportunities, it is not surprising that procedural computation is a common area of concern for students with mathematics difficulties.

Tests that measure procedural computation skills are common and can be found in almost any mathematics assessment that is not specifically focused on early numerical competencies. Given their ubiquity, and the fact that the tests are listed in the other tables in this chapter, we do not list them again here. Measures and subtests may be referred to in different ways, but generally they will include terms such as mathematics computation, calculation, or operations. It is common for all types of operations to occur within the same subtest, and tests are often constructed to increase in difficulty such that computation problems become longer and more complex as students move through the test. Some may include computation problems involving rational numbers (i.e., fractions, decimals) and other quantities. Tests that examine a broad range of computation skills are common in broadband batteries that measure multiple academic skills, such as the WJ-IV, WIAT-4, KTEA-3, and WRAT-5, and can provide a sense of the student’s overall functioning in this area, but may not be as useful if the evaluator wishes to examine the student’s skills in a more specific area of computation (e.g., multidigit addition and subtraction). Test batteries focused on mathematics, such as the KeyMath-3 and Comprehensive Mathematical Abilities Test, are more likely to include subtests more specific to a type of operation. SpringMath offers an array of procedural computation probes specific to types of problems. Publishers of CBM mathematics tools (Acadience, AIMSweb Plus, easyCBM, FastBridge, mClass) include mixed computation skills measures in keeping with a general outcomes approach to measurement.

Procedural computation is a skill area in which curriculum–test overlap should be considered. This skill domain is highly dependent on instruction and practice; students cannot be expected to complete certain operations if they have never been exposed to them in instruction. Therefore, the teacher interview and a review of the curriculum scope and sequence or grade-level standards will be useful in identifying a procedural computation test relevant to the types of skills targeted in instruction. The measures included in the mathematics achievement tests described in this section will likely be sufficient for ascertaining the extent and nature of a student’s difficulty in procedural computation needed for developing intervention recommendations. However, there are additional avenues the evaluator can take if more specific information on the student’s computation difficulties is needed for designing an intervention. First, samples of the student’s recent work may be helpful (e.g., worksheets, quizzes, unit tests) for identifying error patterns and problem types that are more challenging.
Second, if other assessments are insufficient or unavailable, specific-skill probes can be constructed using content from the student’s mathematics curriculum. The evaluator should first define the specific types of mathematics computation problems of interest, limiting them to those that will be useful for informing intervention. In addition to providing information on expected procedural computation skills, the teacher can indicate the problem types in which the student is independently successful, and the types of problems in which the student is functioning at a frustrational level. By looking at the range
of items, the evaluator can select the possible types of computational skills to be assessed. It is not necessary to assess every single computational objective; instead, three or four objectives should be selected to allow the evaluator to assess the range of items across the independent- and frustration-level points. In selecting these objectives, the evaluator should try to choose those that require different operations (e.g., addition and subtraction), if both objectives are listed between independent and frustrational points. The evaluator should compose several sheets of problems of the same type, leaving enough room for computation, if necessary (e.g., in the case of long division). We remind readers that developing probes for an assessment should only be undertaken if no other assessment tools are available for adequately assessing procedural computation. If examiners find it necessary to develop their own procedural computation probes for diagnostic and intervention-development purposes, guidelines from previous editions of this text are provided in Appendix 4B at the end of this chapter.
Interpreting Results of Procedural Computation Assessment

As with number combinations fluency measures, the student’s results from procedural computation measures should be reviewed, and the information gleaned from the student’s performance used to inform intervention. Fluency-based computation measures are particularly useful in this regard because the examiner can evaluate whether low scores are due more to a lack of understanding of how to solve problems (i.e., slow and inaccurate), as opposed to a student who understands how to solve the problems but is not fluent. Burns et al. (2010), based on the results of their meta-analysis, noted the importance of considering skill-by-treatment interactions. Specifically, when a student can complete procedural calculations accurately (i.e., 95% or more problems completed correctly) but is not yet fluent, intervention should focus on practice to build automaticity. Conversely, when a student makes several errors, intervention must target the student’s understanding of procedures and algorithms for solving problems before fluency can be expected. A student who rushes to solve as many problems as possible may need to be retaught procedures and algorithms, but may also benefit from strategies that foster more careful attention to accuracy and ways to self-check solutions.
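The following sketch translates that skill-by-treatment logic into a simple triage of a student's pattern on a timed computation measure. It is an illustration of the reasoning above, not an algorithm from Burns et al. (2010) or any published tool; the 95% accuracy guideline is from the text, and treating performance below the 25th percentile as "dysfluent" is an assumed placeholder that should be replaced with the criteria attached to the measure actually used.

```python
# Illustrative triage of a timed procedural computation measure (assumed cutoffs).

def computation_pattern(attempted: int, correct: int, fluency_percentile: float) -> str:
    accuracy = correct / attempted if attempted else 0.0
    dysfluent = fluency_percentile < 25        # assumed criterion for "low fluency"
    if accuracy >= 0.95 and dysfluent:
        return "Knows the procedures but is slow: build fluency through practice."
    if accuracy < 0.95 and not dysfluent:
        return "Rushes with many errors: reteach procedures; add goal-setting and self-checking for accuracy."
    if accuracy < 0.95:
        return "Slow and inaccurate: teach procedures and algorithms before expecting fluency."
    return "Accurate and fluent: procedural computation is unlikely to be the primary deficit."

print(computation_pattern(attempted=20, correct=19, fluency_percentile=10))
# -> "Knows the procedures but is slow: build fluency through practice."
```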
Assessing Skills in Word-Problem Solving and Pre-Algebraic Reasoning

Word problems are generally defined as linguistically based mathematical problems that are presented orally, through pictures, or through text. Word problems are one of the most common areas of difficulty for students who struggle in mathematics (Andersson, 2008). However, unlike some other areas of mathematics that are relatively discrete (e.g., number combinations), word-problem solving is influenced by multiple skills that have some bearing on how tests are selected and results interpreted. Pre-algebraic reasoning is a part of solving word problems because word problems typically involve using known quantities to determine an unknown quantity. Word-problem skill is associated with pre-algebraic competence (L. S. Fuchs, Powell, Cirino, et al., 2014). Hence, contemporary interventions for word-problem skills involve teaching pre-algebraic concepts and skills (L. S. Fuchs, Powell, Seethaler, et al., 2009; Powell, Berry, & Barnes, 2020). However, pre-algebraic reasoning is not something readily
indicated by scores on tests of word-problem solving. Rather, a student’s difficulties with word problems can be viewed as indicators of potential difficulties with pre-algebra and algebra in the future, and the need for intervention to improve the odds for long-term mathematics success.

Other skills that are heavily involved in word-problem solving are oral language and reading comprehension (L. S. Fuchs, Gilbert, Fuchs, et al., 2018; L. S. Fuchs, Fuchs, Seethaler, et al., 2019). This includes vocabulary knowledge, particularly mathematics vocabulary, as well as the linguistic comprehension skills needed to comprehend problems that are either read to the student or that the student reads themselves. For these reasons, an evaluation of a student who experiences problems with word-problem solving should include some determination of whether limited oral language or reading comprehension is involved in their word-problem-solving difficulties. The assessment of linguistic and text comprehension skills is described in the section on reading assessment earlier in this chapter, and assessment of language skills specific to mathematics will be described in a subsequent section. Finally, solving word problems also depends on accurate calculations. Once relevant information and a solution strategy have been identified, answering correctly requires an accurate calculation of the result. Therefore, problems with number combinations or procedural computation with any type of operation (addition, subtraction, multiplication, or division) can affect a student’s word-problem solving.

The multiple influences on word-problem solving require consideration in how a student’s skills will be assessed. Most mathematics tests and test batteries listed in the previous tables in this chapter offer measures of some form of word-problem solving. Like tests of procedural computation, tests that assess word-problem solving may go by different names but, in general, include terms such as applied problems, concepts and applications, and problem solving in addition to tests titled “word problems.” However, one aspect evaluators must carefully consider is that many of these types of tests mix various types of problems together with word problems. In addition to word problems, these tests may include items that involve time, money, measurement, and reading charts and graphs. These skills fall within a broader constellation of so-called “applied problems” and can provide an index of a student’s overall achievement in the area, but may not be particularly helpful for evaluators who wish to assess a student’s word-problem skills more specifically. Other test batteries include subtests that are more specific to traditional word problems, such as the KeyMath-3 and the TOMA-3. The Feifer Assessment of Mathematics includes a subtest called Equation Building in which the examiner reads a series of word problems and students must select an equation that would best represent the solution to the problem.

Another consideration in selecting a test for measuring word-problem solving is whether students are expected to read the items. Word problems in some tests, such as KeyMath-3 and the WJ-IV, are read aloud to the student. Whether the student is expected to read has implications for test selection and interpretation, especially when students have co-occurring reading difficulties (which should be considered given the high rates of co-occurring reading and mathematics difficulties).
When assessing students with co-occurring reading problems, evaluators should make every effort to use word-problem measures in which problems are read aloud by the examiner. Otherwise, poor performance may be due primarily to their reading difficulties. If it has bearing on how intervention is designed for the student, and time and resources permit, evaluators might also consider administering two word-problem-solving tests: one that requires reading and another that does not. This may indicate the extent to which a student’s mathematics
word-problem-solving difficulties are due to their problems with reading, and help inform the components that should be included in an intervention. There are other sources for information on a student’s skills in solving word problems. First, the teacher interview can help identify the problem types in which the student has significant difficulties, as well as any problem types in which they are relatively more successful. Second, reviewing recent permanent products (e.g., unit tests, quizzes, homework) can reveal (1) areas of strength and difficulty in word-problem solving, and (2) the types of word problems the student is expected to solve, which can help the examiner identify measures that test similar skills. Third, the examiner can construct their own probe by sampling word problems from the curriculum of instruction. Although this will not result in data that can be compared to a normative sample, the information might be useful from a diagnostic and intervention-planning perspective by helping to pinpoint aspects of word-problem solving that appear to be particularly problematic.
Assessing Skills with Fractions and Other Rational Numbers

In contrast to whole numbers, rational numbers refer to fractions, decimals, and percent. Conceptual knowledge and operations with rational numbers, especially fractions, are key pillars of understanding ratio and proportion, success in algebra, and other forms of more complex mathematics. Rational numbers are also a common area of difficulty in mathematics. Instruction in rational numbers typically begins in third grade, but their relevance extends through the secondary grades, and thus they may be a referral area of concern for students in middle elementary grades and up.

Measures for assessing students’ knowledge and skills specific to rational numbers are less common than measures in the skill areas discussed previously; many tests of achievement include problems involving fractions or decimals within tests of overall computation. KeyMath-3 includes items involving fractions, decimals, and percent across the Numeration, Algebra, Addition and Subtraction, Multiplication and Division, and Applied Problems subtests. However, there are examples of specific tests that evaluators may find useful. SpringMath includes measures that assess specific skills with rational number understanding and operations. The Comprehensive Mathematical Abilities Test (Hresko et al., 2003) includes a Rational Numbers subtest in its supplemental battery, and the PAL-II Math includes a cluster of subtests titled Part–Whole Relationships. With regard to CBM measure publishers, the easyCBM system includes items with rational numbers in their Numbers and Operations Probes beginning in grade 3, and also offers probes in Number Operations and Ratios. The Acadience system includes items with rational numbers in their Concepts and Applications probes beginning in grade 3, and Computation probes beginning in grade 4. As with other specific mathematics skills, examiners can create their own probes by sampling problems involving rational numbers from the student’s mathematics curriculum, which can be used for gathering diagnostic information and intervention design. Additionally, samples of the student’s recent work involving rational numbers can be reviewed to obtain information for better understanding the student’s difficulties in this skill area.
Assessing Skills in Geometry and Measurement

Student referrals for difficulties specific to geometry and measurement are uncommon, but difficulties in this area are likely to occur as part of a student’s difficulties in
mathematics more generally. Nevertheless, geometry and measurement deserve attention in mathematics assessment given their interrelations with word-problem solving and algebra, and their association with the more complex mathematics that students encounter in later grades. The foundations of geometry and measurement are targeted in instruction as early as kindergarten. Geometry and measurement skills are often included in tests that evaluate mathematics concepts and applications, and these items may be included with others that involve time, reading graphs, money, and word problems. This is common in broadband achievement tests and general outcome measures developed under a CBM framework. However, a few published tests include measures specific to geometry and measurement, including KeyMath-3, the Comprehensive Mathematical Abilities Test, SpringMath, and easyCBM. In addition to test data, permanent product reviews of the student’s recently completed homework, tests, and other work can be informative, and evaluators can also sample problems from the student’s curriculum to gather diagnostic information on the student’s specific difficulties in this area.
Assessing Skills in Algebra

Algebra is viewed as a gateway to more complex forms of mathematics and, as such, is key to improved postsecondary education and employment opportunities. Indeed, Figure 2.3 is constructed to reflect the current thinking in mathematics instruction that skills build toward facilitating success in algebra (NMAP, 2008). Intervention for students with algebra difficulties has the potential to enhance education and life outcomes for students with academic and behavioral risks. Although it is likely that students who experience problems in algebra will have demonstrated mathematics difficulties in foundational skills long beforehand, it is still possible for students to be referred for difficulties in algebra. Like any complex skill that is built on proficiency with foundational skills and concepts, problems in algebra may have less to do with algebra per se. Lack of automaticity with number combinations, poor procedural calculation skills or place-value understanding, inadequate knowledge of fraction operations, or even inaccurate understanding of the equal sign (Knuth et al., 2006; Powell & L. S. Fuchs, 2010) can all play a role in algebra difficulties. These factors indicate that an assessment of a student with difficulties in algebra should include multiple skill areas that are foundational to algebra success because intervention may be needed in one or more skill areas. Nevertheless, the complexity of the procedures and depth of the conceptual knowledge required for algebra argue that difficulties may also include understanding and solving equations. Subtests specific to algebra are included in batteries such as KeyMath-3, the Comprehensive Mathematical Abilities Test, SpringMath, and easyCBM.

In addition to test results, the teacher interview and permanent product reviews are a good source of information on the student’s performance relative to expectations and specific problem areas. Examiners can also create probes by sampling algebra problems from the student’s curriculum if that information is needed for designing intervention. Assessment of algebra and pre-algebra skills is an underdeveloped area, but there is work being done. Ketterlin-Geller, Gifford, and Perry (2015) investigated three experimental measures of algebra readiness for students in grades 6 through 8 and observed promising evidence. Ketterlin-Geller and colleagues (2019) described measures and their evidence as universal screeners of algebra skills in middle school. It is likely that, as awareness of the importance of algebra grows, the coming years will see more advancement in the development of measures assessing algebra readiness and skill difficulties.
Assessing Mathematics Language and Vocabulary

The role of language in mathematics achievement has been acknowledged for quite some time (e.g., Brune, 1953); however, the language–mathematics relation has received greater and well-deserved attention in recent years. Language, in a mathematics context, involves an understanding of the vocabulary and syntax used in mathematics instruction and its applications, and the ability to engage in discourse around mathematics through listening, speaking, reading, and writing. As reviewed in Chapter 2, studies have demonstrated that language and text comprehension support mathematics development, especially in more complex and language-heavy skill areas such as word-problem solving, geometry, and algebra (L. S. Fuchs, D. Fuchs, Compton, et al., 2015; L. S. Fuchs, Gilbert, D. Fuchs, et al., 2018; Peng, Lin, et al., 2020; Peng & Lin, 2019; Riccomini et al., 2015). Mathematics can be viewed as having its own language, with terms and phrases unique to mathematics as well as words that have alternate meanings in a mathematics context compared to everyday use. Powell, Driver, et al. (2017) documented 133 terms and phrases important for mathematics in late elementary grades, and Hughes et al. (2016) illustrated how students can develop insufficient or inaccurate mathematics vocabulary due to problematic ways that teachers refer to terms and phrases in mathematics instruction. Based on their meta-analysis, Peng, Lin, et al. (2020) noted the importance of language not only in understanding instruction, but also in how students represent problems, retrieve knowledge and concepts, and communicate results.

Therefore, for a student referred for mathematics difficulties, consideration should be given to the student’s oral language and mathematics vocabulary, in addition to their mathematics skills. Students with mathematics difficulties may have more generally underdeveloped oral language, which is especially likely when they have co-occurring reading difficulties, where oral language deficits may be a unifying factor in co-occurring reading and mathematics difficulties (L. S. Fuchs et al., 2019). Limited familiarity with mathematics language may have a role in the mathematics difficulties of students who experienced limited exposure to the dialect and vocabulary breadth common to instructional contexts. Assessing mathematics language and vocabulary is particularly important for emergent bilingual students (Powell et al., 2020). If language or reading difficulties are suspected or observed, evaluators should consider administering a measure of oral language comprehension (listed in the earlier section on reading assessment) as part of a mathematics assessment.

In terms of standardized norm-referenced measures of mathematics language or vocabulary, the Feifer Assessment of Mathematics includes a subtest titled Linguistic Math Concepts that tests students’ knowledge and application of mathematics terminology. The TOMA-3 includes a subtest titled Mathematical Symbols and Concepts that assesses students’ knowledge of math symbols, phrases, and terms. Several researcher-developed mathematics vocabulary measures have emerged and include mathematics terminology appropriate for preschool (Purpura & Logan, 2015), early elementary (Powell & Nelson, 2017), middle and late elementary (Powell et al., 2017), and middle school (Hughes et al., 2020). These measures are available from their respective authors.
Assessing the student’s oral language and mathematics vocabulary adds a unique component to understanding the sources of the student’s mathematics difficulties. Because mathematics language and vocabulary instruction can be integrated into an intervention targeting mathematics skills, this information can help tailor an intervention to a student’s unique needs.
Interpreting Mathematics Assessment Data and Considering Instructional Level

The goal of an assessment of mathematics skills is to identify an intervention to address the skills and variables that are most likely the cause of the student’s mathematics difficulties. In the following chapters, we discuss instruction and intervention strategies that are effective for mathematics. At this stage of the assessment, the hypothesis statement developed in the early stages of the assessment should be reevaluated in light of the data collected so far. Results of the direct assessment are considered with information from the teacher interview, direct observations, student interview, and review of permanent products to determine whether the information supports the hypothesis, or if adjustments to the hypothesis are needed. We reiterate the importance of parsimony: Interpreting the results of the direct assessment is made easier when evaluators limit the measures to those that are directly relevant to designing intervention. Parsimony in the selection of measures not only saves time and resources, but also makes interpreting the data more straightforward. Another way to aid interpretation is to convert raw scores to standard scores or percentiles, when possible. Placing scores from each measure on a common metric helps identify the student’s relative strengths and weaknesses across the keystone mathematics skills administered. Deficits in basic and foundational skills influence achievement in higher-order skills; therefore, underdeveloped foundational skills should be viewed as targets for intervention.

The necessity for determining instructional level in mathematics is less clear than in reading. Unlike reading, mathematics does not have levels of difficulty that are as specifically tied to grade levels. Additionally, mathematics involves several skills that, although they are related and build on each other, are not linearly sequenced. Therefore, it is challenging to find a specific point in mathematics development at which a student is “instructional,” and there is seldom any mathematics material that is tied to one specific grade level. Overall, the point of determining instructional level is about better identifying what intervention should look like and the type of content that should be used for instruction. Burns et al. (2010) challenged the notion that determining instructional level was important for intervention in mathematics because their meta-analysis revealed that students who received intervention with mathematics material that was very challenging (i.e., frustrational level) actually made greater gains than students who were functioning at an instructional level in the material provided. They concluded that a more productive perspective is one in which the examiner considers whether the student’s difficulties are the result of a lack of understanding of how to complete the problems, or if the student understands how to solve the problems but is slow (dysfluent). Lack of accuracy in problems attempted, skipping problems, and uncertainty in problem-solving approaches suggest the need for direct instruction in problem-solving procedures and algorithms. Few problems completed (i.e., low fluency, such as below the 25th percentile on fluency norms) but high accuracy (e.g., 95% accurate or better) with problems attempted indicate the need for intervention to focus on providing practice to build fluency.
Rushing to complete problems but making many errors suggests the need for possible reteaching of problem-solving procedures, strategies to promote more careful attention to accuracy (e.g., goal-setting or incentives for accuracy), and ways the student can self-check their solutions. Determining this allows for a skill-by-treatment perspective to intervention design (Burns et al., 2010), one in which the focus is on skill acquisition, fluency building, attention to detail, or some combination of these. There are some areas of mathematics in which skills can be arranged hierarchically (from basic to advanced) and generally reflect the sequence in which students learn
them. Procedural computation is one of those areas. Appendix 4C, “A Computation Skills Mastery Curriculum,” which appeared in earlier editions of this text, summarizes a general progression of computation skills and the approximate grade levels at which they are typically taught (note that some variation in the skill sequence and grade levels is expected). Although the sequence is fairly standard, readers are encouraged to consult the scope and sequence materials from the student’s mathematics curriculum, which may differ slightly. For a student with difficulties in procedural computation, data from across the assessment should be used to determine where in the sequence the student is independently successful (i.e., their “mastery” level), where the student is instructional, and where the student is frustrational (a brief illustration of this kind of summary appears at the end of this section). Situations may exist in which evaluators need to create computation probes to determine a student’s success with certain problem types if not revealed through the other assessment activities. The results will inform the types of problems used in instruction (i.e., instructional level) and the problem types that can be used for independent practice.

The type of hierarchical ordering reflected in Appendix 4C is not possible with mathematics skills as a whole. However, it can be done within other specific mathematics domains. The SpringMath program uses skill trees across areas of mathematics to identify the skills and content that students need and that will be effective for instruction. Hierarchies can also be seen in state and national standards, such as the CCSS. This approach can be taken with any of the mathematics skill areas discussed in this chapter, using data sources that include direct measures of mathematics skills (which may include both CBM tools and norm-referenced, standardized tests), permanent product reviews, information from the teacher, and evaluator-created diagnostic measures when necessary. As such, determining the “instructional level” of a student in mathematics is more a matter of identifying the skill area(s) in need of improvement and, within a skill domain, determining whether skill acquisition or fluency is the primary need. This perspective is consistent with the hypothesis-driven approach that we advocate in this text; the evaluator selects measures and interprets the results based on the questions they wish to answer about the source of the student’s difficulty, with the primary goal of designing an intervention to best meet their needs.
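To illustrate the kind of summary this placement process yields, the sketch below labels each computation objective a student was assessed on as mastery, instructional, or frustrational. The skill list is a hypothetical subset of a sequence like the one in Appendix 4C, and the accuracy cutoffs are assumptions made for the example rather than criteria from this chapter; evaluators should apply the criteria that accompany the measures and curriculum they are actually using.

```python
# Illustrative sketch only: summarizing placement along a computation skill
# sequence. Skill names and accuracy cutoffs are hypothetical assumptions.

SKILL_SEQUENCE = [
    "addition facts to 18",
    "2-digit addition without regrouping",
    "2-digit addition with regrouping",
    "2-digit subtraction with regrouping",
    "multidigit multiplication",
]

def label(accuracy: float) -> str:
    if accuracy >= 0.90:
        return "mastery (independent practice material)"
    if accuracy >= 0.70:
        return "instructional (use for instruction)"
    return "frustrational (too difficult for now)"

accuracy_by_skill = {
    "addition facts to 18": 0.97,
    "2-digit addition without regrouping": 0.92,
    "2-digit addition with regrouping": 0.74,
    "2-digit subtraction with regrouping": 0.40,
}

for skill in SKILL_SEQUENCE:
    if skill in accuracy_by_skill:
        print(f"{skill}: {label(accuracy_by_skill[skill])}")
```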
DIRECT ASSESSMENT OF WRITING SKILLS

As reviewed in detail in Chapter 2, writing (and by extension, writing difficulties) involves two keystone domains: (1) transcription, the skills involved in efficiently putting words to paper that include handwriting (or keyboarding) and spelling; and (2) composition, which includes skills involved in generating ideas, planning, drafting, and revising. Transcription skills can be viewed as having critical facilitative effects for writing in the ways that word reading efficiency has for reading and number combination fluency has for mathematics: Efficiency in writing words facilitates higher-order composition skills and helps improve the quality of the final writing product. Without efficient transcription, writing is difficult and high-quality written products are unlikely. Variables that influence transcription and composition, and ultimately the quality of the final product, are the student’s oral language and reading skills, their motivation to write, and their ability to self-regulate their attention and persistence. Thus, the goal of an assessment of writing skills is to identify the specific areas of difficulty that are likely causing low writing performance, which informs what intervention should target and what intervention should look like.
With these core skill areas in mind, Figure 4.5 provides a suggested roadmap for assessing writing skills, overlaid on the keystone model of writing discussed in Chapter 2. Transcription offers an entry point for the assessment, especially for students in early grades (i.e., K–3). However, transcription problems (which include typing) are also important to consider, if only to rule out, as an explanation for the writing difficulties of students in middle elementary school and beyond. Composition skills are considered relative to a student’s transcription skills, and self-regulation and oral language are considered given their magnified relevance in writing. There are a number of options for assessing writing skills, and measures vary in terms of whether they capture skills more aligned with transcription or composition. Table 4.13 provides an organizer of example tests and measures, with the subtests, scoring strategies, and skills they primarily assess. Although there are several options, an assessment of writing skills should not be made overly complex. It should be quickly apparent, beginning with the referral information and the teacher interview, whether the student’s writing difficulties originate with basic transcription-related issues, or if their problems pertain more to higher-order composition aspects (of which transcription problems could still be a part). The age of the student will also be an indication; the earlier the grade level of the student referred for writing difficulties, the more likely their writing difficulties will involve transcription. Nevertheless, evaluators should still evaluate the presence of transcription issues even for students referred for composition-related difficulties. It is possible, as in reading comprehension, that educators (especially in later grade levels) focus on the student’s problems with writing quality and overlook the possibility that the student may still have lingering issues with transcription that are a limiting factor.
The CBM Approach to Writing Assessment

Across this chapter, we’ve recommended starting an individual assessment of an academic domain with a brief, efficient measure of a keystone skill in which automaticity is both the outcome of foundational skills and facilitative of higher-order proficiency
[Figure 4.5 summarizes the keystone model of writing: oral language and reading; transcription (handwriting or typing, and spelling), which must become accurate and then efficient; composition (planning, drafting, grammar, syntax, revising); self-regulation and motivation; and overall writing quality. For grades K–3, the suggested starting point is transcription, assessing language and composition as needed; for grades 3 and up, the suggested starting point is composition, ruling out transcription problems and assessing language and motivation as needed.]

FIGURE 4.5. Suggested approach to assessing writing skill difficulties, overlaid on the keystone model of writing.
TABLE 4.13. Examples of Measures for Assessing Writing and the Skills They Assess

Measure: Scoring metric, scale, or subtest
• CBM Writing (Story Starters): Total Words Written; Words Spelled Correctly; Correct Writing Sequences; Correct Minus Incorrect Writing Sequences; Percent Correct Writing Sequences
• Feifer Assessment of Writing: Graphomotor; Dyslexic (spelling); Executive and Compositional Writing
• iReady: Spelling
• Kaufman Test of Educational Achievement, 4th ed.: Written Expression; Spelling; Writing Fluency
• Oral and Written Language Scales, 2nd ed.: Written Expression
• Test of Written Language, 4th ed.: Vocabulary; Spelling; Punctuation; Logical Sentences; Sentence Combining; Contextual Conventions; Story Composition
• Wechsler Individual Achievement Test, 4th ed.: Spelling; Alphabet Writing Fluency; Sentence Composition; Essay Composition; Sentence Writing Fluency
• Wide Range Achievement Test, 5th ed.: Spelling
• Woodcock–Johnson Test of Achievement, 4th ed.: Spelling; Writing Samples; Sentence Writing Fluency
• The Writing Architect (Truckenmiller et al., 2020): Correct Minus Incorrect Writing Sequences; Writing Quality Rubric

Skill domains assessed (varying by measure and metric): writing fluency/production, spelling, transcription, grammar/syntax, writing quality, and composition.

Note. This table provides a sample of the measures available for assessing writing skills. It is not a complete list, and there are other measures that, due to space constraints, could not be included.
in that academic domain. This starting point is not intended to reveal everything the evaluator needs to know; it is meant to provide the evaluator with information to guide the remainder of the assessment, thereby making the evaluation more efficient and targeted at the specific areas of difficulty for the student. We suggest the same approach when assessing writing, as illustrated in Figure 4.5. Writing measurement procedures developed under the CBM framework (Deno, Marston, & Mirkin, 1982), although originally designed for monitoring writing progress, offer an “entry point” assessment for an evaluation of an individual student’s writing difficulties. CBM writing is focused on transcription, but it is also an efficient way to obtain a preliminary but direct view of the student’s overall level of writing proficiency and to provide insight regarding what skills should be evaluated more in depth. The general procedure of CBM writing is as follows:
1. CBM writing uses story starters that provide students with an initial idea to write about. In developing story starters, consideration is given to the type of discourse that is being assessed. Expressive or narrative starters relate to creative writing of fictional stories (e.g., “I’ll never forget the night I heard that strange sound outside my window”). Explanatory or expository discourse is subject-oriented and asks the student to write about an area of fact (e.g., “When the air pressure is rising, the forecast is usually for better weather”). Although any of these types of starters can be utilized, the most typical assessment of written language uses narrative story starters. These starters should contain items that most children would find of sufficient interest to generate a written story. Table 4.14 offers a list of possible story starters. During the assessment, the evaluator may choose to use two or three story starters to obtain multiple writing samples.

2. The evaluator should give the student a sheet of lined paper with the story starter printed at the top. The evaluator tells the student that they are being asked to write a story using the starter as the first sentence, and the evaluator then reads the story starter aloud. The student should be given a minute to think before they are asked to begin writing.

3. After 1 minute, the evaluator should tell the child to begin writing, start a timer, and time the student while they write. Original procedures provided students with 3 minutes to write, and various time limits have been used across the literature ranging from 3 to 15 minutes. Research has found that the validity of writing samples does not vary appreciably across time limits (Espin et al., 2000; Furey et al., 2016; McMaster & Campbell, 2008; Romig et al., 2021; Weissenburger & Espin, 2005). Generally, shorter time limits are appropriate for younger students (i.e., 3–5 minutes). For older students (i.e., grade 3 and up), longer time limits may be necessary for informing subsequent assessment and intervention (i.e., 10 minutes; Truckenmiller et al., 2020). If the student stops writing before the timer expires, they should be encouraged to keep writing until time is up.
CBM Writing: Scoring and Considerations

CBM writing probes can be scored in several different ways, and there are advantages to considering multiple scores for the same student. Scoring approaches can be categorized into three domains: production-dependent, production-independent, and accurate production (Jewell & Malecki, 2005).
TABLE 4.14. List of Possible Narrative Story Starters

I just saw a monster. The monster was so big it . . .
I made the biggest sandwich in the world.
Bill and Sue were lost in the jungle.
One day Mary brought her pet skunk to school.
One day it rained candy.
Hector woke up in the middle of the night. He heard a loud crash.
Mae got a surprise package in the mail.
One time I got very mad.
The best birthday I ever had . . .
I’ll never forget the night I had to stay in a cave.
The most exciting thing about my jungle safari was . . .
When my video game started predicting the future, I knew I had to . . .
I never dreamed that the secret door in my basement would lead to . . .
The day my headphone radio started sending me signals from outer space, I . . .
The worst part about having a talking dog is . . .
When I moved to the desert, I was amazed to find out that cactuses . . .
When I looked out my window this morning, none of the scenery looked familiar.
I’ve always wanted a time machine that would take me to that wonderful time when . . .
I would love to change places with my younger/older brother/sister, because . . .
The best thing about having the robot I got for my birthday is . . .
I always thought my tropical fish were very boring until I found out the secret of their language . . .
I thought it was the end of the world when I lost my magic baseball bat, until I found an enchanted . . .
The best trick I ever played on Halloween was . . .
I was most proud of my work as a private detective when I helped solve the case of the . . .
If I could create the ideal person, I would make sure that they had . . .
You’ll never believe how I was able to escape from the pirates who kept me prisoner on their ship . . .
PRODUCTION‑DEPENDENT SCORING METHODS
Production-dependent scoring methods are the traditional CBM approaches for scoring writing samples and are based on the number of words the student writes within the time limit. These methods are sensitive to problems with transcription. Total words written (TWW) is simply a tally of the total number of words written within the time limit, regardless of spelling, grammatical, syntactic, or semantic accuracy. It is purely a metric of writing fluency and output. Although very simple, this metric is highly relevant to assessing individual students because hallmark characteristics of writing difficulties are passages that are short and sparse. Words spelled correctly (WSC) is the number of correct spellings in the student’s writing sample. This metric is also useful because spelling problems are common among students with writing difficulties, as spelling problems are a factor that constrains their ability to produce quality writing. Correct writing sequences (CWS) is a tally of the number of adjacent pairs of words that are correct in terms of spelling, grammar, syntax, punctuation, and capitalization. An example is provided in Figure 4.6. Each pair of words in the student’s writing sample is examined, and each correctly sequenced pair receives a point. Thus, CWS captures
^I woud drink ^ water ^ from ^ the ^ ocean (5)
^ and ^ I woud eat ^ the ^ fruit ^ off ^ of (6)
^ the ^ trees ^.^ Then ^ I woud bilit a (5)
^house ^ out ^ of ^ trees, ^ and ^ I woud (6)
gather ^ firewood ^ to ^ stay ^ warm^.^ I (6)
would try ^ and ^ fix ^ my ^ boat ^ in ^ my (6)
^ spare ^ time^. (3)
Correct Writing Sequences: 37

FIGURE 4.6. Example of correct writing sequences scoring for CBM writing (each caret marks a correct writing sequence; the number in parentheses is the count of correct sequences in that line).
multiple aspects pertaining to production and mechanics; more words written are needed for more points, and correct sequences provide a basic view of quality.

These production-dependent scoring methods are focused on transcription. On one hand, this is a strength because CBM writing probes provide an efficient way for examiners to directly examine the ease with which students put words to paper. In addition to the quantitative scores, examiners can also observe students in terms of how effortful it is for them to write (for some students, especially younger students, handwriting is difficult and even uncomfortable), think of what words to write, and contemplate how to spell them. Handwriting legibility can also be assessed, providing an additional index of transcription. CBM writing is particularly relevant for students in early elementary grades given the nature of their emerging writing skills. In short, CBM writing measures can indicate the extent to which intervention should focus on aspects pertaining to transcription. On the other hand, the emphasis of these scoring methods on production and mechanics means that they are not a particularly rich source of information on a student’s composition skills or overall writing quality. This is particularly the case for older students (i.e., late elementary grades and beyond), when writing quality is increasingly defined by cohesion, organization, clarity, and creativity. Thus, the traditional production-dependent CBM indicators of fluency and mechanics lose some of their ability to reflect overall skills in written expression in later grades. An additional limitation is that the traditional CBM approach is based only on a brief sample of writing produced in a short amount of time, whereas high-quality writing requires time for planning, drafting, and revising. Issues of validity have been observed across research reviews (McMaster & Espin, 2007; McMaster et al., 2012), in which production-dependent scoring metrics demonstrate weak to moderate correlations with measures of overall composition writing quality.

PRODUCTION-INDEPENDENT AND ACCURATE PRODUCTION SCORING METHODS
Researchers have sought to address some of the limitations of production-dependent CBM writing indices (McMaster & Espin, 2007; McMaster et al., 2012). These scoring methods use the same sample of writing from a story-starter but score it differently. Percent of correct writing sequences (%CWS) is a production-independent approach that involves scoring the percent of correct writing sequences out of all writing sequences the student produced. As such, %CWS is purely a measure of the student’s accuracy
in grammar, syntax, and punctuation. The amount of text the student writes does not influence the score. The same approach can be used to score percent of correctly spelled words, or percent of legible words to capture a quantitative index of handwriting. Correct minus incorrect writing sequences (CMIWS; Espin et al., 2000) is an accurate production scoring approach; it is scored by subtracting incorrect writing sequences from the number of correct sequences. Therefore, it captures a student’s grammar and spelling accuracy, as well as how much writing they produce. Students who write more have the potential for a higher score, but they must be accurate in doing so.

RELATIONS OF DIFFERENT SCORING APPROACHES TO OVERALL WRITING SKILLS
Studies have observed that, compared to fluency-based production-dependent indices (e.g., TWW, CWS), production-independent indices (e.g., %CWS) and accurate production indices (CMIWS) tend to be more strongly correlated with a student’s performance on standardized writing measures. CMIWS in particular has demonstrated stronger correlations with standardized tests of writing than other indices, especially with older students in late elementary, middle, and secondary grades (Amato & Watkins, 2011; Espin et al., 2008; Jewell & Malecki, 2005; Mercer, Martinez, et al., 2012; Romig et al., 2017; Truckenmiller et al., 2020), as well as for students with diverse language backgrounds (Keller-Margulis et al., 2016). However, other indices can be informative for evaluating individual students with writing difficulties. Production-dependent metrics, such as TWW, can be informative for students at very basic stages of writing or when they have severe writing difficulties. A CBM writing sample can be scored with any of the methods described above, thus providing the evaluator with ways to specifically examine production, accuracy, or a combination of them. As Jewell and Malecki (2005) noted, the selection of scoring metrics should be strategic, based on what the evaluator wants to determine: (1) aspects of writing to assess further and more specifically, and (2) ideal intervention strategies.
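To make the different indices concrete, the sketch below computes each of them from counts an examiner has already tallied by hand on a single writing sample. It is an illustration of the definitions above, not software associated with CBM writing, and the example numbers are hypothetical.

```python
# Illustrative only: CBM writing indices computed from hand-tallied counts.

def cbm_writing_scores(total_words: int, words_spelled_correctly: int,
                       correct_sequences: int, incorrect_sequences: int) -> dict:
    total_sequences = correct_sequences + incorrect_sequences
    return {
        "TWW": total_words,                                # total words written
        "WSC": words_spelled_correctly,                    # words spelled correctly
        "CWS": correct_sequences,                          # correct writing sequences
        "%CWS": correct_sequences / total_sequences if total_sequences else 0.0,
        "CMIWS": correct_sequences - incorrect_sequences,  # accurate production index
    }

# Hypothetical sample: 50 words, 44 spelled correctly, 37 correct and
# 8 incorrect writing sequences -> %CWS is about .82 and CMIWS = 29.
print(cbm_writing_scores(50, 44, 37, 8))
```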
Expansions and Alternatives to CBM Writing

Work has also sought to expand the traditional use of narrative story starter prompts, which may work well for early elementary grades but fall short of the type of expository writing that students are expected to do in later grades. Instead of story starters, Espin, De La Paz, et al. (2005) examined the use of expository prompts, such as “Choose a country you would like to visit. Write an essay explaining why you want to go to this country,” or “Think about how students can improve their grades. Write an essay telling why it is important to get good grades, and explain how students can improve their grades” (p. 210). With middle school students, Espin and colleagues observed strong validity coefficients between students’ writing samples based on expository prompts and criterion measures of written expression.

The types of writing most commonly expected of students, and even adults, often involve writing about text. For example, book reports, summarizing a chapter, or writing a research paper frequently require the writer to summarize, synthesize, contrast, or integrate information they have read. For this reason, scholars have examined the use of passage prompts. Truckenmiller and colleagues (2020) developed the Writing Architect, a computer-based passage-prompt system that reflects contemporary writing expectations for students in later elementary grades and beyond. Each administration proceeds as follows. First, students read an informational passage on the computer that is at an
appropriate length and reading difficulty for their grade level. After reading the passage, students read instructions that tell them what to write about and what their writing should include. For example, after reading an article about how plastic bottles can be used for building homes, the program instructs students: “Write an informative paper that will help others learn about building houses out of plastic bottles” (p. 15). The instructions go on to remind students to use information from the article as reasons for utilizing plastic bottles for building houses, and prompt students to remember the key components of an informative paper, such as including its main idea, staying on topic, including an introduction and conclusion, and using information from the article (in their own words) along with their own ideas (Truckenmiller et al., 2020). Students are then given 3 minutes after reading the passage and the instructions to plan their response and make notes on paper. After 3 minutes, students then begin typing their response in a text box provided by the program. The program provides no feedback or correction for spelling or grammar mistakes. In their study, the program recorded what students had written at the 5-, 7-, and 15-minute marks. Truckenmiller et al. also separately assessed students’ transcription skills via their typing fluency, whereby students were asked to type a passage presented on the screen as quickly as possible.

Working with students in third through eighth grades, Truckenmiller and colleagues (2020) scored students’ writing products based on the paragraph prompts using the CMIWS metric at the 5-, 7-, and 15-minute marks, and scored the 15-minute response using a writing quality rubric (Troia, 2018, described further below) that evaluated students’ writing on the basis of seven dimensions of quality. The results of the study indicated that, when accounting for students’ gender and typing fluency, students’ written responses, scored for CMIWS, were strongly predictive of their scores on the quality rubric, which in turn were moderately to strongly predictive of performance on a standardized measure of writing and the writing portion of a state accountability test. Overall, the results of this study provided support for their new model of passage prompts for writing assessment, especially in grades 5 through 8. The authors noted that shorter passage prompts or traditional CBM writing methods may still be more appropriate for third graders; however, their study represents a significant innovation in writing assessment that is more consistent with how students are expected to write in academic settings. As discussed in Chapter 7, the Writing Architect was originally designed as a progress monitoring measure, thus providing an additional benefit to using it at the initial assessment stage.

Other metrics and scoring strategies have been developed that seek to capture more qualitative and complex aspects of students’ writing. Troia (2018) developed an essay quality scoring rubric that was used in the Truckenmiller et al. (2020) study described above. This rubric allows the user to score a sample of writing on seven dimensions on a 0–5 scale: purpose, logical coherence, concluding section or sentence, cohesion, supporting details from source materials, language and vocabulary choice, and grammar/usage/mechanics. In scoring writing samples from students in grades 3 through 8, Truckenmiller et al.
(2020) observed moderate to strong correlations with students’ scores on a standardized measure of written expression and scores on a state accountability assessment for writing, especially in grades 5 and up. Troia’s (2018) rubric is provided in the Academic Skills Problems Fifth Edition Workbook. Although CBM writing and its variants provide significant information for understanding a student’s writing difficulties and designing an intervention, there are situations in which evaluators may require additional direct measures of writing, particularly for evaluating composition quality. This may be most common when evaluating older students. Table 4.13 indicates several test batteries that include subtests capturing overall
writing quality and composition. Spelling skills can also be evaluated more specifically as part of an assessment of writing skills, and the table indicates several standardized tests of writing that include spelling subtests (for more information on spelling assessment, see the section on reading assessment earlier in this chapter). The most comprehensive of the standardized tests is the Test of Written Language (Fourth Edition), which offers a series of subtests ranging from basic transcription skills to story composition. It also offers a subtest of vocabulary use in writing, a subtest that is unique among writing tests but addresses a skill area highly relevant to written expression. Knowledge and skills in language (e.g., vocabulary and other aspects of oral language) and self-regulation also play roles in students’ writing performance. Language assessment is described in the section on reading earlier in this chapter, and the tests included in that section (especially any vocabulary subtests) are relevant for writing assessment as well. In addition to the vocabulary use subtest in the TOWL-4 mentioned earlier, another test to consider is the Test of Oral Language Development (Fifth Edition), which includes tests of both receptive and expressive language.
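To make the word-sequence scoring referenced above (e.g., CWS and CMIWS) more concrete, the following is a minimal sketch, not part of any published scoring system. It assumes the examiner has already judged each junction between adjacent writing units as correct or incorrect (a judgment that requires a trained scorer) and simply shows how those judgments combine into totals; the function and variable names are our own:

def word_sequence_scores(junction_judgments):
    """Aggregate examiner judgments of adjacent word-sequence junctions.

    junction_judgments: list of booleans, one per junction between adjacent
    writing units (True = correct word sequence, False = incorrect).
    Returns (correct word sequences, incorrect word sequences,
    correct minus incorrect word sequences).
    """
    cws = sum(1 for ok in junction_judgments if ok)
    iws = len(junction_judgments) - cws
    return cws, iws, cws - iws

# Hypothetical example: a 12-junction sample with 9 junctions judged correct.
judgments = [True] * 9 + [False] * 3
cws, iws, cmiws = word_sequence_scores(judgments)
print(f"CWS = {cws}, IWS = {iws}, CMIWS = {cmiws}")  # CWS = 9, IWS = 3, CMIWS = 6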
Assessing Self‑Regulation in Writing Information on the student’s self-regulation skills can be obtained through the teacher interview and any rating scales administered, direct observations in the classroom, and observing the student complete writing tasks as part of the direct assessment. Things to look for relevant to writing are whether the student takes time to plan and consider their responses, as opposed to immediately writing whatever comes to mind; and whether they demonstrate consistent effort and persist with writing, as opposed to writing as little as possible and giving up quickly. Self-regulation in writing also pertains to the overall thought and care a student puts into their writing, and the extent to which they review, revise, and proofread their written products. Writing is a domain in which the quality of the final product is strongly influenced by one’s willingness and desire to do better. Thus, self-regulation is a key factor in writing achievement, and assessing students’ planning, effort, and persistence can provide useful information for designing intervention.
Using Permanent Product Reviews and the Student Interview Permanent product reviews are an excellent source of information on a student’s writing skills, which can include essays, reports, homework, or writing journals. In fact, writing is the skill domain in which permanent product reviews may have the highest utility. In addition to providing the examiner with information on a student’s specific areas of strength and difficulty in writing, examining their written work reveals the types of writing assignments they have been given and what is expected of them. This can help the evaluator interpret results of the direct assessment in terms of the student’s performance relative to expectations, and also helps the evaluator design interventions relevant to the types of writing the student is expected to produce and the parameters within which their writing is being evaluated. The student interview can also be a source of information for understanding a student’s writing difficulties and for informing intervention. The student can be asked questions pertaining to affective factors, such as their motivation to write, writing-related anxiety, and so on. Most students with writing difficulties do not enjoy writing and are unmotivated to do so; however, questions in this area might identify some aspects of writing the student does enjoy, topics they may like to write about, or what things they
believe might help them improve their writing. The interview could also reveal information not available from direct assessments; for example, a student might report that writing is physically uncomfortable. This information may be useful in building components of the intervention that provide necessary supports, in considering possible referrals (e.g., to an occupational therapist if fine motor skills are a concern), and in promoting motivation and engagement.
Interpreting the Results of a Writing Assessment Direct assessments of writing skills, combined with information from the teacher interview, review of permanent products, and the student interview, should provide the examiner with the information needed to pinpoint the student’s areas of writing difficulty and design an intervention accordingly. At the very least, the assessment will allow the examiner to determine if the student’s writing difficulties are due in part to transcription problems, or if transcription appears adequate and the issue pertains more to composition. As we will review in Chapter 6, the most effective types of writing interventions target multiple components, including planning, organization, drafting, production, and revising, but the assessment information will be necessary for determining which aspects of the intervention are most important. As in mathematics, “instructional” level does not have a great deal of relevance for intervention in writing; what is important is identifying the skill areas the student needs to develop that will improve their overall performance, and then providing opportunities to write that are consistent with the types of writing assignments and expectations for the student’s grade level.
CONSIDERING SKILL VERSUS PERFORMANCE DEFICITS (“CAN’T DO” VS. “WON’T DO”) IN ACADEMIC SKILLS By this point in the assessment process, it should be clear whether the student’s low academic performance is due to skill deficits (i.e., “can’t do”; the student generally has enough willingness to complete assigned tasks, but demonstrates poor accuracy, underdeveloped skills, and gaps in foundational skills) or performance deficits (i.e., “won’t do”; the student has demonstrated understanding and good accuracy, but due to behavior or motivation, chooses not to complete academic tasks). A third category is similar to skill deficits; students may have basic understanding or even good accuracy, but fail to perform skills fluently, which is often the result of insufficient opportunities to practice. In most cases, the information collected across the interviews with the teacher and student, direct observations, review of permanent products, and direct assessment of academic skills will be enough to indicate which category (or combination) best characterizes the student’s academic performance problems. Nevertheless, there may be situations in which these questions, particularly in terms of differentiating skill from performance deficits, remain unanswered at this stage of the assessment. In such instances, evaluators can consider experimental procedures to determine whether intervention should target skill instruction, motivation, or some combination. Procedures for this type of “can’t do/won’t do” assessment have been described clearly in several publications (Codding et al., 2009; Duhon et al., 2004; Gansle, Noell, & Freeland, 2002; Parker et al., 2012; VanDerHeyden & Witt, 2008). In short, these procedures involve setting up a series of conditions in a single-case experimental design and examining how the student’s performance on a measure of interest varies as a function
of the condition. The conditions include, at a minimum, (1) the student’s “baseline” performance under typical expectations and task demands and (2) adding a component to improve motivation, such as offering a reward for meeting a performance goal. Some implementations add a third condition, in which instruction in the target skill is provided. Beginning with a baseline, the conditions would be randomly presented and repeated. The reward and instruction conditions can also be combined. Differentially higher performance in the reward condition relative to the others suggests a performance deficit, whereas stronger performance in the instruction condition suggests a skill deficit. It is also possible for performance to be strongest for the combination of instruction and incentive. Readers will note that this approach is very similar to brief experimental analysis (BEA), which we criticized earlier as an assessment for identifying reading fluency interventions. We distinguish the experimental analysis of skill versus performance deficits discussed here from BEA procedures that attempt to determine an ideal intervention from several instructional or practice strategies. BEA of that type often includes problematic expectations about how immediately a complex academic skill can change, and often conflates performance on a measure with actual learning. Experimental can’t do/won’t do analysis need not include an instruction condition (VanDerHeyden & Witt, 2008), thus keeping the focus on ruling a performance deficit in or out and avoiding problematic assumptions about the immediacy of skill improvement. However, it is important to note that an experimental can’t do/won’t do assessment is prone to some of the same problems as BEA for testing different instruction strategies. Most importantly, normal variability in the measures used to test performance across conditions must be considered; if scores between two or more conditions fall within the standard error of the measure, firm conclusions about the results are not possible. The procedures also take time and resources, and should only be undertaken if the data are necessary for designing intervention. The skill being measured should be considered; performance deficits are more plausible in areas such as writing or solving mathematics problems, where sustained effort and motivation often translate to greater quality, output, and accuracy. A performance deficit is far less plausible for a skill like reading fluency, which is driven primarily by automatic word recognition. It is also the case that any intervention for a student with academic difficulties can benefit from including motivational components, or such components can be added later if engagement or behavior becomes problematic, which often makes an experimental analysis to determine a performance deficit unnecessary. Still, if considered judiciously, and the information is necessary, an empirical test of an incentive condition compared to baseline expectations may be helpful for determining whether motivational components should be included in an intervention.
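As a rough illustration of the decision logic just described, the sketch below compares median scores from repeated baseline, incentive, and (optionally) instruction conditions, and treats differences smaller than the measure’s standard error as inconclusive. It is a minimal sketch under assumed data and thresholds, not a prescribed analysis; the function names and example scores are hypothetical:

from statistics import median

def compare_conditions(scores_by_condition, sem):
    """Compare median scores across can't do/won't do conditions.

    scores_by_condition: dict mapping condition name -> list of scores
        from repeated administrations (e.g., words or digits correct per minute).
    sem: standard error of measurement for the measure used.
    Gains over the baseline median within 1 SEM are treated as inconclusive.
    """
    medians = {name: median(scores) for name, scores in scores_by_condition.items()}
    baseline = medians["baseline"]
    gains = {name: m - baseline for name, m in medians.items() if name != "baseline"}
    meaningful = {name: g for name, g in gains.items() if g > sem}
    if not meaningful:
        return f"No condition exceeded baseline by more than 1 SEM ({sem}); results inconclusive."
    best = max(meaningful, key=meaningful.get)
    return f"Medians: {medians}. Largest gain over baseline: '{best}' (+{meaningful[best]})."

# Hypothetical data from three randomly alternated conditions.
data = {
    "baseline": [22, 25, 24],
    "incentive": [38, 35, 40],      # markedly higher, consistent with a performance deficit
    "instruction": [26, 27, 25],
}
print(compare_conditions(data, sem=5))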
SUMMARIZING THE DATA COLLECTION PROCESS AND REVISITING THE HYPOTHESIS Thus far we have described the first two phases of the assessment process: collection of data about the instructional environment and direct assessment of academic skills. Information is obtained through the teacher interview, rating scales, direct observation, examination of permanent products, student interview, and direct assessment. Decisions that can be made from these data include identification of variables, factors, and skill deficits directly affecting the student’s academic performance within the teaching and learning process; recommendations for interventions; the student’s instructional level and
materials that may be most ideal for intervention; setting long- and short-term goals; and evaluation of the effectiveness of designed interventions. It is key at this stage to revisit the hypothesis formed early in the assessment process regarding the nature and causes of the student’s academic difficulties. Data collected from the direct assessment are reviewed to determine whether the information thus far confirms the hypothesis, whether refinements or revisions to the hypothesis should be made based on the data collected, and whether the assessment has raised additional questions in need of investigation. The evaluator should treat the assessment as an empirical process. Data should be viewed objectively, without preconceived notions of what the evaluator “wants to believe” about the case. There should be no hesitation to revise a hypothesis when subsequent data reveal it to be incorrect, no matter how strongly the evaluator felt about the original hypothesis. The value of this empirical, hypothesis-driven approach is that the data are the guide. Subjective opinions are minimized as much as possible, and a well-supported hypothesis helps translate the data into intervention recommendations. Readers are reminded that the assessment is not over at this point. Shapiro’s model, the models of CBA and CBE on which it was based, and other similar frameworks such as data-based intensification view “assessment” as an ongoing process that extends through intervention implementation. The intervention itself is a form of test, and the progress monitoring data collected during intervention reveal whether the selected strategies result in improved student performance. If not, additional assessment is conducted to determine what adjustments in instruction should be made. Intervention is therefore part of the assessment process because it helps reveal what does and does not work for improving a student’s achievement. To facilitate the process of integrating the data from their various sources, we provide the Data Summary Form for Academic Assessment in Appendix 4D and in the Academic Skills Problems Fifth Edition Workbook. This form can be used for aggregating and summarizing the results from all aspects of the assessment so far, which should facilitate intervention development and report writing. Frameworks for revising the hypothesis statement are provided here as well. At this point, the evaluator is ready to develop intervention recommendations and consider how the student’s progress will be monitored. These are the topics of the chapters that follow.
APPENDIX 4A
Procedures for Developing Curriculum-Based Oral Reading Fluency Probes The following are procedures for deriving oral reading passages that appeared in previous editions of this text; they are provided here in the event that examiners do not have access to a controlled set of reading passages from a publisher or a CBM materials vendor. We remind readers that using passages from CBM vendors (e.g., Acadience, AIMSweb, DIBELS, easyCBM, FastBridge) saves time and results in more reliable data than deriving passages from the curriculum. Several CBM vendors offer free access to their materials for use with individual students. If it is necessary to create a set of passages from curricular materials, it is important that the passages be controlled for readability level, regardless of which reading series is used for instruction. It is often difficult to find passages from the curriculum of instruction that are always within the grade level that one is assessing. As such, passages are viewed as acceptable if they are no more than ±1 grade level from the level that the evaluator is assessing. In other words, if the evaluator is assessing grade 2, the passage should range in readability from grades 1 to 3. Determining the readability of passages can be done with the use of current computer technology. Most word-processing programs (e.g., Microsoft Word) offer a built-in readability formula that can be accessed easily. However, users need to understand that the only readability programs available through Microsoft Word are the Flesch Reading Ease and Flesch–Kincaid formulas, both of which have limitations in use with children. Programs are available commercially that offer calculations of readability across many different formulas (www.readability-software.com; OKAPI reading probe generator at www.interventioncentral.org). The product offered by Readability-software.com provides formulas that include the Dale–Chall, Spache, Fry, Flesch Grade Level, Flesch Reading Ease, SMOG, FORCAST, and Powers–Sumner–Kearl. Each formula uses a somewhat different basis on which readability is determined. For example, the Spache formula is vocabulary-based and useful for elementary-level materials from primary through fourth grade. Similarly, the Dale–Chall and Flesch Grade Level formulas are vocabulary-based measures that are useful for assessing readability for upper-elementary and secondary-level materials. Other formulas, such as the FOG and the Flesch Reading Ease, examine the number of syllables within words and are applied to rate materials used in business and industry, and other adult materials. The Fry index cuts across a wide range of materials from elementary through college levels. Typically, the Spache, Dale–Chall, and Fry formulas are the most commonly used in determining readability for material that would typically be used in developing a CBA. Although readability formulas can be useful in providing general indicators of the difficulty levels of materials, they usually are based on only one aspect of the text, such as vocabulary or syllables in words. Another measure of readability that focuses on multiple aspects of text complexity is the Lexile® framework for reading (www.lexile.com). Lexile measures are available for a wide range of commercially published literature typically used in schools. In addition, passages available from several CBM vendors have been graded using the Lexile framework.
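To show how the formula-based approach works, the sketch below computes the two indices built into most word processors, the Flesch Reading Ease and Flesch–Kincaid grade-level formulas, from word, sentence, and syllable counts. The syllable counter is a crude heuristic (commercial readability software uses dictionaries or better rules), so treat the output as approximate; this is offered only to illustrate how such indices are derived, not as a replacement for the tools listed above:

import re

def count_syllables(word):
    # Crude heuristic: count groups of consecutive vowels; drop a silent final 'e'.
    word = word.lower()
    count = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and count > 1:
        count -= 1
    return max(count, 1)

def flesch_scores(text):
    sentences = max(len(re.findall(r"[.!?]+", text)), 1)
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    wps = len(words) / sentences           # average words per sentence
    spw = syllables / len(words)           # average syllables per word
    reading_ease = 206.835 - 1.015 * wps - 84.6 * spw
    grade_level = 0.39 * wps + 11.8 * spw - 15.59
    return round(reading_ease, 1), round(grade_level, 1)

sample = ("The dog ran to the park. He saw a friend there. "
          "They played until the sun went down.")
ease, grade = flesch_scores(sample)
print(f"Flesch Reading Ease: {ease}, Flesch-Kincaid Grade Level: {grade}")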
CONSTRUCTING ORAL READING PROBES 1. For each book in a reading series, the evaluator should select three 150- to 200-word passages (for first through third grades, 100- to 150-word passages): one from the beginning, one from the middle, and one from the end of the book. This selection provides a total of three passages for each book in the reading series and reflects a reasonable range of the
material covered in that grade level of the series. Readability of each of these passages is checked against a common formula (Spache for materials up to fourth grade, Dale–Chall for those above). Passages that do not fall within one (0.5 is better) grade level of the identified material are discarded. Closer to grade level is always better. To facilitate the scoring process, the evaluator copies the passage on a separate sheet with corresponding running word counts placed in the right-hand margin. For preprimers and primers, shorter passages may be used. In addition, the differentiations between preprimers may not be salient enough to warrant separate probes for each individual book. In these cases, it is recommended that only the last of the preprimer books be used for purposes of assessment. Another issue that sometimes emerges is that a reading series may have more than one level assigned to a single book. Although it is only necessary to assess by book, not level within books, some examiners may wish to create a series of probes for each level within the book. This is a perfectly acceptable practice, but it may lengthen the assessment period considerably. Selected passages should not have a lot of dialogue, should be text (i.e., not poetry or plays), and should not have many unusual words. It is not necessary to select passages only from the beginning of stories within the text. 2. The evaluator should make two copies of each passage selected. One copy will be for the child to read and the other copy will be used to score the child’s oral reading. The evaluator may consider covering their copy with a transparency or laminating the probe so that it can be reused. 3. For each probe, the evaluator should develop a set of five to eight comprehension questions. These questions should require literal comprehension (i.e., the information is stated in the text) and inferential comprehension (i.e., information is not explicitly stated and students must make an inference to answer correctly). Comprehension questions can be developed for each probe, although it may not be necessary to check comprehension on each passage. Idol et al. (1996), Howell and Nolet (1999), and Blachowicz and Ogle (2001) offer excellent suggestions for developing comprehension questions. If comprehension difficulties are expected, it may be better to use a measure designed to assess reading comprehension specifically, in addition to measuring student’s oral reading fluency. Available reading comprehension measures are described in Chapter 4. The issue of whether to administer comprehension questions in a CBA is rather controversial. Results of numerous studies have consistently demonstrated that oral reading rate is a strong predictor of comprehension skills, especially up through the late elementary grades (e.g., for a review and meta-analysis, see Reschley et al., 2009). Correlations between measures of comprehension and oral reading rate are consistently higher than .60 in most studies. Many practitioners, however, are very uncomfortable with not assessing comprehension skills directly. Anyone working with students referred for academic problems has come across occasional students referred to as word callers, students with what appears to be adequate word and text reading fluency, yet having significant difficulties in reading comprehension. 
Although the presence of word callers among students is often overestimated by teachers, there is evidence that the prevalence increases above grade 3 (Hamilton & Shinn, 2003; Meisinger et al., 2009). Failure to assess comprehension skills for such a student could lead one to erroneous conclusions about their reading level. Reading fluency and reading comprehension are intertwined; therefore, it makes sense to check the student’s comprehension at least in some manner during this process. In this way, the evaluator can feel more comfortable with the relationship between oral reading rate and comprehension. The comprehension screen will also provide the evaluator with additional information on the depth of a student’s reading difficulties.
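Step 1 of the probe-construction procedure above mentions preparing an examiner copy with running word counts in the right-hand margin. A small script like the following, our own illustration rather than a published tool, can print such a scoring copy and flag whether the passage length falls in the suggested range (the word range and line width are parameters the evaluator would adjust):

def scoring_copy(passage, words_per_line=12, min_words=150, max_words=200):
    """Print a scoring copy with cumulative word counts in the right margin."""
    words = passage.split()
    total = 0
    for i in range(0, len(words), words_per_line):
        line_words = words[i:i + words_per_line]
        total += len(line_words)
        print(f"{' '.join(line_words):<80} {total:>4}")
    status = "within" if min_words <= len(words) <= max_words else "outside"
    print(f"\nPassage length: {len(words)} words ({status} the {min_words}-{max_words} word range).")

# Hypothetical passage text repeated to reach a realistic length.
scoring_copy("Long ago a small fox lived at the edge of a quiet forest. " * 14)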
APPENDIX 4B
Administration and Scoring of Examiner-Created Mathematics Computation Probes
Measures of procedural computation (i.e., multidigit operations in addition, subtraction, multiplication, or division that require the use of algorithms or procedures) are available in nearly every test of mathematics and from most CBM vendors. If examiners find it necessary to develop their own procedural computation probes for diagnostic and intervention-development purposes, the following guidelines that appeared in previous editions of this text are provided.
1. The examiner should sample problems from the student’s curriculum of instruction that reflect the types of problems the student is expected to learn. Generally, no more than two different probe sheets for each skill or cluster of skills are administered. The evaluator should give the probe to the child and tell them to work out each problem, going from left to right without skipping any. If the child does not know how to do a problem, they should be encouraged to make any effort they believe is appropriate for solving it.
2. If the evaluator believes the student’s performance on the probe was not indicative of their best efforts, a second probe of the same skills can certainly be administered. Likewise, if the evaluator finds that the student is highly frustrated by the skill being assessed, they can stop the student short of the time permitted for the probe, and the evaluator does not need to administer a second probe at the same level. It is important, however, to note the amount of time the student worked on the math probe.
3. Math probes can be scored by examining the process of how the student reached the answer or just the answer itself. For diagnostic and intervention-development purposes, it is informative to examine the student’s process rather than the answers alone. When scoring for process, each of the probes is scored by counting the separate digits in an answer. For all skills except long division, all the digits the student writes below the problem are counted. For example, in a two-digit addition problem with regrouping, markings the student makes above the 10’s column are not counted. The evaluator should count the number of digits, both correct and incorrect, for each probe. If the child completes the worksheet before their time is up, the evaluator should divide the number of digits by the total number of seconds and multiply by 60. This equals the digits correct (or incorrect) per minute. The mean score for all probes of the same skills serves as the score for that skill or cluster.
A problem encountered in scoring the math probes may occur when students are doing double-digit multiplication. An error in multiplication will result in incorrect scores when the student adds the columns. Even though all operations are correct, a single mistake in multiplication can result in all digits being incorrect. When a student makes an error in multiplication, digits should be scored as correct or incorrect if the addition operations are performed correctly. For example, the problem
  45
× 28
 360
  90
1,260
shows 10 correct digits (9 digits plus the placeholder under the 0). Suppose the problem has been completed as follows:
  45
× 28
 360
  80
1,160
The problem is scored as having 8 correct digits (7 digits plus the placeholder under the 0) because the student multiplied incorrectly but added correctly. Keep in mind that the purpose of scoring problems this way is to examine where the student’s errors in procedural computation occur, thus providing diagnostic information useful for determining what intervention should target. Scoring the student’s problem-solving process provides a sensitive index for doing so. In addition to scoring problems for digits correct and incorrect in the problem-solving process, it can also be helpful to score the probes for the percentage of problems completed correctly. This is a more commonly used metric in classrooms and may be helpful in communicating with the teacher.
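The rate calculation described above (digits divided by seconds worked, multiplied by 60, then averaged across probes of the same skill) can be illustrated with a short sketch. The function names and sample numbers below are hypothetical:

def digits_per_minute(digits, seconds_worked):
    """Convert a digit count to a per-minute rate for one probe."""
    return digits / seconds_worked * 60

def skill_score(probes):
    """Mean digits-correct and digits-incorrect per minute across probes of one skill.

    probes: list of (correct_digits, incorrect_digits, seconds_worked) tuples.
    """
    correct_rates = [digits_per_minute(c, s) for c, _, s in probes]
    incorrect_rates = [digits_per_minute(i, s) for _, i, s in probes]
    return sum(correct_rates) / len(probes), sum(incorrect_rates) / len(probes)

# Two 2-minute probes of the same skill; the student finished the second in 90 seconds.
probes = [(24, 6, 120), (21, 3, 90)]
correct_dpm, incorrect_dpm = skill_score(probes)
print(f"Digits correct/min: {correct_dpm:.1f}, digits incorrect/min: {incorrect_dpm:.1f}")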
APPENDIX 4C
A Computation Skills Mastery Curriculum
GRADE 1
1. Add two one-digit numbers: sums to 10
2. Subtract two one-digit numbers: combinations to 10
GRADE 2
3. Add two one-digit numbers: sums 11–19
4. Add a one-digit number to a two-digit number—no regrouping
5. Add a two-digit number to a two-digit number—no regrouping
6. Add a three-digit number to a three-digit number—no regrouping
7. Subtract a one-digit number from a one- or two-digit number—combinations to 18
8. Subtract a one-digit number from a two-digit number—no regrouping
9. Subtract a two-digit number from a two-digit number—no regrouping
10. Subtract a three-digit number from a three-digit number—no regrouping
11. Multiplication facts—0’s, 1’s, 2’s
GRADE 3
12. Add three or more one-digit numbers
13. Add three or more two-digit numbers—no regrouping
14. Add three or more three- and four-digit numbers—no regrouping
15. Add a one-digit number to a two-digit number with regrouping
16. Add a two-digit number to a two-digit number with regrouping
17. Add a two-digit number to a three-digit number with regrouping from the 10’s column only
18. Add a two-digit number to a three-digit number with regrouping from the 100’s column only
19. Add a two-digit number to a three-digit number with regrouping from 10’s and 100’s columns
20. Add a three-digit number to a three-digit number with regrouping from the 10’s column only
21. Add a three-digit number to a three-digit number with regrouping from the 100’s column only
22. Add a three-digit number to a three-digit number with regrouping from the 10’s and 100’s columns
23. Add a four-digit number to a four-digit number with regrouping in one to three columns
24. Subtract two four-digit numbers—no regrouping
25. Subtract a one-digit number from a two-digit number with regrouping
26. Subtract a two-digit number from a two-digit number with regrouping
27. Subtract a two-digit number from a three-digit number with regrouping from the 10’s column only
28. Subtract a two-digit number from a three-digit number with regrouping from the 100’s column only
29. Subtract a two-digit number from a three-digit number with regrouping from the 10’s and 100’s columns
30. Subtract a three-digit number from a three-digit number with regrouping from the 10’s column only
31. Subtract a three-digit number from a three-digit number with regrouping from the 100’s column only
32. Subtract a three-digit number from a three-digit number with regrouping from the 10’s and 100’s columns
33. Multiplication facts—3–9
GRADE 4
34. Add a five- or six-digit number to a five- or six-digit number with regrouping in any column
35. Add three or more two-digit numbers with regrouping
(continued)
36. Add three or more three-digit numbers with regrouping
37. Subtract a five- or six-digit number from a five- or six-digit number with regrouping in any column
38. Multiply a two-digit number by a one-digit number with no regrouping
39. Multiply a three-digit number by a one-digit number with no regrouping
40. Multiply a two-digit number by a one-digit number with regrouping
41. Multiply a three-digit number by a one-digit number with regrouping
42. Division facts—0–9
43. Divide a two-digit number by a one-digit number with no remainder
44. Divide a two-digit number by a one-digit number with remainder
45. Divide a three-digit number by a one-digit number with remainder
46. Divide a four-digit number by a one-digit number with remainder
GRADE 5
47. Multiply a two-digit number by a two-digit number with regrouping
48. Multiply a three-digit number by a two-digit number with regrouping
49. Multiply a three-digit number by a three-digit number with regrouping
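If an examiner chooses to build computation probes keyed to this sequence (see Appendix 4B), individual skills translate naturally into simple problem generators. The sketch below is an illustration of ours rather than part of the curriculum; it generates problems for skill 5 (add a two-digit number to a two-digit number, no regrouping) by constraining each column sum to stay below 10:

import random

def two_digit_addition_no_regrouping(n_problems, seed=None):
    """Generate two-digit + two-digit addition problems that never require regrouping."""
    rng = random.Random(seed)
    problems = []
    for _ in range(n_problems):
        # Ones digits sum to less than 10; tens digits sum to less than 10 and keep both addends two-digit.
        ones_a = rng.randint(0, 9)
        ones_b = rng.randint(0, 9 - ones_a)
        tens_a = rng.randint(1, 8)
        tens_b = rng.randint(1, 9 - tens_a)
        problems.append((tens_a * 10 + ones_a, tens_b * 10 + ones_b))
    return problems

for a, b in two_digit_addition_no_regrouping(5, seed=1):
    print(f"{a} + {b} = ____")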
APPENDIX 4D
Data Summary Form for Academic Assessment The purpose of this form is to assist in aggregating and summarizing the data from the academic assessment. Skip any questions or sections that are not relevant or were not assessed. This form is provided to assist the evaluator in decision making and report writing but is not to be used as the report itself. Child’s name: Teacher: Grade: School: School district: Date: Teacher-Reported Primary Area of Concern: Teacher-Reported Secondary Area of Concern: Teacher-Reported Areas of Relative Strength: TEACHER-REPORTED STUDENT BEHAVIOR Rate the following areas from 1 to 5 (1 = Never, 5 = Always): a. Stays engaged (on-task) during teacher-led large-group instruction b. Stays engaged (on-task) during teacher-led small-group instruction c. Stays engaged (on-task) during partner work or independent work d. Follows directions e. Shows effort and persistence, even when work is difficult
f. Asks for help when needed
g. Completes tests or classwork in allotted time
h. Completes homework on time
i. Engages in behaviors that disrupt instruction or peers’ learning
Student’s behavior is reported to be more problematic in: Summary of Additional Behavior Rating Scale Data (if administered) Scale: Summary of scores (include subscale scores and total scores, as applicable) and percentile ranks: Behavior notes and additional information: (continued)
READING—SKILLS Teacher-Reported Primary Area(s) of Reading Difficulty: Title of curriculum series, materials used for reading instruction: Reading skills expected of student at present grade and time of year: Summary of Direct Assessment Scores of Reading Skills (leave cells blank if not assessed or applicable) Keystone Skill Area
Test/Measure/Subtest
Raw Score    Correct/Errors (if applicable)    Standard Score    Percentile Rank
Vocabulary, Language Comprehension
Phonological Awareness
Alphabetic Knowledge
Word Reading, Decoding
Spelling
Text Reading Efficiency
Reading Comprehension
If Word Reading/Decoding skills are a concern, what types of words does the student find challenging? What types of words is the student more successful at reading?
(continued)
Reading Survey-Level Assessment: Results of Oral Reading Passages Administered
Grade Level/Book
Median Scores for Level: Words Correct per Minute (WC)    Words Incorrect per Minute (ER)    % Passage Correct (%C)
Learning Level (Independent, Instructional, Frustrational)
1 2 3 1 2 3 1 2 3 1 2 3 Summary of review of reading permanent products: READING—ENVIRONMENT Instructional Procedures: General nature and focus of reading instruction: Proportion of large-group, small-group, independent work: Number of reading groups: Student’s reading group (if applicable): Allotted time/day for reading: Student aware of rules/expectations/routines: Yes No Contingencies: Observations:
None completed for this area
System used: BOSS Other (continued)
Setting of observations: ISW:TPsnt
SmGp:Tled
Co-op
ISW:TSmGp
LgGp:Tled
Other
BOSS results: Target Peer Target Peer AET% AET% OFT-M% OFT-M% PET% PET% OFT-V% OFT-V% OFT-P% OFT-P% TDI% Intervention Strategies Previously Attempted: Simple Moderate Intensive STUDENT INTERVIEW—READING
Not completed for this area
Understands expectations of teacher
Yes
No
Not sure
Understands assignments
Yes
No
Not sure
Feels they can do the assignments
Yes
No
Not sure
Likes the subject
Yes
No
Not sure
Feels they are given enough time to complete assignments
Yes
No
Not sure
Feels as if they are called on to participate in discussions
Yes
No
Not sure
Feels as if they can improve in [referred skill area] with effort and support
Yes
No
Not sure
Reading—Overall Summary Notes: MATHEMATICS—SKILLS Mathematics curriculum/program used for instruction: Specific problems in mathematics:
(continued)
Summary of Direct Assessment Scores of Mathematics Skills (leave cells blank if not assessed or applicable) Keystone Skill Area
Raw Score    Correct/Errors (if applicable)    Standard Score    Percentile Rank
Test/Measure/Subtest
Early Numerical Competencies
Number Combinations
Procedural Computation
Word-Problem Solving (and/or Pre-Algebraic Reasoning)
Rational Numbers
Geometry and Measurement
Algebra
Summary of review of mathematics permanent products: MATHEMATICS—ENVIRONMENT Instructional Procedures: Allotted time/day: Teaching procedures, how instruction is divided, etc.: Contingencies: (continued)
Observations:
None completed for this area
System used: BOSS Other Setting of observations: ISW:TPsnt
SmGp:Tled
Co-op
ISW:TSmGp
LgGp:Tled
Other
BOSS results: Target Peer Target Peer AET% AET% OFT-M% OFT-M% PET% PET% OFT-V% OFT-V% OFT-P% OFT-P% TDI% Mathematics Intervention Strategies Attempted: Simple Moderate Intensive STUDENT INTERVIEW—MATHEMATICS
None completed for this area
Understands expectations of teacher
Yes
No
Not sure
Understands assignments
Yes
No
Not sure
Feels they can do the assignments
Yes
No
Not sure
Likes the subject
Yes
No
Not sure
Feels they are given enough time to complete assignments
Yes
No
Not sure
Feels as if they are called on to participate in discussions
Yes
No
Not sure
Feels as if they can improve in [referred skill area] with effort and support
Yes
No
Not sure
Mathematics—Overall Summary Notes: WRITING—SKILLS Types of writing assignments and expectations for students at present grade and time of year: (continued)
Summary of Direct Assessment Scores of Writing Skills (leave cells blank if not assessed or applicable) Test/Measure/Scale/Subtest (and scoring metric, if applicable)
Keystone Skill Area
Score
Standard Score    Percentile Rank
Transcription: Handwriting, Writing Fluency
Transcription: Spelling
Grammar, Syntax
Composition Quality
Summary of review of writing permanent products: Are low motivation and/or low self-regulation involved in the student’s writing difficulties? Are oral language difficulties involved in the student’s writing difficulties? WRITING—ENVIRONMENT Instructional Procedures: Allotted time/day: Teaching procedures, activities: Observations:
None completed for this area
System used: BOSS Other Setting of observations: ISW:TPsnt
SmGp:Tled
Co-op
ISW:TSmGp
LgGp:Tled
Other (continued)
BOSS results: Target Peer Target Peer AET% AET% OFT-M% OFT-M% PET% PET% OFT-V% OFT-V% OFT-P% OFT-P% TDI% School MTSS/RTI Model (complete only if the school has one) Grade levels and skill areas covered by the model: For how many years has the model been in place?
Less than 1 year
1 year
2 years
3+ years
Is this student assigned to tiered instruction beyond Tier 1? No
Yes, skill area(s):
To which tier is the student currently assigned? Tier 1
Tier 2
Tier 3
Other
Describe the interventions that have been used for the student (as applicable): Tier 2 Tier 3 What are the benchmark scores (and measures) for the student in the current year (if available)? Fall Winter Spring What are the expected benchmark scores (and measures) for the student in the current year (if available)? Fall Winter Spring What is the student’s rate of improvement (ROI) for progress monitoring (if available)? Expected ROI Targeted ROI Attained ROI Hypothesis Development Primary area of difficulty: Suspected skill deficits that are the reason for the difficulty: Difficulties with behaviors or learning-related skills that may be contributing to the problem: Possible environmental and instructional factors contributing to the problem: Relative strengths (academic or social/behavioral) that may mitigate the problem: (continued)
Hypothesis Statement Framework. This is meant as a guide to assist hypothesis writing. It should be refined and revised as needed (e.g., relevant aspects added or irrelevant aspects omitted). Separate hypotheses can be written for secondary areas of difficulty. ’s difficulties in [reading/mathematics/writing] appear to be due to inadequate or underdeveloped skills in . These difficulties appear [or do not appear] to be related to the student’s behaviors or learning-related skills, which include . The student’s difficulties appear [or do not appear] to be related to instructional or classroom environment factors, which may include
.
Compared to their area(s) of difficulty, the student demonstrates relative strengths in
.
CHAPTER 5
Step 3: Instructional Modification I GENERAL STRATEGIES AND ENHANCING INSTRUCTION
BACKGROUND The most practical approaches to intervention are those that target the academic skills, instructional techniques, and classroom environmental variables that are most relevant to the student’s academic performance. The hypothesis-driven assessment process described in the previous chapters identifies the skills and variables that are most likely the cause of the student’s academic difficulties. The next step involves connecting the assessment results to strategies, programs, and procedures that research has demonstrated are effective for improving student achievement, directly target the academic skills and environmental variables of interest, and are consistent with the evaluator’s hypothesis on what will best meet the student’s needs. Whether and to what extent the interventions concentrate on specific academic skills, increasing opportunities to practice, antecedent or consequent events surrounding academic performance (e.g., making instruction more explicit and improving instructional feedback), or the curricular material itself (e.g., changing the difficulty of the material used for instruction and/or practice), or some combination of these variables, is based on the data gathered through teacher interviews, direct observations, direct assessment of academic skills, rating scales, reviews of permanent products, and student interviews. The resulting intervention procedures selected for a student should (1) directly target academic deficits or environmental variables suspected to have a causal effect on the student’s academic performance, and (2) have research showing that they improved academic skills in similar situations or use evidence-based instructional principles. However, actual validation of whether the selected strategies improve the student’s performance comes after they are implemented and progress monitoring data are examined to determine the student’s response. Thus, although the evaluator uses the assessment data to select strategies that research evidence says are effective in improving the student’s target skills and behaviors, only those that result in improvement in the student’s performance (as indicated by subsequent progress monitoring data) are retained
over time. Those interventions that are not found to result in the desired change are adjusted or discarded, but only for that individual student. Another student, with a similar pattern of skill difficulties, may be responsive to the same evidence-based procedure that was ineffective for a different student. In other words, the assessment allows the evaluator to make a data- and research-informed prediction about what strategy or set of strategies will be effective, but knowing for certain only comes through monitoring progress. In the following two chapters, we discuss evidence-based options for initially identifying intervention strategies.
Conceptualizing Intervention Intensity Intervention strategies for academic skills remediation can be conceptualized along a continuum ranging from general and least invasive, to specific and most intensive. As illustrated in the examples in Figure 5.1, the continuum can be characterized by the extent to which the strategies alter instruction, their specificity to particular skills or situations, and the resources required. General strategies involve minimal or no changes to instruction or the program being used and are applicable across different academic skills and skill difficulties. The focus of interventions at this level is increasing student engagement and motivation, adding prompts and reminders for the student to use strategies they have learned, teaching self-management strategies, or adjusting other aspects that do not involve changes to instruction. These interventions are usually easy to implement and require minimal time and resources. Strategies at this level are consistent with antecedent behavior strategies because they aim to increase the likelihood that a student will demonstrate the desired skill or behavior when asked. As such, these approaches would be appropriate for students who have “performance” difficulties; they understand what to do and how to do it, but their academic performance is low due to low motivation, forgetting procedures,
FIGURE 5.1. Continuum of intensity of instructional modifications and interventions. The figure arrays strategies across three levels: General and Simple Strategies (no changes to instruction), including rules and expectations, behavior-specific praise, group contingencies, self-management strategies (cueing and self-instruction, self-monitoring, goal setting and self-graphing), and point systems and incentives; Moderate Strategies (enhance existing instruction), including making instruction more explicit, improving instructional feedback, a brisk pace of instruction, and increasing opportunities to respond and practice (built-in practice, cooperative peer tutoring, peer-assisted learning strategies, cooperative learning); and Specific and Intensive Strategies (changes to instruction), including reading, mathematics, writing, and vocabulary interventions.
or other factors unrelated to the actual skill. Using the strategies assumes that either (1) current instruction is appropriate and does not require changes, or (2) these strategies are used in conjunction with instructional adjustments described later. When general strategies are unsuccessful, or when the student is clearly in need of strategies to remediate skill deficits or build fluency, moderate strategies that focus on enhancing existing instruction or programs are implemented. As noted in Figure 5.1, such interventions aim to improve instruction, usually by making it more explicit and increasing the pace, making feedback (both affirmative and corrective) more frequent and effective, increasing opportunities for students to respond and practice, and adding reviews of previously taught skills. Note the emphasis is on enhancing how content is taught and increasing students’ practice opportunities, rather than changing the focus of instruction itself. Students for whom these types of approaches are most appropriate are those for whom the assessment revealed skill deficits (i.e., so-called “acquisition” deficits), partially learned or insecure skills, a need for more practice to build automaticity, or some combination of these. Students with academic difficulties may require interventions that are more intensive and focused on specific academic skills. As indicated in Figure 5.1, these strategies involve focusing on skills specific to certain academic areas (i.e., reading, mathematics, writing), individualizing intervention, and other changes that make instruction more targeted and individualized. These changes may be implemented in conjunction with general or moderate strategies, but may require more time, resources, staff, and professional development than less intensive approaches. For example, an intervention plan may include individualized tutoring to improve a student’s word problem-solving skills (i.e., specific and intensive) that is supplemental to general mathematics instruction, but this tutoring could be paired with strategies to increase the number of opportunities to respond in general mathematics instruction (i.e., moderate) and a self-monitoring procedure to improve the student’s task engagement (i.e., general). Clearly, in this volume, we cannot cover every intervention that has demonstrated success for improving academic skills or describe them in great detail. Rather, we focus on the strategies and instructional frameworks that research indicates provide good chances for success and can be implemented across various skill areas. Thus, this chapter and Chapter 6 are designed to help readers build knowledge of “what works” to align effective interventions with the specific needs identified through the assessment, and to provide a jumping-off point for readers interested in learning more about specific strategies. The present chapter is focused on strategies and frameworks that generally fall within the first two levels of intensity in Figure 5.1, and have demonstrated evidence for success across different types of academic skills. Selected strategies and intervention programs that are more intensive in nature and specifically designed for remediating basic skills in reading, mathematics, and writing are described in Chapter 6. Throughout all of the descriptions, we encourage readers to consider the types of situations in which they might implement a strategy or some combination of them.
There is significant value in considering less intensive interventions first. Less intensive interventions make fewer (or no) changes to instruction, and therefore are usually easier to implement and use fewer resources while meeting the student’s needs. School resources in terms of teachers’ time, students’ time, physical space, and budgets must be protected. Certainly, the extent and severity of the student’s academic difficulties will have bearing on the level of strategies that are initially selected, and the extent to which instruction needs to be altered. If the assessment data for a very low-performing student
clearly indicate the need for changes to instruction, then the intervention recommendations should be consistent with that need. However, there is no need to consider interventions that are more intensive or costly than what is needed. More importantly, intervention can be intensified when progress monitoring indicates the student is not making adequate progress. For initial recommendations, opt for interventions on the low- to midrange of the intensity spectrum, as appropriate, and make adjustments or increase the intensity if needed.
GENERAL AND SIMPLE STRATEGIES FOR ACADEMIC PROBLEMS: LESS INTENSIVE APPROACHES THAT DO NOT REQUIRE CHANGING INSTRUCTION Consistent with the first box in Figure 5.1, the following strategies are those considered less intensive from the perspective that they do not require changes to instruction. Rather, they aim to improve a student’s motivation, engagement, and attention to previously learned strategies and procedures. These strategies increase the likelihood that students will demonstrate expected skills or behaviors when required, similar to antecedent adjustments to the environment. Readers will see close connections between these strategies and the variables they assess in the instructional environment (see Chapters 2 and 3). Use of these strategies assumes, at that time, that the current instruction or intervention program is appropriate and does not require alterations. Or, one or more of these strategies can be used in conjunction with strategies described later that make improvements to instruction.
Establishing and Reinforcing Rules, Expectations, and Routines Students’ behavior and achievement are stronger when they are more aware of the rules and expectations for the classroom, activities, and assignments (see a literature review by Oliver et al., 2011). Rules, expectations, and routines establish a predictable environment and provide a basis for teaching and reinforcing appropriate student behaviors. When an assessment reveals that a student has little knowledge of what they are expected to do, recommendations should include better communicating the rules and expectations for instruction and activities. This may involve establishing a predictable sequence of activities each day, posting and discussing the schedule, and reminding students of upcoming activities. It may also involve teaching students what is expected of them in assigned tasks (e.g., what elements are needed in a daily writing response), teaching expected behaviors for specific activities and settings (e.g., how to respond or ask for help), and what to do when students are finished with their work. Rules and expectations should include the whole-class/core-instruction setting, and any small-group or intervention settings, even if the rules are different across those settings. Expectations should be briefly reviewed with students at the beginning of each activity, rather than assuming students know what to do. Myers et al. (2017) provide an excellent set of recommendations for establishing and teaching rules, expectations, and routines. They include (1) establishing a small but important set of rules for different classroom settings and situations, and involving students in the development and wording of the rules; (2) publicly posting the rules in the classroom so that students can see them and so they can be referred to readily; (3) explicitly teaching and practicing the rules in the same way that academic skills are taught; and (4) praising and reinforcing students when they follow the rules and expectations.
Behavior‑Specific Praise One of the simplest but most effective ways to reinforce students’ effort, following rules and expectations, and other desired behaviors is behavior-specific praise. Behavior-specific praise refers to praise statements that (1) identify the student (or group of students) and (2) identify the behavior one wishes to reinforce. For example: “Naya, I love how you sat down and got right to work”; “Team 2 is working quietly, awesome job”; and “Max, very nice raising your hand.” Behavior-specific praise is effective in improving students’ on-task behavior because, first, it reiterates classroom expectations for the student(s) being praised, thus further reinforcing expected behaviors, and, second, it provides cues to other students in the vicinity about what they should be doing. For a literature review on the effects of behavior-specific praise, see Royer et al. (2019), and for a step-by-step guide on implementing behavior-specific praise, see Ennis et al. (2018).
Group Contingencies Group contingencies represent a set of strategies in which all students in a group receive a reward depending on the behavior or performance of one student, a subgroup of students, or all students in the group. The most commonly implemented type is the interdependent group contingency (Maggin et al., 2012), in which all students in a group receive a reward if each of them meets a predetermined criterion (e.g., number of points earned). The power of interdependent group contingencies comes from the positive peer influences and cooperation the strategies foster. Appropriate behavior and work completion become a team effort. When implemented effectively, students recognize the cooperative nature of this approach and (1) work harder to meet the expectations, and (2) encourage each other to do the same. There are numerous types of interdependent group contingencies and they have been extensively studied. Research reviews of group contingency interventions have observed that although interdependent group contingencies were implemented the most, all types were associated with significant improvements in students’ behavior and academic engagement (Little et al., 2015; Maggin, Johnson, et al., 2012; Maggin, Pustejovsky, et al., 2017). One of the longest-standing and most effective interdependent group contingency strategies is the Good Behavior Game (Barrish et al., 1969), which has demonstrated significant effects on improving student behavior on both a short- and long-term basis (for research reviews, see Bowman-Perrott et al., 2016; Embry, 2002; Johansson et al., 2020; Smith et al., 2019). Longitudinal positive effects have even been observed in young adulthood for students who participated in the Good Behavior Game in the first or second grade (Kellam et al., 2008). The Good Behavior Game has been adapted for multiple situations and settings, including whole-class and small-group interventions, and even the school cafeteria (McCurdy et al., 2009). Group contingency interventions can be used to improve students’ behavior, work completion, or work quality. Use of such strategies requires caution, however, to make sure that the behavior of the same student is not repeatedly the reason the group fails to earn a reward, which can lead to negative interactions and bullying directed toward the student. Excellent summaries of types of group contingency interventions, descriptions of how to implement them and handle challenges, sample materials, and keys to success are provided by Chow and Gilmour (2016), Ennis (2018), and Pokorski (2019).
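The three variants of group contingencies differ mainly in whose performance determines the group reward, a distinction that can be summarized in a few lines of logic. The sketch below is a schematic illustration of ours (the names and point criterion are hypothetical), not a component of the Good Behavior Game or any published program:

def group_earns_reward(points, criterion, contingency="interdependent", target_student=None):
    """Decide whether a group earns its reward under a given contingency type.

    points: dict mapping student name -> points earned.
    criterion: points needed to meet the goal.
    contingency: 'interdependent' (every student must meet the criterion),
                 'dependent' (one target student's performance decides for everyone), or
                 'independent' (evaluated per student; returns a dict instead of a bool).
    """
    if contingency == "interdependent":
        return all(p >= criterion for p in points.values())
    if contingency == "dependent":
        return points[target_student] >= criterion
    if contingency == "independent":
        return {name: p >= criterion for name, p in points.items()}
    raise ValueError(f"Unknown contingency type: {contingency}")

team = {"Naya": 8, "Max": 6, "Leo": 7}
print(group_earns_reward(team, criterion=6))   # True: every member met the criterion
print(group_earns_reward(team, criterion=7))   # False: one member fell short
print(group_earns_reward(team, criterion=7, contingency="independent"))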
Self‑Management Interventions Self-management refers to a broad class of strategies designed to promote an individual’s independence, productivity, and self-control. Self-management techniques involve metacognition: an individual’s ability to monitor their own thoughts, thought processes, behaviors, and procedures in solving a problem. As such, self-management procedures are excellent ways of building a student’s self-regulation and the skills that promote learning. They have substantial appeal for working with students with difficulties and disabilities because they promote task engagement, effort, and persistence in a more global and generalized way across time and settings (e.g., Graham et al., 1992; Holman & Baer, 1979), outcomes that are important for many students with academic and behavior difficulties (e.g., Fowler, 1984; Hughes et al., 2002; Koegel et al., 1999; Rief, 2007; Shapiro, 1981; Shapiro & Cole, 1994). In addition, self-management procedures are efficient and cost-effective because, once established, they require only minimal involvement of others. They can be effectively applied across a very wide range of behaviors, academic tasks, and situations, and can be implemented from kindergarten through adulthood. You probably use self-management techniques all the time, such as goal-setting techniques for improving task completion and work productivity, self-monitoring weekly exercise or weight loss, or reminder strategies to remember to take medication. Even the use of a daily to-do list is a form of self-management. Self-management is an umbrella term and includes a number of strategies within it (Browder & Shapiro, 1985; Fantuzzo & Polite, 1990; Fantuzzo et al., 1987; Kanfer, 1971; Mace & West, 1986). Here, we focus on three self-management approaches that are most applicable to academic achievement and can be effectively integrated within school-based interventions: (1) cueing, self-instruction, and self-regulated strategies; (2) self-monitoring; and (3) goal setting and self-graphing.
Cueing, Self-Instruction, and Self-Regulated Strategy Techniques

There are situations in which students understand (or have an emerging understanding of) how to complete academic tasks, but they require prompts, cues, or reminders to complete them accurately and consistently. These situations may arise more often when students must complete multistep problems or activities (e.g., solving word problems or writing an essay), manage materials to be prepared for class, or remember cues for responding accurately. Strategies in this domain might delineate the sequence of steps students should follow to complete a multistep problem or task, such as the steps in solving a mathematics word problem. Or, they may provide prompts for applying a set of strategies for responding correctly and independently, such as decoding strategies for reading words, or prompts to use counting strategies for solving number combinations. These prompts might include cue cards, a list of steps accessible to the student (e.g., taped to the student's desk, inside a folder, or on a card), or a checklist. Conderman and Hedin (2011) provide an excellent description of procedures for using cue cards to support student achievement across various academic and behavioral domains. For example, they note that a cue card might summarize the steps in adding fractions, pair vowels with a word or picture to remind students of the sounds the vowels make, or list the key activities in studying for a test. As another example, Gureasko-Moore et al. (2006, 2007) taught adolescents with ADHD to use checklists to improve their organization of school materials, preparedness for class, and homework completion.
Self-instruction involves thinking aloud as one engages in a multistep task or procedure. It is designed to focus the student's thoughts on effective problem-solving steps, sequences, or strategies. Although the technique was first described by Meichenbaum and Goodman (1971) for reducing impulsive behavior in children, the procedure has since been applied to many areas, including increasing on-task behavior (e.g., Bornstein & Quevillon, 1976; Manning, 1990), social skills (e.g., Cartledge & Milburn, 1983; Combs & Lahey, 1981; Lochman & Curry, 1986; Maag, 1990), and academic skills (e.g., Fox & Kendall, 1983; Mahn & Greenwood, 1990; Roberts et al., 1987; Swanson & Scarpati, 1984).

Johnston and colleagues (1980) taught three children with an intellectual disability to add and subtract with regrouping by training them to make specific self-statements related to performing the task accurately. Training was conducted in a 20- to 30-minute session, during which the children were given problems to complete. The instructor then modeled self-instruction by asking and answering a series of questions. The self-instruction training was based on the guidelines established by Meichenbaum and Goodman (1971) and involved the following: (1) The trainer first solved the problem using the self-instructions while the child observed; (2) the child performed the task while the trainer instructed aloud; (3) the child spoke aloud while solving the problem with the help of the trainer; (4) the child performed the self-instructions aloud without trainer prompting; and finally, (5) the child performed the task using private speech. Figure 5.2 shows an example of the self-instructions employed in the study. Results of this study demonstrated that self-instruction training can be an effective strategy for teaching such skills. Similar results were found in a follow-up study (Whitman & Johnston, 1983).

Shapiro and Bradley (1995) provided another example of using a self-instruction training procedure to improve the skills of a 10-year-old, fourth-grade boy, Ray, who was having significant problems in math. Specifically, Ray was experiencing difficulties learning to do subtraction with regrouping. Using a self-instruction training methodology, a four-step "cue card" was used to teach Ray how to solve two-digit minus one-digit subtraction problems with regrouping (see Figure 5.3). Individual sessions were held between Ray and his teacher three times per week across a 4-week period. As seen in Figure 5.4, Ray showed an immediate response to the procedure, with increased performance over the 4-week period.

Similar to self-instruction are self-regulated strategy development (SRSD) techniques, which involve teaching steps for completing complex problems or tasks. SRSD often uses mnemonics, picture cues, or checklists to promote student independence as students learn the strategies. As will be discussed in Chapter 6, self-regulated strategy techniques have been applied very effectively across interventions specific to writing (Harris et al., 2002), reading comprehension (Sanders, 2020), and mathematics (Montague, 2007, 2008). We offer some additional examples here.

An approach to self-regulation strategies was described by Cullinan et al. (1981). Called academic strategy training, it involves conducting a task analysis to divide complex tasks into a series of component steps. Figure 5.5 provides an example of an attack strategy used for teaching multiplication facts.
The strategy is taught directly to students using multiple examples of appropriate and inappropriate application. Students practice the strategy until they demonstrate mastery, at which time they should be able to perform the task correctly with any items from the response class of the task. Studies investigating strategy training have shown the procedure to be effective in teaching handwriting and composition skills (e.g., Blandford & Lloyd, 1987; Graham & Harris, 1987, 2003; Graham et al., 2009; Kosiewicz et al., 1982), word reading accuracy (Lloyd, Hallahan, et al., 1982), reading comprehension (e.g., Blachowicz & Ogle, 2001; Lysynchuk et al.,
Q. What kind of a problem is this?
A. It's an add problem. I can tell by the sign.

  36
+ 47

Q. Now what do I do?
A. I start with the top number in the 1's column and I add. Six and 7 (the child points to the 6 on the number line and counts down 7 spaces) is 13. Thirteen has two digits. That means I have to carry. This is hard, so I go slowly. I put the 3 in the 1's column (the child writes the 3 in the 1's column in the answer) and the 1 in the 10's column (the child writes the 1 above the top number in the 10's column in the problem).

Q. Now what do I do?
A. I start with the top number in the 10's column. One and 3 (the child points to the 1 on the number line and counts down 3 spaces) is 4. Four and 4 (the child counts down 4 more spaces) is 8 (the child writes the 8 in the 10's column in the answer).

Q. I want to get it right, so I check it. How do I check it?
A. I cover up my answer (the child covers the answer with a small piece of paper) and add again, starting with the bottom number in the 1's column. Seven and 6 (the child points to the 6 on the number line and counts down 7 spaces) is 13 (the child slides the piece of paper to the left and uncovers the 3; the child sees the 1 that he or she has written over the top number in the 10's column in the problem). Got it right. Four and 3 (the child points to the 4 on the number line and counts down 3 spaces) is 7. Seven and 1 (the child counts down 1 more space) is 8 (the child removes the small piece of paper so the entire answer is visible). I got it right, so I'm doing well.

[If, by checking his or her work, the child determines that he or she has made an error, he or she says, "I got it wrong. I can fix it if I go slowly." The child then repeats the self-instruction sequence starting from the beginning.]

FIGURE 5.2. Example of self-instruction training sequence for addition with regrouping. From Johnston, Whitman, and Johnson (1980, p. 149). Copyright © 1980 Pergamon Press, Ltd. Reprinted by permission.
1990; Miller et al., 1987; Nelson & Manset-Williamson, 2006; Schunk & Rice, 1992), and arithmetic (e.g., Case et al., 1992; Cullinan et al., 1981; Griffin & Jitendra, 2009; Jitendra, Griffin, et al., 2007; Montague, 1989; Montague & Bos, 1986). Conceptually, these procedures are similar to self-instruction training and are often applicable across academic areas.

Deshler and Schumaker (1986) described a more general approach, called the strategies intervention model, designed to improve the academic achievement of secondary-level students. The purpose of these procedures is not to teach specific skills, as with self-instruction and SRSD, but to teach students important learning and study skills that are broadly applicable across content areas. The strategies intervention model begins by identifying the curriculum demands that the student is unable to meet (e.g., note taking, writing well-organized paragraphs). Once these weaknesses are identified, a specific strategy is taught. Different strategy programs have been developed for various types of problems and have been packaged into the learning strategies curriculum (Schumaker et al., 1983). The first strand of the curriculum includes a word-identification strategy (Lenz et al., 1984) and aims to improve
FIGURE 5.3. Example of self-instruction training sequence for subtraction with regrouping. From Shapiro and Bradley (1995, p. 360). Copyright © 1995 The Guilford Press. Reprinted by permission.
FIGURE 5.4. Results of self-instructional intervention. From Shapiro and Bradley (1995, p. 361). Copyright © 1995 The Guilford Press. Reprinted by permission.
TASK CLASS FOR MULTIPLICATION FACTS
Description: Multiplication of any number (0–10) by any number (0–10)
Examples: 0 × 6 = ; 10 × 1 = ; 3 × 9 = ; 7 × 4 = ; 8 × 8 =
Objective: Given a page of unordered multiplication problems written in horizontal form with factors from 0 to 10, the student will write the correct products for the problems at a rate of 25 problems correct per minute with no more than 2 errors per minute.

ATTACK STRATEGY FOR MULTIPLICATION FACTS
Attack Strategy: Count by one number the number of times indicated by the other number.
Steps in Attack Strategy (with example):
1. Read the problem. [2 × 5 = ]
2. Point to a number that you know how to count by. [Student points to 2]
3. Make the number of marks indicated by the other number. [2 × 5 = /////]
4. Begin counting by the number you know how to count by and count up once for each mark, touching each mark. [///// "2, 4 . . . "]
5. Stop counting when you've touched the last mark. [" . . . 6, 8, 10"]
6. Write the last number you said in the answer space. [2 × 5 = 10]

TASK ANALYSIS SHOWING PRESKILLS FOR MULTIPLICATION ATTACK STRATEGY
1. Say the numbers 0 to 100.
2. Write the numbers 0 to 100.
3. Name × and = signs.
4. Make the number of marks indicated by numerals 0 to 10.
5. Count by numbers 1 to 10.
6. End counting-by sequences in various positions.
7. Coordinate counting-by and touching-marks actions.

FIGURE 5.5. Task class, attack strategy, and task analysis of preskills for multiplication facts. From Cullinan, Lloyd, and Epstein (1981, pp. 43–44). Copyright © 1981 PRO-ED, Inc. Reprinted by permission.
skills in decoding multisyllabic words. Other strategies are used to improve reading comprehension, such as a visual imagery strategy (Clark et al., 1984), a self-questioning strategy (Clark et al., 1984), and a paraphrasing strategy (Schumaker et al., 1984). Finally, the multipass strategy (Schumaker et al., 1982) is used for dissecting textbooks, using methods similar to the Survey, Question, Read, Recite, Review (SQ3R) method of study. The second strand of the curriculum concentrates on note-taking and memorization skills, and the final strand emphasizes written expression and demonstrations of competence. Significant field testing and evaluation of the strategies intervention model have been conducted through the University of Kansas Institute for Research in Learning Disabilities and reported in journal articles and technical reports (e.g., Deshler & Schumaker, 1986, 1993; Deshler, Schumaker, Alley, et al., 1982; Deshler, Schumaker, Lenz, et al., 1984; Ellis & Lenz, 1987; Ellis et al., 1987a, 1987b; Tralli et al., 1996). Results have shown marked improvements in students' academic achievement following training.
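Returning briefly to the attack strategy in Figure 5.5: for readers who find it useful to see the count-by logic laid out step by step, the following is a minimal Python sketch of that strategy. It is purely illustrative; the strategy itself is something a student performs with marks on paper, not something a program does for them.

```python
def count_by_attack(a: int, b: int) -> int:
    """Sketch of the count-by attack strategy shown in Figure 5.5 (factors 0-10)."""
    count_by = a              # Step 2: pick a number you know how to count by
    marks = ["/"] * b         # Step 3: make one mark for each unit of the other factor
    last_said = 0
    for _ in marks:           # Steps 4-5: count by that number, touching each mark
        last_said += count_by  # "2, 4, 6, 8, 10"
    return last_said          # Step 6: write the last number said in the answer space


print(count_by_attack(2, 5))  # 2 x 5 = 10
```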
Self-Monitoring Strategies

Self-monitoring refers to a class of interventions in which an individual actively monitors and records their behavior over a period of time. Self-monitoring interventions have been effectively applied across age groups to promote numerous forms of academic and behavior change. In school settings, self-monitoring strategies have stood out for their practicality and positive effects on task engagement, work completion, response accuracy, and reducing problem behavior for students with or without disabilities across a wide range of ages and settings (for reviews, see Briesch & Chafouleas, 2009; Bruhn et al., 2020; Mooney et al., 2005; Reid et al., 2005; Sheffield & Waller, 2010; Webber et al., 1993). Meta-analyses have observed average effect sizes exceeding 0.82 on students' on-task behavior (Guzman et al., 2017; Reid et al., 2005). Self-monitoring strategies are advantageous as a support for academic achievement for several reasons:

1. They can be set up so that students self-monitor their engagement and on-task behavior (e.g., Harris et al., 2005; Rock, 2005), thereby providing a way to improve intervention effectiveness by directly targeting behaviors that promote learning.

2. They are highly portable, easy to adapt to a range of behaviors and situations, efficient, inexpensive, and can be easily integrated into an academic intervention routine (Menzies et al., 2009).

3. Improvements in behavior associated with self-monitoring tend to maintain over time after procedures have been faded (Reid et al., 2005), and generalize to settings not specifically targeted in the intervention (Mooney et al., 2005; Rhode et al., 1983; Wood et al., 2002).

4. Self-monitoring is a meta-cognitive activity; teaching students to self-monitor their academic engagement may make it easier to teach them to self-monitor skills important for the academic task itself (or facilitate those skills directly), such as monitoring their reading comprehension, procedures for solving mathematics problems, or the writing process.

5. Benefits of self-monitoring on students' academic engagement have been observed without external reinforcement (Graham-Day et al., 2010; Otero & Haut, 2016; Reid et al., 2005), although reinforcement may result in additional benefits.

Self-monitoring procedures are also viewed positively by teachers (Joseph & Eveleigh,
2011). Therefore, they may be more readily adopted by educators who are uncomfortable with external reward systems.

Self-monitoring procedures are very straightforward. Individuals are taught to self-monitor (i.e., self-observe and self-record) the presence or absence of a prespecified behavior at a specified time (or series of times). For instance, students can be taught to self-monitor whether they were attending to the teacher or their work (i.e., on-task) when signaled by a periodic timer. Or, students can be taught to self-monitor their work completion. The actual recording mechanism can simply involve tally marks using paper and pencil. Figure 5.6 provides an example of a self-monitoring form students used in one of our studies (Clemens et al., 2022), in which students self-monitored their engagement during a small-group reading intervention.

Ways to set up a self-monitoring program have been extensively described (e.g., Gardner & Cole, 1988; Hirsch et al., 2018; Nelson, 1977; Shapiro, 1984; Shapiro & Cole, 1994, 1999; Shapiro et al., 2002), but we provide an overview here. Setting up a self-monitoring program starts by clearly defining the behavior(s) the student will monitor. For example, if students are to monitor their on-task behavior at a given cue, they may need to be trained to discriminate between those behaviors that represent being "on-task" and those defined as "not on-task." Many studies utilizing self-monitoring procedures include an initial instruction period in which students are explicitly taught what the behavior means and what it looks like. This instruction includes practice in which students demonstrate and evaluate examples and non-examples of the behavior. It usually does not take long for students to learn the needed discrimination, and we have found that even students in the early elementary grades can quickly become very good discriminators of the target behavior. Studies of children with developmental disabilities (e.g., Copeland et al., 2002; Hughes et al., 1991; Koegel et al., 1999; Newman et al., 2000; Shapiro et al., 1984); preschool children (e.g., Connell et al., 1993; de Haas-Warner, 1991; Fowler, 1986; Harding et al., 1993); students with learning disabilities (e.g., Bryan et al., 2001; Hallahan et al., 1982; Reid, 1996; Smith et al., 1992; Trammel et al., 1994); students with ADHD and other behavioral–emotional disorders (e.g., Dunlap et al., 1995; Levendowski & Cartledge, 2000; Mooney et al., 2005; Shapiro et al., 1998; Shimabukuro et al., 1999); and students at risk for academic failure (e.g., Wood et al.,
FIGURE 5.6. Example of a student’s self-monitoring form for monitoring academic engagement.
2002) have all demonstrated successful self-monitoring with relatively little time needed for training.

Once the target behavior is identified, a procedure to cue students when to self-monitor is needed. Students can be cued to self-monitor via audio signals, such as periodic beeps or vibrations from a smartwatch or smartphone, or a timer set by the teacher to signal periodically. Or, the teacher can announce when it is time to record. In some settings, externally cued signals may both disrupt others in the classroom and interfere with instruction. In such cases, it is possible to teach students to self-monitor specific events, such as completing a worksheet or an individual problem on a worksheet. These types of cueing mechanisms may require some initial instruction but rarely are disruptive to others or to the teacher.

The procedure the student uses to self-monitor should be as simple as possible and require the least amount of effort on the part of the student and the teacher. A self-monitoring form may simply be a series of check boxes, such as our example in Figure 5.6. For younger students, the use of a so-called "countoon" (Jenson et al., 1994; Kunzelmann, 1970) may be helpful. This is simply a form with stick-figure or icon drawings representing the desired behavior. For example, if a student were to self-monitor in-seat behavior, a series of stick figures of a child sitting or not sitting in a seat might be drawn. The student simply circles the figure that corresponds to their behavior at each self-monitoring interval.

Self-monitoring alone may result in reactive effects: Simply observing one's own behavior may be sufficient to alter the behavior in the desired direction (e.g., Nelson & Hayes, 1981; Shapiro & Cole, 1994). You may have experienced this yourself, such as when tracking your weekly exercise leads you to exercise more often. Students may even show desired changes without any additional backup rewards (Graham-Day et al., 2010; Otero & Haut, 2016; Reid et al., 2005), although reinforcement may result in additional benefits (Bruhn et al., 2020).

Another important aspect of any self-monitoring program is ensuring that student reports of self-monitored behavior are accurate. Early in implementation, it is more important that students appraise their behavior accurately than that the behavior itself improve. Therefore, all instances of accurate self-monitoring should be praised, even if the student accurately indicated that they were off-task on every instance recorded. Reinforcement for engagement can come later, after accuracy has been established. Some studies have found that having teachers verify the accuracy of students' self-monitored behavior is necessary to maintain desired levels of behavior change (e.g., Lam et al., 1994; Rhode et al., 1983; Santogrossi et al., 1973; Smith et al., 1992). Teachers should plan on conducting "surprise" checks of self-monitoring accuracy throughout the implementation of an intervention program. These checks will need to be frequent in the beginning portion of the self-management treatment. Several sources offer descriptions of how to gradually reduce the teacher-checking procedure (Robertson et al., 1980; Rhode et al., 1983; Shapiro & Cole, 1994; Smith et al., 1988, 1992; Young et al., 1991).

Applications of contingency-based self-management procedures for academic skills have most often used forms of self-monitoring as the intervention strategy.
Many of these investigations have targeted academic skills indirectly by having students self-monitor on-task behavior or its equivalent. For example, in a series of studies conducted as part of the University of Virginia Institute for Research in Learning Disabilities (Hallahan et al., 1982), elementary-age students with learning disabilities were taught to self-monitor their on-task behavior (“Was I paying attention?”) at the sound of a beep emitted on a variable-interval, 42-second (range 11–92 seconds) schedule. Data on the number of
mathematics problems completed correctly on a series of three assigned sheets of problems were also recorded. The results of this study showed significant improvements in on-task behavior, but only modest changes in the number of problems completed correctly. Other studies by these same authors produced similar results (e.g., Hallahan et al., 1981; Lloyd et al., 1982).

Looking more carefully at this issue, Lam et al. (1994) compared the relative effects of self-monitoring on the on-task behavior, academic accuracy, and disruptive behavior of three students with severe behavior disorders. Of special interest in the Lam et al. study was the collateral effect of self-monitoring one of these behaviors on the others. Results showed substantial improvement in whichever behavior self-monitoring targeted (i.e., on-task behavior, academic accuracy, or disruptive behavior). However, the greatest improvement in the collateral behaviors occurred when self-monitoring was applied to academic accuracy rather than to the other two behaviors. Similar outcomes were found in several other studies (Carr & Punzo, 1993; Harris et al., 1994; Maag et al., 1993; Reid & Harris, 1993; Rock, 2005; Rock & Thead, 2007). In contrast, Harris et al. (2005) found that self-monitoring engagement resulted in stronger spelling performance than self-monitoring spelling accuracy.

McLaughlin et al. (1982) had six students with behavioral disorders record whether they were studying or not studying during an individual reading instruction period, at random intervals determined by the students. The procedure was employed within a classroom using a classwide token economy procedure. In a subsequent phase, students had to match teacher recordings to earn the appropriate points. Results showed that self-monitoring of studying behavior significantly improved the percentage of problems completed correctly, with greater gains when teacher matching was required.

Prater et al. (1992), working with an adolescent student with learning and behavior disorders, used self-monitoring supported by audio and visual prompts to increase the student's on-task behavior and academic performance. After the procedure was implemented in a resource room, a modified version of the self-monitoring procedure, using only visual prompts, was implemented in two general education classrooms (mathematics and language arts). Improvements in on-task behavior corresponded with the implementation of the self-monitoring procedures in all three settings. Similarly, Shimabukuro et al. (1999) demonstrated that having students with learning disabilities and ADHD self-monitor and graph their performance in reading comprehension, mathematics, and written expression significantly improved both on-task behavior and academic performance in all areas.

In summary, self-monitoring procedures are powerful and cost-effective strategies for improving students' behavior and academic performance. Self-monitoring is an excellent way to target students' on-task behavior, effort, and persistence through difficult tasks. These behavioral self-regulation skills are critical for long-term academic success.
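Before turning to goal setting, and for readers who want to see the cueing mechanics described above made concrete, the following is a minimal Python sketch of a variable-interval cue schedule of the kind a timer or app might follow. The session length and gap range are hypothetical placeholders, not values recommended by the studies reviewed in this section.

```python
import random

def cue_schedule(session_minutes=20, min_gap=30, max_gap=90):
    """Generate variable-interval cue times (seconds from the start of a session)
    at which a student is signaled to record "Was I paying attention?".
    The defaults are hypothetical examples, not research-based recommendations."""
    times, t = [], 0
    while True:
        t += random.randint(min_gap, max_gap)  # next cue after an unpredictable gap
        if t > session_minutes * 60:
            break
        times.append(t)
    return times


print(cue_schedule())  # e.g., [47, 112, 160, ...] for one 20-minute session
```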
Goal Setting and Self-Graphing

Goal setting refers to self-management techniques in which the student, with the support of a teacher, sets an academic or behavior target and plans the activities, steps, and resources needed to accomplish that goal (Schunk, 2003; Zimmerman, 2008). Goal setting is a way to increase a student's ownership of and engagement in their learning, which are elements of self-regulation associated with successful academic outcomes. Goal setting is often accompanied by having students graph or chart their progress toward the goal. It should be noted that the type of goal setting discussed here is different from the goal setting for progress monitoring discussed in Chapter 7.
Goals can involve broader achievement outcomes, such as scoring well on an upcoming mathematics exam, completing a book report, or achieving a desired grade in a course. Goals can also be specific to skills, such as increasing the number of mathematics problems completed in 1 minute, improving reading accuracy, or reducing spelling and grammatical errors in writing. Both broad and specific goals have been associated with improved academic achievement (Martin & Elliot, 2016; Moeller et al., 2012). Scholars have emphasized that effective goals are (1) based on specific, clear, measurable performance criteria (e.g., quantifiable scores or grades), which promotes better self-evaluation and problem solving; (2) challenging and reflective of meaningful skill improvement, while remaining attainable and realistic; (3) short-term (or long-term goals with a set of shorter interim goals), which can increase motivation and are more effective for younger students or students with low self-regulation; and (4) tied to a time by which the goal will be accomplished, which facilitates planning the strategies and supports that will be needed to accomplish the goal in time (Schunk, 2003; Zimmerman, 2008).

Self-graphing, in which students chart their scores toward a goal, has been used in interventions to improve reading, mathematics, and writing. Often these implementations have focused on having students set and track progress toward fluency-oriented goals, such as oral reading fluency (Morgan et al., 2012), number combinations or mathematics computation fluency (e.g., Codding et al., 2005), and writing fluency (e.g., Koenig et al., 2016).

Readers should be cautioned, however, that goal setting alone is appropriate only if the student's difficulties are primarily due to a lack of motivation, engagement, or effort. Goal setting alone is not appropriate if the student's skills are inadequate. For example, goal setting to build reading fluency can be appropriate if the student reads words with sufficient accuracy but their willingness to practice is low; in this case, goal setting can encourage motivation and provide a way for students to track the effects of reading practice. However, if a student's reading, mathematics, or writing difficulties are due to skill deficits rather than simply a lack of motivation (as they usually are), goal setting is appropriate only when it is paired with instruction aimed at addressing the skill deficits that are the root cause of the difficulties. For example, in reading, it is not appropriate to expect a student to simply "read faster" without addressing the underlying reasons why their reading is dysfluent, and encouraging students to read as fast as possible could have detrimental effects on their reading accuracy and comprehension. Additionally, goal setting alone risks sending the problematic message that proficiency is about "speed" and that students need to work as fast as possible, whereas the message they ought to receive is that the aim is to become more accurate, to understand and comprehend, and to practice so that reading, mathematics, or writing becomes easier (i.e., more efficient).

When used appropriately, goal setting can serve as an effective motivational component in conjunction with intervention and practice designed to improve skills. Indeed, studies indicate that intervention plus goal setting is associated with stronger improvement than intervention alone (Codding et al., 2009; L. S. Fuchs et al., 2003).
Occasions in which goal setting alone is appropriate are those in which the student clearly has the understanding and the skills in their repertoire, but performance is low due to a lack of engagement and motivation to practice.
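For teams that want to see the record-keeping behind self-graphing, the following is a minimal sketch of a student's weekly scores charted against a short-term goal. The skill, goal value, and scores are hypothetical and serve only to illustrate what a simple text-based "graph" of progress toward a goal might look like.

```python
# Hypothetical self-graphing record: weekly scores charted against a short-term goal.
goal = 30                                 # e.g., 30 digits correct in 1 minute (hypothetical)
weekly_scores = [18, 21, 20, 25, 28, 31]  # hypothetical scores, one per week

for week, score in enumerate(weekly_scores, start=1):
    bar = "#" * score                     # a crude bar the student could color in by hand
    note = "  <-- goal met!" if score >= goal else ""
    print(f"Week {week}: {bar} ({score}){note}")
print(f"Goal:   {'#' * goal} ({goal})")
```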
Point Systems and Incentives to Increase Engagement and Productivity

Although the self-management strategies described above can improve student motivation and engagement, some students may need additional incentives to improve their task engagement and work completion. For students with difficulties in reading, mathematics,
and writing, the things they need to do the most are the very tasks they find most difficult: practicing reading, mathematics, and writing. Thus, a system that reinforces students' engagement, effort, and persistence through challenging work may be necessary and very beneficial. Incentive systems can (and should) be integrated with almost any of the intervention strategies discussed in this text. As discussed in Chapter 6, several intervention programs include built-in systems for reinforcing students' engagement and effort.

Point systems and token economies involve the delivery of points or tokens for desirable behavior or work completion, which are later redeemed for privileges or rewards. They have long been used in numerous settings to promote appropriate behavior (Kazdin, 1982; Doll et al., 2013). Overall, in educational settings, evidence indicates positive effects of point systems and token economies on students' behavior (Doll et al., 2013; Maggin et al., 2011), although the extant research has methodological shortcomings that prevent strong conclusions about their effects. Recently, technology has provided innovative ways to implement point and token systems, such as ClassDojo, an application that adds gamification components to earning points. Robacker et al. (2016) discussed the beneficial aspects of ClassDojo and provided a tutorial for implementing it.
Summary: Less Intensive Strategies

In summary, several strategies that do not necessarily involve changes to instruction or intervention programs have been shown to improve students' engagement, motivation, effort, and consequently, their achievement. In addition to the strategies described above, school teams can consider other simple adjustments, such as changing the time of intervention from the afternoon to the morning (or vice versa) when the student is more alert, or changing the location of intervention to reduce distractions. Sometimes, the most effective strategies are also the simplest. Informed by the assessment, use of these strategies alone is appropriate for situations in which the student appears to have the requisite skills but their academic difficulties stem from insufficient effort, engagement, or motivation. In other cases, these strategies can be used in conjunction with the more intensive strategies described later. Ideal approaches to intervention might combine instructional changes with strategies to improve engagement, such as self-monitoring.
MODERATELY INTENSIVE STRATEGIES AND APPROACHES: ENHANCEMENTS TO INSTRUCTION TO MAKE IT MORE EFFECTIVE

The next set of strategies involves adjustments to instruction or intervention programs, but these strategies remain applicable across academic skill areas. They are aimed at improving and enhancing instruction rather than changing it altogether, usually by making it more explicit and systematic, providing greater opportunities for students to respond and practice, delivering more consistent and effective feedback, and improving instructional pace. Therefore, the following strategies are appropriate for students with deficits in academic skills who would benefit from stronger instruction and practice opportunities to improve acquisition and fluency.
Making Instruction More Explicit

Explicit instruction is a teaching approach that directly and unambiguously introduces skills, maximizes students' opportunities to demonstrate and practice the skills, and
provides immediate and clear feedback on student responses (Archer & Hughes, 2011). Readers may be familiar with the phrases model–lead–test or I do, we do, you do that exemplify how explicit instruction is carried out: (1) The teacher introduces the skill using a concrete exemplar, model, or demonstration (i.e., model; "I do"); (2) the teacher and the students practice the skill together, often through choral or unison responding (i.e., lead; "We do"); and (3) the teacher closely monitors while the students practice the skill on their own, and provides immediate affirmative or corrective feedback (i.e., test; "You do"). Explicit instruction is defined by clarity in presentation that eliminates student guesswork, concise language that minimizes "teacher talk" and maximizes correct student responding, and immediate, unambiguous feedback. The term direct instruction also refers to explicit instruction, as well as to the published explicit instruction procedures and programs conceptualized by Engelmann and his colleagues (e.g., Engelmann & Carnine, 1982) in reading and mathematics (Carnine et al., 2017; Stein et al., 2018). The term systematic instruction, sometimes used in conjunction with explicit instruction, refers to instruction that follows a logical, predetermined plan in which basic foundational skills and concepts are taught before more complex skills. In contrast, teaching approaches that expect students to learn when provided with materials and experiences but minimal instruction are considered opposites of explicit instruction (e.g., "discovery" or "experiential" learning; some forms of "whole-language" reading and writing instruction).

As noted in Chapter 2, the benefits of explicit instruction have been demonstrated across numerous studies in reading, mathematics, writing, and language/vocabulary (for reviews, see Ehri et al., 2001; Gersten, Chard, et al., 2009; Stockard et al., 2018; Gillespie & Graham, 2014; Marulis & Neuman, 2010; see Chapter 2 for additional review). Explicit instruction is effective for most students, but research has found that it is especially beneficial, and in many cases necessary, for students with learning and behavior difficulties. Time after time, research demonstrates that instruction that is clear and unambiguous, provides immediate and consistent affirmative and corrective feedback, and provides ample opportunities for students to practice a new skill results in superior learning compared to other approaches. Quite simply, if one is asked about the most effective type of instruction for students with learning difficulties and disabilities, the best answer is usually "explicit instruction." Therefore, one of the first aspects to consider in improving instruction for students with academic difficulties is making it more explicit.

There are several excellent resources for learning to implement explicit instruction, including texts covering its implementation across skills (Archer & Hughes, 2011) and applications specific to reading (Carnine et al., 2017) and mathematics (Stein et al., 2018). Other helpful articles include Doabler and Fien (2013), Hughes et al. (2017), and Rupley et al. (2009). The core principles of explicit instruction are the same across academic domains. The following is an example of an explicit instruction sequence for teaching the sound for the letter m:

1. Model ("I do"): The letter m is displayed in full view of the students.
The teacher models by pointing to the letter and saying, "Letter m makes the sound mmmm. My turn. Mmmm."

2. Lead ("We do"): While continuing to point to the letter, the teacher says, "Let's do it together: what sound does m make?" The teacher and the students say, "Mmmm" in unison.

3. Test ("You do"): The teacher points to the letter and says, "Your turn. What sound?" The students respond and the teacher provides immediate feedback. For
correct responses the teacher might say, "That's right, mmmm." In the event of a student error or a non-response, the teacher provides immediate error correction, for example, "My turn. Mmmm [while pointing to m]. Your turn. What sound?"

Note that modeling and demonstration are clear and unambiguous, thus leaving nothing to chance regarding students' understanding. Students are not left to guess wildly or try to "discover" the correct response. Teacher talk is minimized, and student responding is maximized. Correct responses receive affirmative feedback, and incorrect responses are corrected immediately. Although this sounds simple, many instructional programs have not been designed this way, and many teachers are trained to teach quite differently.

Archer and Hughes (2011) further articulated 16 elements of explicit instruction. These elements were derived from, and overlap considerably with, previous work and other models of explicit instruction (e.g., Brophy & Good, 1986; Rosenshine, 1987; Simmons et al., 1995). Space does not permit a description of all of them, but a subset deserves mention here given their importance in daily instruction generally, and in intervention specifically:
• Instruction is focused on essential skills and content.
• There are high rates of student success during instruction.
• The teacher's instruction is clear, concise, and unambiguous. Students should be able to understand exactly what the teacher means. Extraneous language should be minimized or eliminated.
• The teacher demonstrates skills clearly, step by step.
• Instruction and practice use a wide range of examples to promote skill generalization, with some use of non-examples (and making sure students know they are non-examples).
• Active student responding should be frequent, which maintains engagement, provides opportunities for learning, and allows the teacher to frequently check understanding.
• Instruction provides multiple opportunities for students to practice a new skill, with guidance and support. Learning a new skill requires practice, and the teacher should be there to provide affirmative or corrective feedback. The difficulty of the practice opportunities should increase as students demonstrate success with less difficult items.
• Always provide immediate affirmative and corrective feedback for students' responses, especially when skills are new. Learning is extremely difficult without feedback.
• The instructional pace is brisk, which provides more learning trials and more practice, and helps maintain engagement.
• Review occurs frequently. Begin each lesson with a review of what was learned in the previous lesson. On a weekly and monthly basis, provide distributed practice (i.e., multiple, repeated opportunities to practice a skill over time) and cumulative practice (i.e., practice that blends recently learned skills or content with previously learned skills or content).

In summary, if the assessment of the academic environment reveals that explicit instruction is lacking or inconsistent (see Chapter 3), strategies or coaching to help make it more explicit are a great place to start enhancing instruction.
School practitioners may encounter situations in which improvements to instruction across an entire grade level or school are needed. Enhanced Core Reading Instruction (ECRI; Fien et al., 2020; Smith et al., 2016) offers a way to enhance reading instruction by making it more explicit and systematic, without requiring a change in curriculum. ECRI includes a set of curriculum "overlays" designed to work with popular core reading programs. The overlays use the existing curriculum but add instructional routines, teacher language, and procedures that make instruction more explicit and increase opportunities for students to respond and practice. A second key component of ECRI is a supplemental (i.e., "Tier 2") small-group intervention in which the content is closely aligned with core instruction. Experimental studies of ECRI have shown considerable promise for improving reading achievement in the early elementary grades (Fien et al., 2020; Smith et al., 2016).
Giving Instructional Feedback That Is Immediate and Consistent

Instructional feedback refers to the information students receive about the accuracy and quality of their academic responses. Feedback should include both affirmative feedback that acknowledges a response was correct and corrective feedback that immediately corrects a student's inaccurate response. In Chapter 2, we discussed how important feedback is for learning a new skill, and how slowly and inconsistently learning can progress (or not occur at all) without feedback. Feedback is central to explicit instruction, as noted above. However, feedback is vital across instructional situations, and its importance holds even in situations in which explicit instruction is not involved. Unfortunately, teachers often overlook the importance of feedback. Therefore, it deserves specific attention.

Instructional feedback can be viewed more broadly as a way of providing students with information on their performance over a longer period of time, and Oakes et al. (2018) provide excellent recommendations for this. Here, however, we focus on instructional feedback that is part of the moment-to-moment interactions between the teacher and the student that are central to learning: the teacher prompts, the student responds, and the teacher provides feedback.

It cannot be stressed enough how important feedback is to students with academic difficulties or students at the early stages of acquisition. Errors are learning opportunities, and without feedback that clearly points out the error and how to correct it, learning is unlikely. Affirmative feedback is just as important; it indicates to the student that their attempt was correct, thereby reinforcing the strategy, connection, or thought process used to identify the correct answer. This is critical for students in the early stages of acquiring a skill, who often respond tentatively and without confidence. Thus, even correct responses are learning opportunities.

An assessment revealing that feedback during instruction is infrequent, inconsistent, unclear, or otherwise problematic should include recommendations to improve instructional feedback. The following are characteristics of effective feedback:
• It is brief, clear, and concise. Extraneous language is avoided. Feedback should never be ambiguous.
• It is delivered in a friendly tone, free of frustration and annoyance.
• Affirmative feedback need only be very brief, such as a quick "Good," "Right," or "Yes." Extra praise should be provided when students demonstrate clear effort, persistence, problem solving, and other learning-related behaviors that are important to reinforce.
• Affirmative feedback is especially important when students respond hesitantly and without certainty.
• Corrective feedback should clearly show what to do differently to correct the error and what the correct response is. Students should be given another opportunity to respond correctly following the corrective feedback.
• When providing corrective feedback, simply supplying the correct response helps maintain a brisk instructional pace and minimizes student frustration. This is particularly important for new skills that are not yet part of the student's repertoire. In other situations, when the student makes an error on something for which they had previously demonstrated accuracy, or when they have the skills to determine the answer (e.g., the student knows all the letter sounds in the word and has demonstrated the ability to sound out and blend words), it may be appropriate to prompt the student to try again (Shanahan, 2021). Allowing the student another opportunity to apply the skills or process to determine the answer, this time perhaps with greater attention, can reinforce the strategies that instruction has targeted. However, interventionists must be careful never to let the student struggle to the point of frustration, and should supply the correct response quickly if a correct response is not imminent. Do not let this waste valuable time. We reiterate that this second chance is provided only if the interventionist is confident the student has the skills to respond correctly. Providing the correct response more quickly moves instruction and practice along, and is especially important if the student has a low tolerance for frustration. What should be avoided at all costs are repeated student failures, which create frustration and discouragement (and multiple errors indicate the need for reteaching). When in doubt, simply provide the correct response.
• The frequency and level of feedback will vary depending on the type of skill and the stage of the student's acquisition of it. In the early stages of teaching a new skill or concept, affirmative feedback after every correct response is important for learning, and error correction should be detailed enough to ensure understanding. As the student demonstrates acquisition, affirmative feedback need not follow every response, especially when it is apparent the student knows they are correct. For example, when listening to a student read aloud during a practice activity and the student confidently reads words accurately, affirmative feedback can be more intermittent (but be sure not to drop it completely). Similarly, errors made by a student who has previously demonstrated the skill may call for a prompt to try again, rather than the detailed corrective feedback provided when the skill was new.

Establishing and Maintaining a Brisk Instructional Pace

In Chapter 2, we noted the importance of instruction having a brisk pace, meaning there is little downtime, and teacher talk or behaviors extraneous to instruction are minimized or eliminated. A brisk pace means instruction moves quickly but is not rushed. A brisk pace results in more instruction, more student opportunities to actively respond, greater student engagement, more accurate responding, and stronger skill acquisition (Becker & Carnine, 1981; Darch & Gersten, 1985; Rosenshine, 1979; Skinner, Fletcher, &
Henington, 1996). A brisk pace is accomplished partly through teaching experience, but instructional frameworks such as explicit instruction inherently facilitate a better pace. Other factors include using programs and curricula that are structured systematically (which promotes ease of implementation), prior preparation and planning on the part of the teacher, and good classroom behavior management to reduce interruptions.
Increasing Student Opportunities to Respond and Practice

Research has consistently demonstrated that learning is strongest when students have a high number of opportunities to actively respond to academic material and requests (Common et al., 2019; Dehaene, 2020; Delquadri et al., 1983; Doabler et al., 2019; Greenwood, Delquadri, et al., 1984; Leahy et al., 2019; MacSuga-Gage & Simonsen, 2015; Sutherland & Wehby, 2001). Students learn best by doing; learning is maximized when they are actively engaged in instruction and frequently provide academic responses by answering questions, reading, solving problems, and writing. This is true during instruction when new content is presented, and afterward, when students should have multiple opportunities to practice the new skill. There is no academic skill that can reach proficiency without practice; proficiency in reading, mathematics, and writing is made possible by doing a lot of it.

Opportunities to respond (OTR) and frequent practice opportunities represent key aspects of the instructional environment. High OTR also provide the teacher with feedback on students' accuracy and understanding, which helps signal when content should be reviewed or new content taught. Thus, increasing OTR and practice time should be an intervention recommendation whenever the assessment reveals these opportunities are lacking. The "I do, we do, you do" sequence of explicit instruction inherently results in high OTR; therefore, explicit instruction is one way to build more OTR. However, teacher-directed instruction with a group of students (especially a large group) still limits the number of OTR for individual students. Choral responding (i.e., students responding in unison) to a teacher prompt or question is one way to increase OTR for all students (Carnine et al., 2017), but with a larger group it can be difficult to verify the accuracy of all students' responses.

Fortunately, there are many other ways to increase OTR and practice opportunities. The most obvious is dedicating time in each lesson for students to practice the target skill. Skills vary in the extent to which students can practice them independently. Reading practice is best accomplished when students read aloud so that errors can be corrected (Carnine et al., 2017); silent, independent reading practice is ineffective for struggling readers and should be avoided (Hasbrouck, 2006). On the other hand, mathematics problems or writing can be more appropriate for independent practice when the teacher is available to offer help and provide feedback on the student's work soon after completion. The cover–copy–compare technique, which we discuss in Chapter 6, provides opportunities for students to practice independently across academic areas. Here, we describe frameworks and techniques for increasing students' practice opportunities while working with a partner or in cooperative small groups.
Peer Tutoring

Peer tutoring refers to structured procedures in which students work in pairs (i.e., dyads). These procedures offer an efficient way to dramatically increase practice opportunities when working with larger groups of students. Numerous examples have been reported in
the literature, but what follows is an overview of the systematic development and evaluation of these methods by two research groups. Greenwood and his associates at the Juniper Gardens Children's Project in Kansas City, Kansas (e.g., Delquadri et al., 1986; Greenwood, 1991; Greenwood, Delquadri, et al., 1989, 1997; Greenwood, Terry, et al., 1992, 1993; Greenwood, Carta, et al., 1993) were some of the first to develop and fully investigate effective models of classwide peer tutoring (CWPT). Likewise, Lynn and Doug Fuchs and their associates at Vanderbilt University extended the model of peer tutoring across the age ranges in developing their peer-assisted learning strategies (PALS; Calhoun & Fuchs, 2003; D. Fuchs, L. S. Fuchs, & Burish, 2000; D. Fuchs et al., 2001; L. S. Fuchs, D. Fuchs, Phillips, et al., 1995; L. S. Fuchs, D. Fuchs, & Kazdan, 1999; L. S. Fuchs, D. Fuchs, & Karns, 2001; L. S. Fuchs, D. Fuchs, Yazdian, et al., 2002; McMaster et al., 2006).

The underlying conceptual framework of Greenwood and colleagues' CWPT procedures was based on extensive evidence that rates of student engagement in academic tasks are uniformly low across regular and special education settings (Berliner, 1979; Greenwood, 1991; Greenwood, Delquadri, et al., 1985; Haynes & Jenkins, 1986; Leinhardt et al., 1981). Hall et al. (1982) and Greenwood, Dinwiddie, et al. (1984) operationalized academic engagement as active student responding. Specifically, they reasoned that a necessary condition for academic progress is the frequent presentation of opportunities for students to actively respond, such as writing, reading aloud, asking or answering questions, and solving problems. These concepts were demonstrated and replicated empirically in several studies (Greenwood, 1996; Greenwood, Delquadri, et al., 1984; Greenwood, Dinwiddie, et al., 1984, 1987; Greenwood, Horton, et al., 2002; Greenwood, Terry, et al., 1993).

CWPT was developed specifically as a technique for increasing students' OTR. In particular, the strategies were designed to be used within larger, general education classes that contained students representing a full spectrum of achievement levels, including students with disabilities. Traditional classroom instruction often involves the teacher asking a single question and calling on a single student to respond, thus limiting OTR for other students. When a peer-tutoring procedure is employed, half the students in the class (assuming students are working in pairs) can respond in the same amount of time as a single student during teacher-directed instruction. Elliott, Hughes, and Delquadri (cited in Delquadri et al., 1986) reported that some children improved their academic behavior from 20 to 70% as a result of peer-tutoring procedures. Greenwood (1991) reported the long-term impact of time spent engaged in academic instruction for 416 first-grade students who were followed for 2 years. Among students from economically disadvantaged backgrounds, those whose teachers implemented a CWPT program outperformed students who did not receive peer tutoring on measures of academic engagement and achievement.

IMPLEMENTING CWPT
CWPT involves nine components: (1) weekly competing teams; (2) tutor–tutee pairs within teams; (3) points earned for correct responding; (4) a modeling error correction procedure; (5) teacher-mediated point earning for correct tutor behavior; (6) switching of tutor–tutee at midsession; (7) daily tabulation of point totals and public posting on a game chart; (8) selection of a winning team each day and each week; and (9) regular teacher assessments of students’ academic performance, independent of tutoring sessions. For most subject areas, the tutoring sessions are divided into 30-minute blocks—10 minutes
of tutoring for each student and 5–10 minutes for adding scores and posting team outcomes. Additional information on the specifics of implementing a CWPT program is available in a manual entitled Together We Can!, published by Sopris West (Greenwood et al., 1997), and on the Special Connections website at the University of Kansas, within the Instruction subsection (www.specialconnections.ku.edu). A typical CWPT process is described below, although modifications of the exact procedures are certainly made for individual classroom situations. Readers interested in pursuing the full implementation of CWPT should examine the Greenwood et al. (1997) manual, which incorporates what is known to work best in classrooms, how CWPT can be applied schoolwide, how CWPT can be sustained over time, and how to use computer technology to enhance outcomes.

Each Monday, students are paired through a random selection process. Individual ability levels are not considered in the assignment of tutors. Children remain in the same tutor–tutee pairs for the entire week. Each pair of students is also assigned to one of two teams for the week. When tutoring sessions begin, a timer is set for 10 minutes, and the tutee begins the assigned work. The specific academic task assigned can be reading sentences aloud, reading words from a word list, spelling dictated words, completing assigned mathematics problems, or any other academic task desired by the teacher. For example, in reading sentences, the tutee begins reading sentences aloud to the tutor. Tutors give 2 points for each sentence read without errors; 1 point is earned for successfully correcting an error identified by the tutor. Tutors are instructed to correct errors by pronouncing the correct word or words and having the tutee reread the sentence until it is correct. In spelling, points are based on the tutee's oral spelling of each word and then, if not correct, writing the word three times. Throughout the tutoring session, the teacher circulates around the room, providing assistance to tutors and tutees and awarding bonus points to pairs for cooperative tutoring. Tutees are also given bonus points for responding immediately when asked questions by their tutors. After 10 minutes, tutors and tutees reverse roles, and the same procedures are followed.

At the end of all tutoring sessions for the day, individual points are summed and reported aloud to the teacher. Individual points are recorded on a large chart at the front of the classroom, and team totals are determined. No rewards other than applause for winning efforts are provided to the teams. On Fridays, the week's tutoring is assessed by the teacher. Each child is assessed using CBMs in the academic skills tutored that week. Students who continue to have difficulties with certain skills may be instructed directly by the teacher outside of tutoring sessions.

Students are taught the tutoring process before they begin. Training is conducted using explanation, modeling, role playing, and practice. During the first day of training, the teacher presents a brief overview of the tutoring program and demonstrates, with a teacher aide or consultant, how errors are corrected, how points are administered by tutors, and how points are recorded on student point sheets. Students practice tabulating points and reporting these results to the teacher.
On the second day of training, students practice tutoring, with feedback from the teacher and consultant regarding identifying errors, using the correction procedure, using praise, and tabulating points. If needed, a third day of practice is held. Students learn the procedures quickly and can begin tutoring after the first or second day. It may be necessary to continue to train younger students for a few more days, however.
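To make the point-tallying arithmetic concrete, the following is a minimal sketch of how one day's CWPT scores might be recorded and summed. The point values (2 points for an error-free response, 1 point for a corrected error, plus teacher-awarded bonus points) and the two-team structure follow the description above; the function and variable names, and the sample data, are illustrative assumptions and are not part of the published CWPT materials.

```python
# Illustrative sketch of CWPT scoring: 2 points per correct response,
# 1 point per corrected error, plus teacher-awarded bonus points.
# Names and data structures are hypothetical, not from the CWPT manual.

from collections import defaultdict

POINTS_CORRECT = 2      # e.g., sentence read without errors
POINTS_CORRECTED = 1    # error corrected after tutor feedback

def student_points(num_correct, num_corrected, bonus=0):
    """Total points one tutee earns in a 10-minute tutoring turn."""
    return num_correct * POINTS_CORRECT + num_corrected * POINTS_CORRECTED + bonus

def team_totals(session_records):
    """Sum daily points by team for posting on the public game chart.

    session_records: list of dicts like
        {"student": "Ana", "team": "Red", "correct": 12, "corrected": 3, "bonus": 2}
    """
    totals = defaultdict(int)
    for rec in session_records:
        totals[rec["team"]] += student_points(rec["correct"], rec["corrected"], rec["bonus"])
    return dict(totals)

if __name__ == "__main__":
    records = [
        {"student": "Ana",  "team": "Red",  "correct": 12, "corrected": 3, "bonus": 2},
        {"student": "Ben",  "team": "Red",  "correct": 10, "corrected": 5, "bonus": 0},
        {"student": "Cara", "team": "Blue", "correct": 14, "corrected": 1, "bonus": 1},
        {"student": "Dev",  "team": "Blue", "correct": 9,  "corrected": 4, "bonus": 0},
    ]
    print(team_totals(records))  # {'Red': 54, 'Blue': 52}
```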
The teacher’s role during tutoring sessions involves initially determining dyads, timing the sessions, monitoring tutoring and awarding bonus points for correct tutoring, answering questions as needed, and tabulating and posting points. After each session, the teacher reviews point sheets to assess student accuracy and honesty in reporting and assesses academic progress using CBMs once each week, usually on Fridays.
PEER-ASSISTED LEARNING STRATEGIES
The Fuchs’ PALS model, which was based on CWPT, modifies the basic procedure in several ways. First, higher- and lower-performing students are paired together, rather than being paired through the random assignment approach typically used in CWPT. The roles are reciprocal, with students taking turns tutoring each other. The higher-performing student typically completes content first while the lower-performing student monitors (which provides a preview), and then the lower-performing student completes the same content while the higher-performing student monitors and provides feedback. Second, students are trained to use specific prompts, error correction procedures, and feedback. During the tutoring, verbal exchanges between the tutor and tutee are expected and structured according to the activity used in the peer-tutoring process. In the mid-elementary grade reading version of PALS, the specific activities include partner reading with retell, paragraph shrinking, and prediction relay. Each of these involves structured exchanges designed to systematically allow the tutor and tutee to interact. The PALS procedure is designed to be implemented three times per week and requires about 35 minutes per session. PALS programs have been developed for both reading and mathematics for kindergarten through secondary grades, and materials are available from the Fuchs Research Group (frg.vkcsites.org).
A particularly important feature of the PALS program is the nature of scripted activities in subject areas. For example, during the paragraph shrinking activity, which is designed to foster summarization and reading comprehension skills, tutors ask their tutees simple questions to identify the most important thing about the “who” or “what” in the paragraph. The tutee must then condense or “shrink” the paragraph into a summary statement that is 10 words or less to earn their points. Error correction procedures are built into the process if the tutee does not respond accurately. Partner reading and prediction relay activities are similarly well structured so that tutors can easily be taught how to prompt and respond during the tutoring process. In the extensions of PALS downward to kindergarten or upward to high school, similar structures are used, with the activities changing to match the appropriate skills and age levels of the students. These same strategies have also been applied in PALS for mathematics (see frg.vkcsites.org for PALS resources).
In summary, procedures for establishing peer tutoring are straightforward and can be applied to a wide range of academic areas, resulting in powerful and cost-effective ways to increase student practice opportunities. As demonstrated in the CWPT and PALS programs, peer tutoring can be conducted by organizing the tutoring pairs as high- and low-ability pairs or by not matching students of differing ability levels. Studies have demonstrated that peer tutoring benefits the higher-performing student as well, even when they are matched with a lower-performing partner (e.g., Cochran et al., 1993; Dineen et al., 1977; Franca et al., 1990; Houghton & Bain, 1993). After all, one of the best ways to reinforce and expand one’s learning is to try to teach it to someone else. In addition, there
is some evidence that PALS may improve the social standing of students with disabilities among their peers (D. Fuchs et al., 2002; Dion et al., 2005). Another of the numerous benefits of these strategies is that they provide a way for the teacher to differentiate instruction: Peer-tutoring activities can be implemented with most of the class while the teacher works with small groups of students on more specific skills.
McMaster et al. (2006) provided a review and summary of almost 15 years of research on the PALS programs. Their research has shown the PALS model of CWPT to be effective across age groups: children in kindergarten through high school; children who are low-, average-, or high-achieving; and students with identified disabilities. In fact, PALS has been awarded a “best-practice” designation by the U.S. Department of Education What Works Clearinghouse (ies.ed.gov/ncee/wwc) and is recognized as a best-evidence program on the Best Evidence Encyclopedia (www.bestevidence.org/index.cfm).
It is important to remember that the peer-tutoring procedures described by Greenwood and associates, as well as by the Fuchs and their colleagues, involve same-age tutors and are classwide procedures (however, they can be used in small-group intervention situations as well). Other investigators have used cross-age tutoring (Beirne-Smith, 1991; Cochran et al., 1993; Paquette, 2009; Topping & Whiteley, 1993; Topping & Bryce, 2004; Vacc & Cannon, 1991) and cross-ability tutoring (Allsopp, 1997; Arblaster et al., 1991; Gilberts et al., 2001; Kamps et al., 1999). Most have shown similar positive effects. Indeed, Rohrbeck et al. (2003), in a meta-analysis of peer-assisted learning procedures with elementary school students, reported effect sizes in achievement of 0.59 (i.e., moderate to large, educationally meaningful), finding that these procedures were most effective with young students from economically disadvantaged backgrounds and historically marginalized communities. Similarly, Kunsch et al. (2007), in a meta-analysis of peer-mediated interventions specifically in mathematics, found an educationally meaningful overall effect size of 0.47, with most studies in the moderate effect range. Other investigations have demonstrated the generalized effects of peer tutoring in mathematics (DuPaul & Henningson, 1993; Fantuzzo et al., 1992; McKenzie & Budd, 1981), the acquisition of peer tutoring through observation of peer models (Stowitschek et al., 1982), combining an in-school peer-tutoring procedure with a home-based reinforcement system (Trovato & Bucher, 1980), and using preschool- or kindergarten-age students as peer tutors (Eiserman, 1988; L. S. Fuchs, D. Fuchs, & Karns, 2001; D. Fuchs et al., 2001; Tabacek et al., 1994; Young et al., 1983). Scruggs et al. (1986), Gumpel and Frank (1999), and Franca et al. (1990) found that peer tutoring positively impacts social behaviors for students with behavior disorders. Johnson and Idol-Maestas (1986) showed that access to peer tutoring can serve as an effective contingent reinforcer.
Cooperative Learning
Traditional goals in classrooms implicitly assume competition rather than cooperation. Students are often compared to each other, which can result in discouraging student–student interactions and negative effects, especially for low achievers (Slavin, 1977). Several researchers have described and field-tested successful strategies for improving academic skills based on principles of cooperation rather than competition (Johnson & Johnson, 1985, 1986; Slavin, 1983a, 1983b).
Several reviews have substantiated the effectiveness of cooperative learning strategies (e.g., Axelrod & Greer, 1994; Cosden & Haring, 1992; Johnson et al., 1981; Maheady et al., 1991; Nastasi & Clements, 1991; Slavin, 1980, 1983a). These procedures have
been applied across all academic subjects, in both the United States and other countries, and have incorporated both individual and group reward systems. Of the 28 studies reported by Slavin (1980), over 80% showed significant positive effects on achievement in comparison with control groups. Indeed, cooperative learning techniques are now considered a routine, expected part of lessons at all levels of instruction.
PROCEDURES FOR COOPERATIVE LEARNING
Although cooperative learning can be applied in several ways, Slavin et al. (1984) provide an excellent example. Team-assisted individualization (TAI) consists of several procedures designed to combine cooperative learning strategies and individualized instruction. Students are first assigned to four- or five-member teams. Each team is constructed such that the range of abilities in that skill is represented across team members. After team assignment, students are pretested on mathematics operations and are placed at the appropriate level of the curriculum, based on their performance. Students then work on their assignments within the team by first forming pairs or triads. After exchanging answer sheets with their partners, students read the instructions for their individualized assignments and begin working on the first skill sheet. After working four problems, students exchange sheets and check their answers. If these are correct, students move to the next skill sheet. If incorrect, the students continue in blocks of four problems on the same skill sheet. When a student has four in a row correct on the final skill sheet, the first checkout, a 10-item quiz, is taken. If the student scores at least 8 correct out of 10, the checkout is signed by the teammate, and the student is certified to take the final test. If the student does not score 80%, the teacher is called to assist the student with any problems not understood; a second checkout is taken, and if the criterion is met, the student is again certified to take the final test. At the end of the week, team scores are computed by averaging all tests taken by team members. If the team reached the preset criterion, all team members receive certificates. Each day, teachers work with the students at the same point in the curriculum for 5–15 minutes. This provides opportunities for teachers to give instruction on any items that students may find difficult. Other guides for implementing cooperative learning techniques include those devised by Schniedewind and Salend (1987) and Gillies (2007).
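The mastery-and-checkout logic in TAI lends itself to a simple decision sketch. The thresholds below (four consecutive correct problems on a skill sheet, 8 of 10 on a checkout quiz, and team certificates based on the average of members' final tests) come from the description above; the function names and the preset team criterion used in the example are hypothetical and for illustration only.

```python
# Illustrative sketch of the TAI progression rules described above.
# Thresholds (4-in-a-row, 8/10 checkout, team average vs. preset criterion)
# follow the text; function names and the example criterion are hypothetical.

CHECKOUT_PASS = 8          # items correct out of 10 needed for certification
CONSECUTIVE_NEEDED = 4     # problems in a row correct on a skill sheet

def ready_for_checkout(recent_results):
    """True if the last four skill-sheet problems were all correct."""
    return len(recent_results) >= CONSECUTIVE_NEEDED and all(recent_results[-CONSECUTIVE_NEEDED:])

def certify_for_final(checkout_score, attempt):
    """A teammate certifies the student after a passing checkout.

    attempt 1: pass -> certified; fail -> teacher assists, second checkout given.
    attempt 2: pass -> certified; fail -> continue teacher-led instruction.
    """
    if checkout_score >= CHECKOUT_PASS:
        return "certified for final test"
    return "teacher assists; second checkout" if attempt == 1 else "continue instruction"

def team_earns_certificates(final_test_scores, criterion=80.0):
    """Average all final tests taken by team members against a preset criterion."""
    return sum(final_test_scores) / len(final_test_scores) >= criterion

if __name__ == "__main__":
    print(ready_for_checkout([True, False, True, True, True, True]))   # True
    print(certify_for_final(checkout_score=7, attempt=1))              # teacher assists; second checkout
    print(team_earns_certificates([85, 78, 90, 82], criterion=80.0))   # True (average is 83.75)
```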
Other Aspects of Effective Instructional Design
Good academic interventions are built on the principles of effective instructional design. Coyne et al. (2001) and the work of the National Center to Improve the Tools of Educators (idea.uoregon.edu/~ncite; Kame’enui et al., 2002) outlined a series of six key principles that should underlie instruction and intervention. These principles are especially relevant to students who are most at risk for developing academic difficulties—that is, the students who often garner the majority of our attention. Specifically, the six principles are (1) big ideas, (2) conspicuous strategies, (3) mediated scaffolding, (4) strategic integration, (5) primed background knowledge, and (6) judicious review. Readers will see considerable overlap between these principles and the strategies discussed across this chapter and Chapter 6. The following provides additional information on each aspect. The information and examples are focused on reading instruction and intervention given
the emphasis of Kame’enui and colleagues; however, the strategies can easily be applied to other academic skill areas.
Big Ideas
Effective instruction targets concepts or ideas that are most salient and important to learn. Not all objectives within a curriculum are equally essential for learning. By identifying the key concepts or ideas, instruction can begin to teach the central ideas to which other less important but related concepts can be linked.
Conspicuous Strategies
The strategies that help students understand the processes embedded in the “big ideas” must be explicitly taught. Strategies are a series of steps that are purposely constructed to lead a student to an outcome. Although some students can certainly infer the correct strategies based on previous learning, students who struggle are likely to develop poor or inefficient strategies. By making the strategies conspicuous and clear, ambiguity in the steps required to achieve success is avoided. Research has consistently shown that conspicuous and explicit strategies are needed for success when teaching young children who are struggling (e.g., Foorman et al., 1998; Juel & Minden-Cupp, 2000). Several strategies in reading, mathematics, and writing described in Chapter 6 involve teaching conspicuous strategies.
Mediated Scaffolding
Scaffolds are temporary structures designed to support workers and materials in the construction of a building. In education, mediated scaffolding is a metaphor for providing supports, in the form of prompts, models, and guides, that facilitate greater correct responding as skills are being learned. Coyne et al. (2001) define mediated scaffolding as “the personal guidance, assistance, and supports that teachers, materials, or tasks provide a learner during the initial stages of beginning reading instruction” (p. 67). As in the construction of a building, scaffolds are gradually removed (i.e., faded) when they are no longer needed. In the learning process, good instruction and a system of prompts and models provide the needed support to allow students to respond successfully and gain new knowledge, but gradually and systematically remove the supports as students show successful attainment of the knowledge. Specifically, this could mean beginning with easy and progressing to more difficult tasks; introducing manageable pieces of information and adding new information that is based on the learning of previous material; and providing prompts, cues, and highlights to remind students to use strategies and procedures. Moving from teacher-directed to student-directed learning is another form of scaffolding that recognizes the ability of students to work independently and guide their own learning as they become more proficient.
Strategic Integration
Information needs to be combined in meaningful ways to result in new and more complex forms of learning. Students who show quick acquisition of basic foundational skills should be provided with opportunities to apply their knowledge base to more sophisticated tasks and problems. Students who learn more slowly may need instruction
that proceeds carefully and deliberately, with additional opportunities to practice. In other words, strategic integration is a clear recognition of the need to individualize instruction in ways that are responsive to the learner and how quickly they acquire new skills and concepts.
Primed Background Knowledge
Learning new information is highly dependent on the knowledge base the student brings to the situation because it is easier to connect new knowledge to existing knowledge. Children who do not have sufficient background knowledge in areas of instruction will often find learning more challenging than students with a rich base of knowledge. Thus, good instruction builds new knowledge and helps highlight or “prime” background knowledge prior to instruction, especially for students with less background knowledge than their peers. But it is not just priming any knowledge about a topic that is important—teachers must be careful to prime knowledge that is relevant to upcoming instructional content because students with learning difficulties often have trouble suppressing irrelevant knowledge that in some cases can interfere with learning. Strategically building and priming relevant background knowledge help establish more equal opportunities for students to learn.
Judicious Review
Performance is not the same as learning (Kamhi & Catts, 2017). Just because a student demonstrated success with a new skill in instruction does not necessarily mean it has been learned and will be maintained. Coyne et al. (2001) note that a strategic, consistent, and carefully selected review process is needed for success. Review needs to be carefully monitored so that the material most recently taught is reviewed first (to ensure it has been learned), and so that review systematically rotates previously learned content to ensure it is not forgotten. Teachers need to be flexible with the methods by which learners demonstrate their knowledge. Kame’enui et al. (1997) noted that judging the success of the review process requires criteria, including (1) automaticity—students should be able to complete required tasks with fluency and without hesitation; (2) integration of less complex knowledge within more complex information; and (3) the ability to generalize beyond the specific skill taught to other, related skills.
SUMMARY AND CONCLUSIONS
The varied strategies discussed in this chapter represent intervention techniques that can be broadly applied across all types of academic problems, age ranges, and instructional settings. We discussed approaches to intervention that do not require changes to instruction, as well as strategies that involve adjustments and procedures to enhance existing instruction. An extensive database supporting the strategies and approaches exists, along with significant evidence of field testing. Practitioners with limited experience in using these procedures are strongly encouraged to seek out detailed descriptions of the methods and to experiment with them in their own settings. Although some adaptations of procedures may be needed for specific settings, most techniques can be implemented easily, with few modifications. Furthermore, much of the literature cited in this chapter contains descriptions of methods that can be read, understood, and directly implemented.
As we noted earlier in this chapter, the selection of strategies is always based on the critical skills and environmental variables identified in the assessment. There is considerable value in considering less or moderately intensive strategies first because they are easier and more efficient to implement. More intensive interventions are unnecessary when less intensive interventions are just as effective. Although there will be many students for whom less or moderately intensive strategies will be effective, there will always be students who require more specific, individualized, and intensive forms of intervention. Intervention intensification will be necessary in many situations when working with students with academic difficulties, and ongoing data collection (i.e., progress monitoring) indicates when intensification is needed. These more specific and individualized forms of intervention are the topic of Chapter 6.
CHAPTER 6
Step 3: Instructional Modification II
SPECIFIC SKILLS AND MORE INTENSIVE INTERVENTIONS
Hundreds of intervention procedures have been reported in the literature to improve performance in reading, mathematics, and writing. Consistent with the continuum of intensity discussed in Chapter 5, this chapter focuses on evidence-based interventions that are more intensive in nature, given the ways they alter and focus instruction and the resources they require. Across this section (and the text as a whole), we refer to students with so-called “difficulties” in reference to the applicability of strategies and the evidence base demonstrating their effectiveness. Our use of “difficulties” includes students with learning disabilities (including dyslexia, dyscalculia, and dysgraphia), behavior disorders, and other disability categories. This issue is particularly relevant for our discussion of interventions in this chapter because the studies that provided evidence for their effectiveness included students described as having difficulties, as well as students described as having disabilities.
We present these intervention strategies and frameworks as viable options for any student who is struggling academically, regardless of whether they have a formal identification label or diagnosis, and regardless of what that specific identification is. By and large, research has demonstrated that effective instruction is just that—effective. Rarely is any one particular strategy or approach effective for only one subpopulation of learners, skill level, or disability category. This is the reason that, for example, what one does to support all students in learning to read is not very different from supporting students with reading difficulties, which in turn is not different from supporting students identified with a specific learning disability in basic reading (i.e., dyslexia). We are not suggesting that learning disabilities do not exist. They do. Our point is that effective interventions do not discriminate based on the presence of a certain label or diagnostic category. If there is any distinction to be made, it is that effective interventions for students with more intensive learning difficulties tend to be more explicit, systematic, structured, and intensive. This is true across all academic areas.
The need for more specific and intensive interventions may be the result of one of the following: (1) the student has not been adequately responsive to less intensive strategies and adjustments to the instructional environment, or (2) in select situations, the initial assessment revealed significant and persistent difficulties that require an equally intensive intervention plan. In either case, the evaluator’s hypothesis, developed and refined across the assessment and initial intervention, should point to the skill difficulties that are likely the reason for the student’s struggles and therefore are the targets of the intervention.
The following sections describe evidence-based strategies in each of the primary academic areas: reading, mathematics, and writing. Some strategies have applicability across academic skill areas, and we point that out. Although some situations may call for multiple strategies, we stress the need for parsimony in intervention development. Select a strategy, or small set of relevant strategies, that may best meet the student’s individual needs. There is no need for an overly complex intervention when a simpler one is just as effective. Additional adjustments, additions, or intensification can be made when progress monitoring data indicate that growth is insufficient.
READING INTERVENTIONS
Aligning Reading Interventions with Assessment Results
Considering the “keystone” approach to assessing reading skills discussed in Chapters 2 and 4, assessment results should point to the skill deficits most responsible for preventing the student from making progress in reading, which in turn facilitates designing an intervention to address those areas of difficulty. Figure 6.1 is intended to aid this decision process and is a translation of the keystone model of reading. Students’ reading difficulties can often be categorized into one of the following profiles: (I) Difficulties in basic (beginning) reading, which involves difficulties reading simple words and text, stemming from skill gaps in linking letters and letter combinations to sounds, phonemic awareness, and insufficient instruction and practice reading and spelling words. (II) Problems with more advanced word reading skills, with assessments revealing that the student is able to read simple words and texts, but struggles to read words that are longer and more complex with accuracy and efficiency. Text reading efficiency problems accompany both (I) and (II) profiles because skilled text reading is primarily made possible by word-level efficiency. (III) Reading difficulties primarily involve reading comprehension, and may be due to gaps in vocabulary and linguistic comprehension, background knowledge, and the skills and strategies to integrate language and knowledge with text. Identifying (III) as the primary area of difficulty assumes that the assessment revealed that word and text reading skills are adequate to support comprehension. Supports to improve language, knowledge, and text comprehension can also be integrated with intervention and practice aimed at improving word and text reading accuracy and efficiency for students with profiles I and II.
Figure 6.1 provides recommendations on the overall focus of interventions for each reading difficulties skill profile, as revealed by the assessment in the previous stages. In the discussion that follows, we describe specific intervention strategies and approaches for targeting the relevant skill domains: (I) basic (beginning) reading, (II) advanced word reading, and (III) reading comprehension.
FIGURE 6.1. Connecting assessment to intervention in the keystone model of reading. The figure maps each skill profile to its intervention targets, depending on what the assessment reveals the difficulties primarily involve: (I) Basic Reading (phonemic awareness, letter–sound knowledge)—target early literacy gaps (integrate phonemic awareness with print); decoding and spelling, integrate vocabulary; practice reading words and text to build accuracy and efficiency. (II) Advanced Word Reading (accurate and efficient word reading)—target decoding strategies for complex words; integrate vocabulary; practice reading words and text to build accuracy and efficiency. (III) Reading Comprehension (language, especially vocabulary and linguistic comprehension, and knowledge)—target language (especially vocabulary); background knowledge; text structure and other strategies as needed; practice reading text to integrate knowledge and make inferences.
I. INTERVENTIONS FOR BASIC (BEGINNING) READING: INTEGRATING PHONEMIC AWARENESS, ALPHABETIC KNOWLEDGE, DECODING, AND SPELLING
Basic reading involves acquiring the essential foundational skills to read and spell words. It is a common area of difficulty. In this domain, we refer to skills involved in reading “basic” words, which we define as single-syllable words with spelling patterns such as VC, CVC, CCVC, CVCC, CCCVCC, irregular words, and shorter two-syllable words (e.g., “stopping”). Students with difficulties in this area may be students in kindergarten or first grade who are struggling in reading acquisition, or students older than that with severe word-level difficulties.
As discussed in depth in Chapter 2, the most important foundational skills for learning to read words are phonological awareness and alphabetic knowledge. Phonological awareness, and phonemic awareness more specifically, involve the insight that each word is a unique combination of individual speech sounds (i.e., phonemes). The phonemic awareness skills that tend to be the most important for reading and spelling words (and thus should be the primary skills involved in intervention) are skills in segmenting a word into its component phonemes (e.g., bat = /b/-/a/-/t/; catch = /k/-/a/-/tch/), and blending sounds to form a word (e.g., ssssuuuunnnn = sun). Learning letter–sound correspondence is made possible in part by phonemic awareness, and these two skills combine to form the basis of decoding and spelling. It is this interconnection between phonemic awareness, alphabetic knowledge (letter–sound correspondence more specifically), and reading and spelling words that makes intervention that integrates them more effective than teaching any of them in isolation (Bus & van IJzendoorn, 1999; National Reading Panel, 2000; Rehfield et al., 2022). This is the definition of phonics instruction—an instructional approach that teaches the connections between sounds and printed letters and how to use that information to read and spell words. Additionally, the relationships among the skills are reciprocal—learning letter names and sounds, how to read words, and especially learning word spellings, leads to greater sophistication in phonemic awareness (Burgess & Lonigan, 1998; Castles et al., 2003; Perfetti et al., 1987). Therefore, the most effective approaches for students struggling to learn to read words are usually those that target these skills in an integrated way.
The following sections describe strategies for building basic reading skills, and we point out how phonemic awareness, alphabetic knowledge, decoding, and spelling skills can be integrated when doing so is not readily apparent.
Strategies and Activities Targeting Phonemic Awareness
Historically, common guidance for phonological awareness instruction has recommended that students first learn to identify and work with larger sound units in words in activities such as rhyming, syllable blending, and syllable segmentation, before moving on to instruction involving phoneme-level skills such as phoneme isolation or phoneme segmentation. These recommendations were largely based on a perception that students need to master basic phonological skills with larger units before being ready for phoneme-level activities. This perspective was not well grounded in research, and scholars later emphasized the importance of instruction that involves phonemes regardless of whether students have mastered skills like rhyming or syllable segmentation beforehand (e.g., Brady, 2020). Ukrainetz et al. (2011) found that PreK students (4 and 5 years old) learned to successfully segment and blend phonemes with phoneme-level instruction, without being taught to blend or segment syllables. In fact, they found that students who
received syllable-level instruction before phoneme-level instruction showed confusion at the beginning of phoneme-level instruction. There is currently no evidence to indicate that students (including students with reading difficulties) should master skills with larger sound segments first. Therefore, when working with struggling readers, intervention should focus on improving phoneme-level skills. Several resources and curricula are available for teaching phonemic awareness, and many make use of teacher-made materials. These resources include Phonological Awareness Training for Reading (Torgesen & Bryant, 1994), Phonemic Awareness in Young Children (Adams et al., 1998), Road to the Code (Blachman et al., 2000), Ladders to Literacy (O’Connor et al., 1996), and Project OPTIMIZE (Simmons & Kame’enui, 1999). Additional Web-based resources are available from Reading Rockets (www.readingrockets.org/article/how-now-brown-cow-phoneme-awareness-activities). Many of these programs are designed for children in PreK and kindergarten, and may involve activities that older students with reading difficulties find immature. Additionally, some of these programs may involve oral phonemic awareness activities (i.e., done without reference to printed letters or words). For children at very early stages of reading acquisition and before they have learned letter sounds, oral phonemic awareness activities in isolation may be appropriate for a short period of time until some letter sounds are learned. For other students that have some alphabetic knowledge, phonemic awareness activities should be integrated with letters and words. The following are a set of activities appropriate for developing students’ phonemic awareness in ways that facilitate and strengthen word reading and spelling. These activities are appropriate for struggling students in kindergarten, first grade, or perhaps students in later grades for whom phonemic awareness difficulties are at least partly responsible for their word reading difficulties. All activities should be taught using explicit instruction (“I do, we do, you do”).
Elkonin (Sound) Boxes
Elkonin boxes (Clay, 1985; Elkonin, 1973), sometimes referred to as sound boxes (McCarthy, 2008) or word boxes (Joseph, 2000), are a visual scaffold strategy that helps demonstrate that (1) words are made up of individual sounds, (2) sounds can be represented by symbols, (3) words can be segmented by sounds, and (4) sounds can be blended to form words. Joseph (2000) and McCarthy (2008) provide guides for implementing the strategy. An example is provided in Figure 6.2. In this activity, students are presented with a frame that includes two, three, or four spaces. The number of spaces must correspond to the number of phonemes in the words chosen for instruction. Pictures or picture cards can be used to represent the words that will be “spelled” and are recommended for beginners because pictures reduce memory demands. Start with the two-sound frame and, over time, work up to the four-sound frame as students are successful. First, the teacher provides the picture or says a word, such as cat in the example. The student, segmenting the word by phonemes, moves one counter into each space as they say each phoneme aloud, as shown in part (a) of Figure 6.2. After all counters are placed, students then point to each counter while saying each sound, then sweep their finger under the frame to blend the phonemes together as a whole word (i.e., “Say it fast”). In addition to providing a visual scaffold for segmenting phonemes, the technique also demonstrates how words are sounded out, a skill students will transfer to decoding printed words.
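As a concrete illustration of the segment-then-blend sequence in an Elkonin box activity, the sketch below represents a few practice words as phoneme lists and prints a text version of the frame. The phoneme segmentations are hand-coded (automatic grapheme-to-phoneme conversion is beyond the scope of the example), and the word list and function names are illustrative assumptions, not part of any published Elkonin box materials.

```python
# Illustrative sketch of the Elkonin box sequence: one space per phoneme,
# segment the word sound by sound, then blend the sounds back into the word.
# Phoneme segmentations are hand-coded; the word list is illustrative only.

PRACTICE_WORDS = {
    "cat":   ["k", "a", "t"],
    "sun":   ["s", "u", "n"],
    "catch": ["k", "a", "tch"],   # 'tch' maps to a single phoneme
    "at":    ["a", "t"],
}

def show_frame(word):
    """Print a text version of the Elkonin frame: one box per phoneme."""
    phonemes = PRACTICE_WORDS[word]
    boxes = " | ".join(f"/{p}/" for p in phonemes)
    print(f"{word}: [ {boxes} ]  ({len(phonemes)} boxes)")

def segment_then_blend(word):
    """Model the spoken routine: say each sound, then 'say it fast.'"""
    phonemes = PRACTICE_WORDS[word]
    print("Segment:", "-".join(phonemes))
    print("Blend (say it fast):", word)

if __name__ == "__main__":
    show_frame("cat")          # cat: [ /k/ | /a/ | /t/ ]  (3 boxes)
    segment_then_blend("cat")  # Segment: k-a-t / Blend (say it fast): cat
```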
FIGURE 6.2. Concrete models and activities for building phonemic awareness, particularly phoneme segmenting and blending: (a) Elkonin (sound) boxes with counters, (b) Elkonin boxes with letter tiles, (c) Say-It-and-Move-It with counters, (d) Say-It-and-Move-It with letter tiles. Words can be provided orally in place of picture cards. Elkonin boxes adapted from Elkonin (1973) and Say-It-and-Move-It adapted from Blachman et al. (1994).
For students who have learned letter sounds (or are in the process of learning them), the activity should quickly transition to using letter tiles, plastic letters, or letter cards in place of counting chips, as shown in part (b) of Figure 6.2 (remember that phonemic awareness tasks are more effective when integrated with printed letters). Picture cues for the words can be removed as students build success so that they begin to segment words spoken by the teacher and thereby further engage their phonological memory. The task can be further extended by having students write letters in the boxes (Joseph, 2000). Frames drawn in permanent marker on dry-erase boards are useful because students can write letters directly in the frame. Overall, Elkonin boxes provide a concrete way to illustrate that words are made up of a unique series of sounds, that words can be segmented by their phonemes, and how sounds are used to read and spell words.
Say-It-and-Move-It
Say-It-and-Move-It (Blachman et al., 1994) is an activity described in Road to the Code (Blachman et al., 2000), and another way to build phonemic awareness that is closely connected to how words are read and spelled. It is similar to Elkonin boxes, but in this case the frame is replaced by an arrow drawn on a card, as shown in part (c) of Figure 6.2. A set of counters (or letter tiles for students who have learned some letter–sound correspondences) is placed at the top of the card. Using a picture cue or word spoken by the teacher, students pull one counter or letter tile down for each phoneme they hear in the word and place it on the arrow (the first counter goes on the dot on the left), saying the phoneme aloud as they do. Students then say each sound as they point to each from left to right, then run their finger under the arrow and blend the sounds together as a whole word. As students are successful, users can add a challenge to the task by including more counters or letter tiles than might be needed for a given word (e.g., segment at when
four counters are available, or when “m” and “s” letter tiles are available in addition to “a” and “t”), thus requiring students to pull down the correct ones. Counters should be replaced by letter tiles as students learn letter–sound correspondences, as shown in part (d) of Figure 6.2, and it is also appropriate to include one or two letter tiles with counters in order to make use of letter sounds students have just learned. Say-It-and-Move-It is somewhat more versatile than Elkonin boxes because separate frames are not needed when teachers want to present words that vary in their number of phonemes. On the other hand, Elkonin boxes provide a scaffold for beginners by providing a clearer concrete referent of the segmental nature of words, and the number of sounds they should expect. The two strategies can certainly be used in tandem.
Strategies for Teaching Letter–Sound Correspondence
Letter–sound correspondence, sometimes referred to as grapheme–phoneme correspondence or the alphabetic principle, involves associating a phoneme with a printed letter or letter combination (such as vowel or consonant digraphs like ea, oo, sh, and th). It is essential for learning to read. Students with reading difficulties may have no knowledge of letter sounds, incomplete knowledge (i.e., they know some but not others), or may lack automaticity in connecting a printed letter with its sound. Any of these skill difficulties will impair word reading acquisition and should be addressed. Like teaching most skills, teaching letter–sound correspondence is often best accomplished through explicit and systematic instruction. Discussed in more detail in Chapter 5, explicit instruction involves teaching that is clear and unambiguous, uses modeling and demonstration to introduce new skills, provides extensive opportunities for students to respond, and provides immediate affirmative and corrective feedback. The following illustrates the introductory sequence for teaching the sound for the letter m, based on the procedures described by Carnine et al. (2017) and expanded from Chapter 5:
1. Model (“I do”): The letter m is displayed in full view of the students. The teacher models by pointing to the letter and saying “Letter m makes the sound mmmm. My turn. Mmmmm.”
2. Lead (“We do”): While continuing to point to the letter, the teacher says, “Let’s do it together—what sound does m make?” The teacher and the students both say, “Mmmm” in unison. The teacher affirms a correct response with “Very good” or something similar; if any errors were noticed, the teacher repeats Step 1.
3. Test (“You do”): The teacher points to the letter and says, “Your turn, all together—what sound?” The students respond in unison, and the teacher provides immediate feedback. The teacher then calls on individual students—for example, “Max, what sound?” For correct responses, the teacher provides affirmative feedback, often paired with the response to reinforce it: “That’s right, mmmm.” In the event of a student error or a non-response, the teacher provides immediate error correction, for example, “My turn. Mmmm [while pointing to m]. Your turn. What sound?” The teacher should call on another student, but be sure to check understanding again with any students who made errors. The teacher should back up to Step 1 whenever needed.
4. Discrimination: The teacher tests for understanding by mixing the letter m within an array of other letters that were already taught. The teacher then points to a letter and calls on individual students, interspersing trials of m with the other letters, and providing immediate affirmative or corrective feedback.
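The discrimination step above (Step 4) amounts to interspersing trials of the new letter among previously taught letters, with immediate feedback on each trial. The sketch below is a minimal, hypothetical illustration of that trial sequence; the letter sets, shuffling scheme, and feedback wording are assumptions for illustration and are not taken from the Carnine et al. (2017) program.

```python
# Illustrative sketch of the discrimination step: trials of the new letter
# are interspersed with previously taught letters, each followed by
# immediate affirmative or corrective feedback. Details are hypothetical.

import random

MOST_COMMON_SOUND = {"m": "mmm", "a": "aaa", "s": "sss", "t": "t", "r": "rrr"}

def build_trial_sequence(new_letter, taught_letters, n_new=4, n_review=6):
    """Mix extra trials of the new letter into a shuffled review set."""
    trials = [new_letter] * n_new + random.choices(taught_letters, k=n_review)
    random.shuffle(trials)
    return trials

def run_trial(letter, student_response):
    """Compare the student's sound to the letter's most common sound."""
    expected = MOST_COMMON_SOUND[letter]
    if student_response == expected:
        return f"That's right, {expected}."                      # affirmative feedback
    return f"My turn. {expected}. Your turn. What sound?"        # immediate error correction

if __name__ == "__main__":
    sequence = build_trial_sequence("m", taught_letters=["a", "s", "t"])
    print(sequence)                      # e.g., ['a', 'm', 't', 'm', 's', ...]
    print(run_trial("m", "mmm"))         # That's right, mmm.
    print(run_trial("m", "sss"))         # My turn. mmm. Your turn. What sound?
```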
The same approach is used for other letters of the alphabet and letter combinations (see Table 4.6 in Chapter 4). Research has identified several practices and instructional features that aid students’ acquisition of letter sounds (Carnine et al., 2017; Ehri, 2020; McBride-Chang, 1999; Piasta & Wagner, 2010; Treiman & Rodriguez, 1999).
• There is no need to teach letter sounds in alphabetical order. In fact, there are several benefits to teaching letter sounds in a strategic order. Carnine et al. (2017) recommend teaching letter sounds beginning with the most frequently occurring letters in words (m, a, s, t, r, e), which allows decoding instruction to begin as soon as students have learned three to four letter sounds. They also recommend separating the introduction of letters that look or sound similar, such as not teaching d immediately after b. Letter sounds taught as part of an intervention for struggling students may naturally focus on the letter sounds they are currently missing from their skill set, which is another reason why assessing students’ letter–sound correspondence with an informal inventory is important. • Knowing letter names helps students learn the sounds they make, and pairing letter name instruction with sounds results in stronger acquisition of letter sounds (Piasta & Wagner, 2010). Letter names aid letter–sound learning for several reasons. First, most letter names provide clues to their sounds (e.g., s, m, t, d, b, r). Second, humans learn by associating new information with previously learned information—in most cases, students will know letter names, so when teaching the sound, it may be easier for them to acquire the sound because students are associating the new information (letter sound) with the previously learned information (letter name). It is like attaching a new LEGO brick to an existing one. • Teach individual letter sounds in terms of the most common sound they make. The most common sound associated with each letter is listed in Table 4.6 in Chapter 4. Not all letters will make the same sound in printed English, but it is important that readers first learn the most common sounds they make. They can also learn that the majority of letters make only one sound the vast majority of the time. Teaching the most common sounds that letters make ensures that students know the letter sounds that work most of the time. Letter–sound irregularities and exception pronunciations are often driven by other letters that occur with them (e.g., vowels followed by r), position within words (e.g., ghost vs. laugh), and specific spelling patterns. Exception pronunciations are also less common than letters’ most common sounds. Therefore, alternate letter sounds are best targeted in the context of words, and in a subsequent section we provide strategies for teaching words with irregular pronunciations. • Review frequently and play games with letter sounds. Review previously taught letter sounds as new sounds are introduced (see the “Discrimination” item in the explicit instruction sequence listed earlier). There are many ways to play games with letter sounds, including letter tiles, magnet letters, letter dice, flashcards, and sidewalk chalk. • Letter–sound games and activities should involve (1) expressive responses, where the student is shown a letter or letter combination and must produce its sound; or (2) receptive responses, where the teacher says a sound and the student must select or point to the letter (from a group of letters) that makes that sound. Doing both types of activities
is recommended because they are differentially involved in reading and spelling. Expressive responding is more involved in reading words, where a student must recall the sounds for a letter or letter combination they see. Receptive responding is more involved in spelling, where the learner says a sound to themself and must recall the letter or spelling that goes with it.
Beginning Word Reading (Decoding) Instruction and Intervention
For students at very basic stages of learning to read words, decoding instruction should begin as soon as students have learned three to four letter sounds. There is no need to wait until all letter sounds have been taught; as soon as a new letter sound or letter combination is learned, students should practice reading and spelling words with that letter or letter combination. For students with word reading difficulties that are related to inadequate phonemic awareness and letter–sound knowledge, the approaches described here should be integrated with phonemic awareness and letter–sound instruction, where they support each other. This is the essence of phonics instruction. Explicit phonics instruction is the most effective way to teach beginning word reading (Ehri et al., 2001; Fletcher et al., 2019), meaning that the teacher clearly and unambiguously demonstrates how to sound out and blend words, provides ample opportunities for the student to practice (with support), and provides immediate affirmative and corrective feedback.
Systematic Sequence of Introduction
Consistent with the Direct Instruction model described by Carnine et al. (2017), decoding instruction should proceed systematically, meaning that there is a strategic plan for the order in which types of words are targeted. For students at the earliest stages of learning to read words, or students with significant word reading difficulties, intervention may need to fall back to the most basic word types. The simplest types of words are vowel–consonant (VC) words that are “phonetically regular,” meaning that all letters in the word represent their most common sound (sometimes referred to as decodable words). Phonetically regular VC words include at, up, it, in, on, and if. Pseudowords (e.g., ip, et, ub, ak) can occasionally be mixed in with real words to give students exposure to all the vowels and more consonants, and students often enjoy saying whether a word is a real word or made-up word after reading it. However, instruction should focus exclusively on real words after these initial activities.
As students show accuracy in decoding VC words, instruction can target phonetically regular consonant–vowel–consonant (CVC) words. Instruction should proceed carefully, always following explicit instruction principles. For many students, making the jump from VC to CVC words is challenging given the increased working memory demands introduced by the addition of a letter. Students may successfully say each sound, but attempts to blend the sounds together as a whole unit may often omit the final sound, omit the initial sound, mix up the vowel, and so on. Gonzalez-Frey and Ehri (2020) found that “connected phonation”—teaching students to continuously voice each phoneme as they blended (e.g., “mmmaaat”)—improved decoding acquisition and reduced letter–sound omissions compared to inserting a pause between phonemes (e.g., “mmm . . . aaa . . . t”). As illustrated in Figure 6.3, visual scaffolds can be used to support decoding instruction by providing prompts for sounding out and blending. These supports can include
FIGURE 6.3. Scaffolds for supporting basic decoding instruction. From left to right: word boxes (Joseph, 2000; Vadasy et al., 2005), segmenting and blending prompts (Carnine et al., 2017), word building (McCandliss et al., 2003).
letters in boxes (Joseph, 2000; Vadasy et al., 2005), or dots and arrows under words, as used by Carnine et al. (2017). Over time, visual scaffolds used to support decoding acquisition should be faded (i.e., systematically removed as students are successful). In Direct Instruction, fading of the visual prompt begins with removing the dots but keeping the arrow, then later removing the arrow as students are successful (Carnine et al., 2017). The removal of the visual scaffolds is also accompanied by reducing how explicitly students sound out words aloud. Carnine et al. (2017) recommend that when students can sound out words aloud by themselves, they can be taught to sound out a word by whispering and then saying the whole word out loud. Next, students are taught to sound out the word to themselves silently and then say the whole word aloud. Decoding skills should be taught and practiced with words in isolation, in short lists, and in connected text. Word reading instruction should avoid using pictures; students must learn to read a word by using the letters, not by guessing based on a picture cue. Although typically developing readers will get feedback from illustrations that help confirm their decoding, struggling readers tend to overrely on pictures and other nonalphabetic cues to guess. Instruction should also not prompt students to guess at words based on their shape, or when reading text, what word they think should come next. Instruction with beginning or struggling readers must emphasize decoding strategies to read words. In addition to instruction and practice reading words in isolation, practice should also include reading words in connected text. For beginners, short and simple sentences can be constructed with familiar words and spelling patterns to provide supported practice reading in context. The Bob Books series (Maslen & Maslen, 2006) is a resource for “decodable” books that are appropriate for students even at the very start (the first book uses words that mainly include a, m, s, and t). The Bob Books are illustrated, so users should be reminded to prompt students to sound out and blend words as the first and primary strategy for attacking unknown words. If you notice students trying to rely on the illustrations to guess at words, prompt and reinforce their use of sounding out. Pictures can be referred to after students have correctly decoded the words on the page, which reinforces their understanding. Other decodable book series are available through justrightreader.com, readinga-z.com, and other providers. However, practice should not rely on decodable text exclusively, as it is important that students also practice
reading authentic texts (i.e., text written without requirements on the decodability of the words) within their instructional level. Additional discussion on the use of decodable and authentic text is provided later in the text-reading practice section.
As they demonstrate success in reading CVC words, instruction can move to phonetically regular consonant–consonant–vowel–consonant (CCVC) and consonant–vowel–consonant–consonant (CVCC) words with consonant blends (i.e., letter combinations in which both sounds can be heard, such as in trip, stop, fast, and raft). As students learn digraphs (i.e., letter combinations that make one sound, as in shop, that, with, whip, and knit), they can be added to words targeted in decoding instruction. Longer spelling patterns, such as CCVCC, CCCVC, CVCCC, and so on, are included as students build accuracy reading simpler patterns.
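Because the sequence above is organized around spelling patterns (VC, CVC, CCVC, CVCC, and so on), it can help to screen candidate practice words against the pattern currently being targeted. The sketch below derives a consonant–vowel pattern from a word's letters; it is a rough, hypothetical screen only—it treats every non-vowel letter (including y) as a consonant and does not handle digraphs, silent letters, or irregular words, which would need hand review.

```python
# Illustrative sketch: derive a rough consonant-vowel (CV) pattern from a
# word's letters so practice lists can be screened against the pattern
# currently targeted (VC, CVC, CCVC, CVCC, ...). This is a crude, letter-level
# heuristic: digraphs, silent letters, and irregular words need hand review.

VOWELS = set("aeiou")

def cv_pattern(word):
    """Map each letter to 'V' (vowel) or 'C' (consonant), e.g., 'stop' -> 'CCVC'."""
    return "".join("V" if ch in VOWELS else "C" for ch in word.lower())

def filter_by_pattern(words, target_pattern):
    """Keep only the words whose letter-level pattern matches the target."""
    return [w for w in words if cv_pattern(w) == target_pattern]

if __name__ == "__main__":
    candidates = ["at", "mat", "trip", "fast", "stop", "raft", "up"]
    print(filter_by_pattern(candidates, "CVC"))    # ['mat']
    print(filter_by_pattern(candidates, "CCVC"))   # ['trip', 'stop']
    print(filter_by_pattern(candidates, "CVCC"))   # ['fast', 'raft']
```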
Reading Practice: Facilitating Orthographic Mapping for Beginning Readers
Gradually, given explicit instruction and sufficient practice opportunities, students will begin to find it unnecessary to sound out every word. They are engaged in the process of linking letter strings to pronunciations in increasingly larger units, which allows them to recognize combinations of letters. This is referred to as orthographic mapping, which simply means building strong connections between word spellings and pronunciations that allow students to read words with ease and automaticity (i.e., “by sight”) without having to sound them out. These skills will also allow students to learn new words more quickly, and in some cases without being explicitly taught, by transferring knowledge of letter sequences to other words. For example, a student who learns the pronunciation of op from learning to read hop, mop, and bop is in a better position to read stop. However, it must be remembered that although they are beginning to read words as whole units, that process was made possible by instruction that taught them to link individual sounds to letters. Words are added to memory based on letters within them, not the word’s shape or appearance. Letters are the key to the code.
Intervention starts the process of orthographic mapping, but practice reading words is essential for making it happen. Students need multiple opportunities to read a word correctly, with feedback (either from a teacher or another way for them to recognize their pronunciation was correct). How many exposures are needed is not clear, and this depends on the learner, the words, and the quality of the feedback, but average estimates range from about four to nine (Ehri & Saltmarsh, 1995; Reitsma, 1983; Steacy et al., 2020). High-performing readers need fewer exposures (closer to four on the range), and struggling readers need more exposures (closer to nine, sometimes more). But with enough exposures combined with feedback, letter combinations and word spellings become embedded in the student’s orthographic memory.
To be clear, instruction should always target sounding out as the primary strategy students are provided for decoding words. Although there are other decoding strategies (which we discuss below), and sounding out will not always result in the correct pronunciation, it is the single most reliable word reading strategy that students have at their disposal, and the strategy they should try first when they encounter an unknown word.
Word-Building Interventions
Word-building interventions (Beck & Hamilton, 2000; McCandliss et al., 2003), sometimes called “word ladders” or “word chains,” are strategies to support word reading
instruction and help foster connections between word spellings and pronunciations. They also inherently integrate phoneme isolation, segmentation, blending, deletion, and substitution in ways that are relevant to reading and spelling words. Word building involves providing students with a small set of letters or letter combinations (e.g., digraphs) on cards or tiles. An example is provided in Figure 6.3. In its most basic form, the letters should include two consonants and one vowel, and more letters can be added as a student’s letter–sound knowledge and decoding skills develop. With the support of a tutor, students form a word with the letters and read it aloud. They are then prompted to remove, add, or exchange one letter to form a new word, again reading it aloud. Students then remove, add, or exchange another letter to form another new word. This process continues, and students build new words by changing one letter. The tutor should prompt students to change letters in any position (i.e., beginning, middle, or end), but only one letter is changed each time. Cards can include letter combinations that represent phonemes (e.g., th, ch, sh, ee, oo) as students learn them. As an extension, students can log the words they build in a word bank. Students might often build a pseudoword, in which case they can be prompted to identify it as a made-up word and make another adjustment to change it to a real word. Other times a new word may have a letter that deviates from its most common sound (e.g., changing cost to most), in which case the tutor should help the student adjust their pronunciation to be correct. Depending on the skill level of the student, words can range from VC and CVC up to CCCVCCC.
The value of the word-building activity is that it focuses students’ attention on the fact that each word is a unique combination of letters, that those letters connect to sounds, and that words are read based on awareness of that set of letters, not based on a word’s overall shape or form. The procedure also helps develop phonemic awareness by illustrating the segmental nature of words and by manipulating sounds in words to make new ones. Word building also has close ties to spelling. McCandliss et al. (2003) found that a word-building intervention was associated with significant improvements in decoding for 7- to 10-year-old students with reading difficulties.
Word-building interventions are also a way to call students’ attention to rime-based strategies for decoding and spelling words. The “rime” portion of the word contains a vowel and follows the onset (i.e., first sound). In CVC words like mat, /m/ is the onset and /at/ is the rime. Rimes are useful for decoding and spelling because they provide a point of generalizing decoding skills to words that have not necessarily been targeted. For example, students can quickly learn to read at. If they know the sound for m, they can read mat. Once students learn s, p, and r, they can read sat, pat, and rat. This is the basis for learning word families, which in reading contexts refer to words that share a common rime unit. Word-building exercises are a good way to reinforce rime-based decoding and spelling strategies because once students have built the rime unit and can read it, they can add and swap out multiple onset sounds to make new words. Because they know the letter sounds used in the activity, students may be able to sound out multiple new words they have not encountered before.
This is an important insight because it demonstrates to students that they can read new words simply by using their knowledge of letter sounds and blending. The tutor is there to provide feedback and error correction, and should also call attention to the student’s rapid reading of the rime unit. This unitization of letter sounds (in the above example, a and t become at) represents Ehri’s (1998, 2020) perspective on word reading acquisition, in which readers begin to unitize letters into larger letter chunks so that they can read words very quickly without having to sound out.
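To make the one-change-at-a-time constraint concrete, the following is a minimal programmatic sketch of a word-chain checker. It is our illustration rather than part of any published word-building protocol; the tile set, the example chain, and the greedy way words are split into tiles are all simplifying assumptions.

# Illustrative sketch: check that each step of a word-building chain changes exactly one tile.
# Tiles may be single letters or taught letter combinations (e.g., "th"); all lists are hypothetical.

def tokenize(word, tiles):
    """Greedily split a word into the longest available tiles (e.g., 'math' -> ['m', 'a', 'th'])."""
    units, i = [], 0
    while i < len(word):
        for size in (3, 2, 1):                 # prefer longer tiles (digraphs) over single letters
            piece = word[i:i + size]
            if piece in tiles:
                units.append(piece)
                i += len(piece)
                break
        else:
            return None                         # the word uses a tile the student has not been given
    return units

def one_change(prev_units, next_units):
    """True if the second word adds, removes, or exchanges exactly one tile."""
    if len(prev_units) == len(next_units):
        return sum(a != b for a, b in zip(prev_units, next_units)) == 1
    shorter, longer = sorted((prev_units, next_units), key=len)
    if len(longer) - len(shorter) != 1:
        return False
    # dropping one tile from the longer word must reproduce the shorter word
    return any(longer[:k] + longer[k + 1:] == shorter for k in range(len(longer)))

def check_chain(words, tiles):
    tokenized = [tokenize(w, tiles) for w in words]
    if any(t is None for t in tokenized):
        return "a word in the chain uses letters that are not on the tiles"
    for prev, nxt, word in zip(tokenized, tokenized[1:], words[1:]):
        if not one_change(prev, nxt):
            return f"more than one change was made to build '{word}'"
    return "chain is valid: one change per step"

tiles = {"s", "a", "t", "m", "p", "th"}
print(check_chain(["sat", "mat", "map", "math"], tiles))   # chain is valid: one change per step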
Interventions for Spelling to Support Word Reading Acquisition
Rather than being taught as a separate subject with specific curriculum materials, spelling instruction is best when integrated within reading or writing instruction. For students with reading difficulties, integrating spelling instruction and practice within reading reinforces the development of both skills (Graham & Santangelo, 2014; Weiser & Mathes, 2011). Reading (decoding) and spelling (encoding) are built on a foundation of common skills in phonemic awareness, letter–sound knowledge, and orthographic knowledge (i.e., recalling letter sequences and whole-word spellings). To facilitate the interactive effects between reading and spelling words, spelling instruction and practice should be systematically aligned and integrated with reading instruction, meaning students should spell words with the same letters, letter combinations, and spelling patterns as those taught in reading. For example, if decoding instruction targets words that contain the tch consonant cluster (e.g., catch, pitch, clutch), integrated spelling instruction and practice should also include words with tch. Spelling activities should also be integrated within reading activities so that students have opportunities to spell the words that they read.

Learning to spell new or unfamiliar words and spelling patterns is best accomplished when the teacher teaches them directly and unambiguously. As with other skills, the model–lead–test (i.e., "I do, we do, you do") approach is used: With the word visible, the teacher spells the word out loud while pointing to each letter; the teacher then leads the students as they spell the word chorally; and then, after removing the word from view, the teacher asks the students to spell the word from memory and provides feedback. In the event of a spelling error, the process is repeated. In addition to explicit instruction, other strategies and techniques have been used to target spelling, in many cases as practice strategies. Practice is important for improving spelling; therefore, practice strategies should provide students with many opportunities to spell words and obtain immediate feedback on whether their spelling is correct. Many approaches can be implemented with students working in pairs or individually. The following provides an example.
Cover–Copy–Compare
Cover–copy–compare (CCC; Skinner, McLaughlin, et al., 1997) is one method that has often been used as a spelling practice strategy. The approach is logistically appealing because students can do it independently. CCC requires a list of words to be learned, which are listed vertically on the left side of a page, with three to four response blanks to the right of each word. The student reads the first word and spells it to themself. The student then folds or covers the left side of the page to hide the original word ("cover"), and writes the word from memory in the first response blank ("copy"). The worksheet is then unfolded or uncovered, and the student checks their spelling ("compare"). If the student spelled the word correctly, they move on to the next word on the list and repeat the procedure. If the student spelled the word incorrectly, they check the spelling, cover the word, and spell it two to three more times in the available spaces. This continues until the student has worked through all of the words in the list. CCC has been associated with improved spelling performance for students with and without disabilities (Joseph et al., 2012), and evidence suggests that improvements extend to spelling practiced words correctly in other contexts (Cates et al., 2007).
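The logic of the CCC cycle can be summarized in a few lines. The sketch below is a schematic, typed (rather than handwritten) version intended only to illustrate the cover–copy–compare–correct loop; the word list, the number of retry spaces, and the console-based "covering" are placeholders, not part of the published procedure.

# Schematic cover-copy-compare loop (typing stands in for writing; the word list is hypothetical).

def ccc_drill(words, retry_spaces=3):
    for word in words:
        input(f"Study the word: {word}   (press Enter to cover it) ")
        print("\n" * 30)                                   # crude "cover": scroll the word away
        attempt = input("Copy the word from memory: ").strip().lower()
        if attempt == word:                                # compare
            print("Correct!\n")
            continue
        print(f"Not quite. The word is spelled: {word}")
        for _ in range(retry_spaces):                      # spell it again in the remaining spaces
            attempt = input(f"Spell '{word}' again: ").strip().lower()
            print("Correct!" if attempt == word else f"Check the spelling: {word}")

if __name__ == "__main__":
    ccc_drill(["catch", "pitch", "clutch"])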
Readers should note that although there are numerous benefits to integrating spelling within reading instruction, spelling activities add considerable time to intervention. Students write slowly, especially beginning readers and students with academic difficulties. A spelling response requires significantly more time than a reading response, which means that students may have the opportunity to read several words in the time it takes them to spell one word. Thus, spelling words can result in fewer opportunities to respond compared to reading words. Spelling activities can be made more efficient by using letter tiles or cards (as in word building), but students should also spell words by writing. Overall, spelling activities should be included in reading instruction, but they should be used judiciously, with the recognition that it takes students longer to spell words than to read them and that repeated exposures to words in print are a requirement for orthographic mapping to occur.
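A rough, back-of-the-envelope comparison makes the trade-off concrete. The per-response times below are our assumptions for illustration, not figures from the research cited above.

# Hypothetical arithmetic: opportunities to respond in a 5-minute practice block.
block_seconds = 5 * 60
seconds_per_reading_response = 4      # assumed: read one word aloud and receive feedback
seconds_per_spelling_response = 20    # assumed: write one word, check it, and correct it

print("reading responses: ", block_seconds // seconds_per_reading_response)   # 75
print("spelling responses:", block_seconds // seconds_per_spelling_response)  # 15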
Integrating Word Meaning within Decoding Instruction It can also help to briefly define words, or use them in a familiar sentence, when they are introduced in decoding instruction. Theory and emerging empirical evidence indicate that teaching what words mean and how they are used (i.e., vocabulary) when they are targeted in decoding instruction can help students add new words to memory more easily (Austin et al., 2022; Dyson et al., 2017; Kearns et al., 2016; Kearns & Al-Ghanem, 2019; Ouellette & Fraser, 2009; Perfetti, 2007; Petersen-Brown & Burns, 2011; Seidenberg, 2017). This is consistent with connectionist frameworks of reading acquisition (Seidenberg, 2017), and Perfetti’s lexical quality hypothesis (2007), which posit that learning to read words is about learning connections between the word’s orthographic representation (how it is spelled), its phonological representation (how it is pronounced), and its semantic representation (what it means). When decoding instruction also refers to the meaning of a word and how it is used, students may more easily connect the word’s spelling with its pronunciation. Additionally, integrating word meaning with decoding instruction improves vocabulary knowledge (Austin et al., 2022), a key aspect of reading comprehension. Examples of integrating word meaning within decoding instruction are provided across the sections that follow.
Interventions and Strategies for Reading Phonetically Irregular Words English is referred to as a semi-transparent or so-called “opaque” orthography, meaning that there are rules to letter sounds and spelling pronunciations, but those rules do not always apply. For example, ch usually makes a /ch/ sound as in chat, but makes a /k/ sound in school. The vowel combination ea is pronounced differently in bear, bead, and break. The letter s usually makes a /s/ sound as in sit, but makes a /z/ sound in bees, flies, and runs. Some refer to words in which some of their letters deviate from their common sound as sight words. This perspective tends to assume that, because sounding out will not result in an entirely correct pronunciation (i.e., such words are not “wholly decodable”), phonetically irregular words must therefore be read on a whole-word basis “by sight.” This perspective is misleading, as we discussed in detail in Chapter 2. A more productive view, consistent with contemporary theories of word reading (Ehri, 2020; Seidenberg, 2017), is that words exist on a continuum of regularity. Foundational skills in phonemic
awareness and alphabetic knowledge are critical for reading all words, not just words that are phonetically regular. As we discuss in a moment, learning to read phonetically irregular words requires an additional step to adjust a phonetic pronunciation to the correct pronunciation, but word spellings are added to orthographic memory based on the letters within them, and alphabetic knowledge provides that mechanism. When teaching students to read phonetically irregular words, it is important to keep some things in mind. Often, one hears practitioners talk about how “messy” and “inconsistent” the English writing system is, lamenting the fact that letter–sound correspondence and phonics rules cannot be used to read all words that students encounter. These perspectives understate the regularity of printed English. English is indeed an opaque (semi-regular) orthography. However, written English is far more regular and consistent than many assume, which was expertly discussed by Kearns and Whaley (2019). First, the majority of letters in the alphabet make only one sound in the vast majority of words in which they occur. This is also true of many letter combinations, which are associated with one pronunciation in most words. Second, in most irregular words that are referred to as sight words, the majority of letters within them make their most common sound. Very often, irregular words have only one or two letters (usually vowel sounds) that do not conform to their most common sound. Students get a lot of useful information by attempting to sound out an irregular word, and in many cases, their approximate pronunciation may be close enough for them to make an adjustment to the correct pronunciation. The important point is that reading both regular and irregular words is made possible by the same set of skills, and that “irregular” words are actually quite regular in many respects. This has implications for the messages we send to students about word reading, as well as how we teach irregular word reading. As Kearns and Whaley (2019) pointed out, describing English as “messy” to students is problematic. It implies that reading is unpredictable and inconsistent, which can further increase anxiety among struggling readers. Instead, our messages to students should be that English has a lot of consistencies, and it is far more predictable than they might think. They should have confidence in relying on their knowledge of letter sounds and letter combinations to read words. They also should be taught that their primary strategy—the one they use first for attacking unknown words—is to try to sound them out. Thus, teaching students to read phonetically irregular words is very similar to how regular words are taught, but with an additional step. Colenbrander et al. (2022) found that when teaching students to read irregular words, it was more effective to have students practice spelling them or attempt to sound them out and correct mispronunciations of the decoded words, compared to a “look-say” approach to teaching words as whole units. Carnine et al. (2017) recommend prompting students to try sounding the word out first (provided that students have learned how to sound out words and they know most or all of the letter sounds). This reinforces sounding out as the student’s most reliable go-to strategy when encountering an unknown word. In some cases, the student’s sounding out attempt will result in a correct pronunciation. 
If not, the teacher immediately provides the correct pronunciation of the word, and the students repeat it. This demonstrates for students how to adjust or “fix up” their pronunciation of a word so they may add it to their oral vocabulary. This is also an ideal time to briefly define the word, or use it in a familiar sentence, which may help students establish the connections between the word’s spelling, pronunciation, and meaning. An additional step can include having students spell the word out loud. This further directs their attention to how words are spelled,
which is the key to word reading. What should be avoided are "look-say" approaches that encourage students to guess at words based on whole shape or form. Thus, considering the recommendations above, an explicit instruction approach to teaching the irregular word was might look like this:

Teacher: (while pointing at was in clear view of the student) Here is a new word, sound it out.
Student: Www . . . aaaaa . . . ssss . . . wass.
Teacher: Almost! This word is was. What word?
Student: Was.
Teacher: Good. Was. Like, "I was so hungry" or "Yesterday was Monday." Now spell was.

After the student spells the word, the teacher would have them read it again. The student should then practice reading the word aloud several times, with it interspersed among previously learned words, to the point that they can recognize the word quickly without having to sound it out.
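As a small illustration of the interspersed practice just described, the sketch below builds a short practice sequence in which the newly taught word reappears after short runs of previously learned words. The word lists, the number of repetitions, and the gap size are hypothetical choices, not prescribed values.

# Sketch: intersperse a newly taught irregular word among previously learned words.
import random

def interspersed_practice(new_word, known_words, repetitions=4, gap=3):
    """Each repetition of the new word is separated by `gap` randomly chosen known words."""
    sequence = []
    for _ in range(repetitions):
        sequence.append(new_word)
        sequence.extend(random.sample(known_words, k=min(gap, len(known_words))))
    return sequence

print(interspersed_practice("was", ["sat", "mat", "pin", "hat", "sit"]))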
II. INTERVENTION AND STRATEGIES FOR ADVANCED WORD READING SKILLS
In contrast to the basic word reading addressed in the previous sections, we use "advanced" word reading skills to refer to the ability to read words beyond basic spelling patterns. This includes words that are longer; that have multiple syllables, prefixes, and/or suffixes; and that involve more complex spelling patterns. These are words students encounter in academic texts with increasing frequency beginning in second grade and beyond, and this is often an area in which students with word reading difficulties experience considerable challenges. Given the prominence of these types of words in academic texts, and the fact that students in mid-elementary grades and beyond are often expected to learn content in language arts, science, and social studies through reading, difficulties reading these types of words can be a primary source of reading comprehension difficulties and low achievement overall. Students with difficulties in this area are a common type of referral for school-based evaluation professionals.
Teaching Students a Set of Decoding Strategies and How to Use Them Flexibly Sounding out should be the primary strategy that students use to attack unknown words. However, there is significant benefit to learning a set of additional, related decoding strategies for situations in which sounding out is not successful. This is especially important for reading longer and more complex words. It also promotes independence in reading by providing students with “self-teaching” strategies they can use when a teacher is not present. The following briefly discusses a set of decoding strategies that can be taught after students have learned to sound words out. For additional information and resources on these strategies, Kearns and Whaley (2019) provide a description of their use, and include several tables containing common syllables, morphemes, and spelling patterns
that are useful to teach as part of such strategies. O’Connor (2014) and O’Connor et al. (2015) are also great resources for teaching multisyllabic words. Additionally, Lovett et al. (2017) conducted a rigorous experimental study that demonstrated significant benefits in teaching students a set of decoding strategies and how to flexibly use them. We reiterate that the following decoding strategies should be taught after students have learned to read basic words and can attack unknown words by sounding out. Students should also have good knowledge of letter–sound correspondences (including individual letters, digraphs, and reading blends), although there may be some letter combinations to re-teach. As such, these approaches are often appropriate for students with word reading difficulties who have some general or semi-adequate foundational skills in phonemic awareness, alphabetic knowledge, and reading basic words. However, they do not have to be particularly strong in these areas for instruction to proceed while foundational skill gaps are remediated.
Word Identification by Analogy (Rhyming) Strategy Earlier, we discussed the use of rime-based strategies as a support for basic decoding (see the prior section on word building), in which students can benefit from learning to read words with a common rime unit (e.g., mit, sit, pit, fit, bit, hit). Students can learn to use similar rhyming/analogy strategies to help them read longer words that share a common spelling pattern, common rime unit, or other word part shared across other words (Gaskins et al., 1986). For example, if a student knows most letter sounds and has learned to read catch, they are in a good position to read match, hatch, batch, patch, thatch, and scratch. There are many times that struggling readers do not recognize the similarity in spellings across words, particularly in terms of common rime units or other larger word chunks. Intervention in these situations can focus on directing students’ attention to portions of unfamiliar words that they do know how to read. Lovett et al. (2017) taught a rhyme-based analogy strategy as part of the set of decoding strategies they taught students to use if sounding out was unsuccessful. Here, the teacher modeled the use of the strategy by demonstrating, saying, for example, “If I can read catch, I can also read match” while pointing to the words. The teacher should write match under catch so that the portions of the words that are alike are aligned vertically. Additional words like batch, scratch, thatch, and so on, would be added below, each written so that atch is aligned vertically. As with other strategies, briefly referring to word meanings or using them in a sentence may help students make faster connections. For example, when the student correctly reads match, the teacher might say, “That’s right, like ‘I don’t always wear colors that match’ or ‘Please light the candle with a match.’ ” An additional tip: When working with students directly, do not be afraid to mix in a few words that vary in their pronunciation, for example, watch. Have students read the word aloud. Doing so affords them greater opportunity to “hear” if it sounds right, and if not, they may recognize what the pronunciation should be and adjust accordingly. If not, the teacher should provide immediate feedback on the correct pronunciation. You can say, “Yes, the word looks like that and is spelled like the others, but it is pronounced ‘watch,’ like a watch you wear on your wrist, or when you sit down to watch TV.” Using the word in context or referring to its meaning can help solidify the pronunciation of the word, especially words that students have heard before. This flexibility with variable pronunciations, especially in terms of vowel sounds, is the basis of flexible vowel strategies described next.
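The rime-based logic lends itself to a simple generative routine. The sketch below attaches taught onsets to a shared rime and checks the candidates against a small lexicon; the onset list and the lexicon are hypothetical stand-ins for whatever the student has actually been taught.

# Sketch: build a word family from a rime unit and a set of taught onsets (all lists are hypothetical).

def word_family(rime, onsets, lexicon):
    real, other = [], []
    for onset in onsets:
        candidate = onset + rime
        (real if candidate in lexicon else other).append(candidate)
    return real, other

lexicon = {"catch", "match", "batch", "patch", "hatch", "latch", "thatch", "scratch"}
real, other = word_family("atch", ["c", "m", "b", "p", "h", "l", "th", "scr", "w"], lexicon)
print("family words to practice:", real)
print("needs a closer look:", other)   # e.g., 'watch' is real but pronounced differently,
                                       # so the tutor would model the fix-up described above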
“Vowel Alert” and Flexible Vowel Strategies Often, what makes a word “phonetically irregular” is the vowel or vowel combination. Single vowels may make a short or long sound. Vowel combinations (i.e., vowel teams such as ea, ou, oo, ai, ee, oi, etc.) typically have a most common pronunciation, but some also have additional variations. For example, the sound for oo in book and cook is different in boot; ea usually makes a long-e sound as in bead, but makes different sounds in search, heart, and tread. Lovett et al. (2017) included vowel alert in the set of decoding strategies they taught students with significant word reading difficulties in the first through third grades. Students learned to use it as one of their strategies if sounding out was unsuccessful. With vowel alert, students are taught to identify the vowel and to try sounding it out using alternate pronunciations for the vowel that have been previously taught. For single vowels, students can try sounding out with the short or long sound. For vowel combinations, students can try alternate pronunciations that have been taught. The interventionist should always be there to provide immediate affirmative feedback, prompt students to try another strategy (like those described across this section), or immediately help students to “fix up” and adjust a pronunciation so it is correct. This flexibility in trying and adjusting pronunciations has been referred to as “set for variability” and is a hallmark of what proficient readers do. Steacy et al. (2016) observed benefits for using the vowel alert strategy with students with significant word reading difficulties in the third through sixth grades. Teaching struggling readers to be flexible with vowel sounds can add significantly to their independence in reading unfamiliar words. Additional tips are provided by Kearns and Whaley (2019).
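The "try another vowel sound" idea can be expressed as enumerating alternatives for each vowel unit in a word. The sketch below is only illustrative: the sound labels, the short list of vowel teams, and the way units are located are simplified assumptions, and the tutor still judges which attempt produces a real word.

# Sketch: enumerate "vowel alert" pronunciation attempts (sound labels are simplified placeholders).
from itertools import product

VOWEL_ALTERNATIVES = {
    "ea": ["long e (bead)", "short e (tread)", "ar (heart)"],
    "oo": ["oo as in book", "oo as in boot"],
    "o":  ["short o (cost)", "long o (most)"],
    "a":  ["short a (cat)", "long a (late)"],
}

def vowel_units(word):
    """Pick out vowel graphemes, matching two-letter teams before single vowels."""
    units, i = [], 0
    while i < len(word):
        two = word[i:i + 2]
        if two in VOWEL_ALTERNATIVES:
            units.append(two)
            i += 2
        elif word[i] in VOWEL_ALTERNATIVES:
            units.append(word[i])
            i += 1
        else:
            i += 1
    return units

def vowel_alert_attempts(word):
    """Yield one pronunciation attempt per combination of alternative vowel sounds."""
    units = vowel_units(word)
    for combo in product(*(VOWEL_ALTERNATIVES[u] for u in units)):
        yield list(zip(units, combo))

for attempt in vowel_alert_attempts("search"):
    print(attempt)   # the student tries each until one "sounds like a real word," with tutor feedback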
Strategies for Reading Multisyllabic Words As students progress through the elementary grades, the number of words with multiple syllables they encounter in text increases significantly. These words pose particular difficulty to students with word reading difficulties and serve as a barrier to reading comprehension. Fortunately, there are strategies students can add to their decoding toolkit to use when encountering longer words.
Syllable-Based Strategies and ESHALOV
Syllables are word parts that contain a vowel sound. For example, the word understand has three syllables: /un/-/der/-/stand/. Syllable-based strategies provide a way for students to break down a long word into smaller word units that they can read individually, and then put the parts together to read as a whole word (Kearns & Whaley, 2019). Essentially, the process is similar to how students learned to sound out single-syllable words using letter sounds. To learn this strategy, students should know the short and long sounds for all vowels (including y as a vowel), and the common vowel combinations (ea, ei, ay, ee, oo, ou, oy, etc.). One useful syllable-based decoding strategy, described by O'Connor et al. (2015), is Every Syllable Has At Least One Vowel (ESHALOV). Using explicit instruction, students are taught to do the following, using the word numerous as an example:
1. Underline all of the vowels in the word (e.g., numerous).
2. Join any vowel combinations (i.e., vowel teams) into one vowel sound (e.g., the ou in numerous is joined into a single sound).
3. Identify the word parts the student can read, which may include prefixes, suffixes, or smaller parts of the word (e.g., er, ous).
4. Count the number of word parts that should be put together, which equals the number of vowel sounds (e.g., three in numerous).
5. Break the word into parts, by syllables, for decoding (e.g., num-er-ous).
6. Try a pronunciation of the word. The teacher should prompt the student to fix up word parts by trying alternative vowel pronunciations (see the discussion of vowel alert above). For example, in numerous, the teacher may need to prompt the student to try the long-u sound in num.

To support ESHALOV, it is recommended that intervention include teaching students the most commonly occurring syllables, such as ter, com, ex, ty, ble, di, and others (see Kearns & Whaley [2019] and Zeno et al. [1995] for common syllable lists). These syllables can be taught through explicit instruction in the same ways that basic word reading and irregular word reading skills are taught. The ESHALOV strategy should be introduced carefully across several sessions (recognizing that reading multisyllabic words is difficult for most students), and the principles of explicit instruction, ample practice opportunities, and immediate feedback apply throughout.
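The first ESHALOV steps (find the vowels, join the vowel teams, and count the parts) can be mimicked with a short routine. In the sketch below, the vowel-team list is a small hypothetical subset; actual instruction would use whatever teams the student has been taught.

# Sketch of ESHALOV steps 1-4: find vowels, join vowel teams, and count the expected word parts.
import re

VOWEL_TEAMS = ["ea", "ee", "ai", "ay", "oa", "oo", "ou", "ow", "oy", "oi", "ei", "ue"]
PATTERN = re.compile("|".join(VOWEL_TEAMS) + "|[aeiouy]")   # taught teams first, then single vowels

def eshalov_vowel_units(word):
    return PATTERN.findall(word.lower())

for word in ["numerous", "understand", "remainder"]:
    units = eshalov_vowel_units(word)
    print(f"{word}: vowel units {units} -> expect {len(units)} word parts")
# numerous: vowel units ['u', 'e', 'ou'] -> expect 3 word parts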
Morpheme‑Based Strategies Morphemes are the smallest units of speech that hold meaning and can include single words, roots, prefixes (e.g., pre = before, dis = not, sub = below), or suffixes (e.g., able = able to, less = without, est = the most). Some syllables are morphemes, but some morphemes contain more than one syllable (e.g., inter, intra). The key aspect is that morphemes mean something. Many longer words that students encounter as they get older will contain morphemes. As Kearns and Whaley (2019) note, teaching students about morphemes and how they are used in words is helpful for two reasons: (1) Like syllable strategies, they help students break down longer words into parts they recognize, which makes it easier to put the parts together and read the whole word, and (2) teaching morphological knowledge can improve students’ vocabulary by providing them with a way to independently infer the meaning of an unfamiliar word. Several studies have observed the benefits of morphological instruction on students’ word reading and other skills (for reviews, see Bowers et al., 2010; Goodwin & Ahn, 2013). To teach morpheme-based decoding strategies, first teach students a set of common morphemes, both in terms of how they are pronounced and what they mean. Carnine et al. (2017) and Kearns and Whaley (2019) provide lists of morphemes that are helpful to teach. Each morpheme should be targeted individually, through explicit instruction, in ways similar to how students were taught to decode simpler words. Previously taught prefixes, suffixes, and roots should be reviewed frequently. Morpheme-based decoding instruction can begin once students are able to read a few morphemes, and others can be taught while instruction progresses (i.e., it is not necessary to wait until all morphemes are taught). Morpheme-based decoding can be taught like syllable strategies such as ESHALOV; students identify morphemes in the word by underlining and identifying word parts they know. They read these sections individually, then put them together to read as a whole word. The interventionist provides support in affirming correct responses, adjusting pronunciations as needed, and providing corrective feedback.
Peeling Off Peeling off is another decoding strategy that Lovett et al. (2017) taught students in their study. With this strategy, students learn to underline or circle the prefixes or suffixes in a word, thereby “peeling off” word parts to isolate the root word. Students then read each part, put the pieces back together, and read the word as a whole unit. This strategy is made possible by first teaching common prefixes and suffixes (including morphemes) described in the strategies above. It is best to start with words that have only one prefix or suffix, and as students learn the strategy and are successful in reading words accurately, add words that have both a prefix and a suffix. Words used in initial instruction should include prefixes, suffixes, and roots that students have already learned to read; otherwise, they will not be able to recognize them in order to “peel them off.” As with the other strategies, introduce the peeling off technique carefully using explicit instruction, with plenty of supported practice, and immediate affirmative and corrective feedback. Interventionists are also reminded to prompt students to reflect on their pronunciations to evaluate if they sound right. When needed, prompt them to adjust and “fix up” partially correct pronunciations, and note the meaning and use of unfamiliar words as students read them.
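Peeling off can be pictured as repeatedly stripping taught affixes from the outside of a word until only the root remains. In the sketch below, the prefix and suffix lists are short hypothetical examples of affixes a student might have been taught; anything not on those lists simply stays attached to the root.

# Sketch of "peeling off": strip taught prefixes and suffixes to expose the root word.
PREFIXES = ["un", "re", "dis", "pre", "sub", "inter"]
SUFFIXES = ["ness", "less", "able", "ment", "ing", "ed", "est", "ly", "s"]

def peel_off(word):
    prefixes, suffixes = [], []
    changed = True
    while changed:                                         # keep peeling while a taught affix is found
        changed = False
        for p in PREFIXES:
            if word.startswith(p) and len(word) > len(p) + 2:
                prefixes.append(p)
                word = word[len(p):]
                changed = True
        for s in SUFFIXES:
            if word.endswith(s) and len(word) > len(s) + 2:
                suffixes.insert(0, s)
                word = word[: -len(s)]
                changed = True
    return prefixes, word, suffixes                        # read the parts in order, then the whole word

for w in ["unhelpful", "disagreement", "subheadings"]:
    print(w, "->", peel_off(w))
# unhelpful -> (['un'], 'helpful', [])   ('ful' is not on this short suffix list)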
BEST Strategy The BEST strategy (O’Connor et al., 2015) stands for Break it apart, Examine the base word, Say each part, and Try the whole word. It is very similar to peeling off and provides another way to teach students to decode complex and multisyllabic words. O’Connor et al. (2015) used ESHALOV with BEST to improve reading skills in history classes among eighth graders with reading difficulties, and observed stronger outcomes in vocabulary and reading comprehension compared to students in a control condition.
Integrating Decoding Strategies and Promoting Flexible Use BEST and peeling off are ways to organize and integrate sounding out, flexible vowel, syllable, and morpheme strategies as a “toolkit” for decoding unknown words. Students can be taught that there are multiple ways to read words, even words that are intimidatingly complex, and that they can apply their decoding strategies in the same way a carpenter or mechanic would use multiple tools in a toolbox. Flexible, adaptable use was a primary aim of studies that investigated teaching students multiple decoding strategies (Lovett et al., 2017; O’Connor et al., 2015). Lovett and colleagues, for instance, used a meta-cognitive technique in which tutors modeled a self-teaching dialogue describing the decoding strategies they had learned so far, and when and how to use them. Students then practiced the dialogue as they used the strategies in reading words. Sounding out was the first strategy taught and always the strategy students tried first. New decoding strategies were taught across a series of weeks, each time adding the strategy to the list of options. If sounding out did not work, students learned to try the others, usually in order (rhyme/analogy, vowel alert, peeling off, and “SPY,” a strategy in which students looked for smaller words within compound words). In this way, students learned to engage in a process that proficient readers do; independently use a flexible set of decoding strategies for determining the pronunciation of an unfamiliar word. In summary, promising approaches for students with word-level reading difficulties include teaching a small set of useful decoding strategies and how to apply them flexibly.
Sounding out should be taught as the go-to strategy students try first. After that, they can learn to apply decoding strategies such as rhyme/analogy, learn to read common syllables and use a strategy such as ESHALOV, and learn common morphemes and apply them to techniques such as peeling off or BEST to dissect complex, multisyllabic words. Across all instruction, help students adjust approximate pronunciations, and note the meaning and usage of words students learn to decode, which may aid the connection-forming processes needed to commit words to orthographic memory.
PRACTICE STRATEGIES FOR IMPROVING WORD AND TEXT READING EFFICIENCY
As discussed at length in Chapter 2, fluency in reading text is a product—a symptom—of proficiency with the skills that underlie it. The most important skill involved in reading fluency is efficient word recognition, meaning that words are read with ease and automaticity, with little conscious effort. The first step toward text reading efficiency is improving word reading accuracy, and all the strategies and interventions described above are aimed at that. But accuracy alone is not sufficient; words must be learned to the point that they are read with efficiency. That requires practice. The aim of the strategies described in this section is to build accuracy and automaticity in reading words and text. Therefore, they are important parts of the interventions described in the prior Sections I (on basic reading) and II (on advanced word reading). The difficulty of the words and text used in practice is adjusted accordingly. It is never too early to begin practicing reading with the goal of building automaticity; the only prerequisite is that students can read the words or text used for practice with at least 90% accuracy (i.e., no more than one error for every 10 words). Students should also know at least one decoding strategy to use as a fallback (sounding out is essential, plus any others that are taught). Thus, words can be practiced as students learn how to read them. Whether practice occurs with words in isolation or in passages requires some consideration.
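A quick accuracy computation can tell an interventionist whether a word set or passage meets the prerequisite just described. The threshold follows the 90% guideline above; the counts are made-up examples.

# Sketch: check whether practice material meets the 90% accuracy prerequisite (example counts only).

def percent_accuracy(words_read, errors):
    return 100 * (words_read - errors) / words_read

words_read, errors = 120, 9
accuracy = percent_accuracy(words_read, errors)
ready = accuracy >= 90
print(f"{accuracy:.1f}% accurate ->", "ready for automaticity practice" if ready else "teach the words first")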
Strategies for Building Reading Efficiency: Reading Words in Isolation or in Reading Passages?
There are several reading practice strategies that use isolated words, such as reading words in lists or on flashcards. On one level, isolated word reading practice makes sense. Because a major part of reading fluency is efficiency in recognizing individual words (Jenkins et al., 2003; Levy et al., 1997; Torgesen et al., 2001), building automaticity at the word level should improve reading fluency. Levy et al. (1997) demonstrated that when students practiced reading a set of words in list form (i.e., out of context), improvements in word reading transferred to more efficient reading of those words in passages (i.e., in context). Martin-Chang and Levy (2005) also found that training to read words in isolation (lists) transferred to improved reading efficiency in context; however, training words in context resulted in stronger text-reading improvements than training in isolation. The two types of practice may also strengthen different things. Ehri and Wilce (1980) found that practicing reading words in lists resulted in more accurate and efficient word recognition (i.e., stronger orthographic knowledge), whereas practicing words embedded in sentences was associated with stronger knowledge of the words' meanings and use (i.e., semantic and syntactic knowledge).
Reading text requires a significant amount of serial processing—rapid processing of multiple successive words (Protopapas et al., 2018). Although the demands of serial processing can be approximated in flashcard or word list reading exercises, reading does not naturally take place that way. Reading comprehension when reading text can provide some benefit for recognizing words more efficiently (Jenkins et al., 2003), and reading connected text provides the most natural practice venue. In their meta-analysis, Morgan and Sideridis (2006) observed that techniques that involved reading passages were most effective, and word recognition training was the least effective of all interventions implemented for oral reading. However, the way they defined word recognition training for their meta-analysis was more consistent with decoding instruction to build accuracy, not word reading practice to build efficiency.

There are also practical issues to consider. It is difficult and time-consuming to develop meaningful reading practice passages that are controlled for difficulty and focused on words of interest, such as word types recently targeted in intervention. Word list and flashcard practice are much better suited for this situation because they can be specifically tailored (and quickly adjusted) to focus on words the interventionist wants to target. A word list can be built to include words specifically taught and can intermix words that have been previously learned (for review), as well as words that were not specifically taught but include the same letter clusters or similar spelling patterns to test for generalization. It is very difficult to write reading passages that accomplish this. Additionally, isolated word reading practice helps students focus specifically on their decoding skills, which is important because students with word-level reading difficulties tend to overrely on context (i.e., the meaning of the passage) to guess at words rather than use a decoding strategy.

Should reading practice thus involve isolated word reading practice (word lists and flashcards), or reading connected text? The answer is, as in most cases, "it depends" on the needs of the student, and in most situations, students benefit from both types of activities. Therefore, interventionists should consider including both isolated word reading and connected-text practice in intervention sessions, but the relative emphasis placed on one or the other can vary depending on the student's skill level or purpose of instruction. In the discussion that follows, we explore strategies for both types of practice, and the situations in which they are best suited.
Isolated Word Reading Practice Strategies
To reiterate, we refer to isolated word reading practice as activities in which words are practiced outside of connected text. Readers might also see this referred to as decontextualized word reading in the literature because words are read out of context. The most common ways this is implemented are with flashcards or with reading words in list form. There are several things to consider when developing and implementing isolated word reading practice activities, with the goals of improving word reading accuracy and efficiency. (1) Repetition is key; students need repeated opportunities to read words to ultimately add them to their orthographic memory. Practice should occur every day, meaning that daily intervention should devote a portion of time to reading practice. (2) Immediate feedback should be provided that either affirms correct responses or corrects errors, by prompting students to use a taught decoding strategy like those discussed above or by correcting a letter–sound or word part the student pronounced incorrectly. (3) The set of words should be appropriate for the student, the instructional goals, and the content that students will be reading (i.e., words practiced should be those that contain targeted letters, letter combinations, or spelling patterns).
(4) Practice should be intensive but brief; students should have continuous opportunities to read words, but sessions should be short. Word reading can be exhausting for struggling readers. Short sessions maintain engagement and motivation. (5) Recognize that the goal of isolated word reading practice is to strengthen students' skills in "pure" word decoding, whereby they do not have the benefit of context to aid in word recognition. This can be a key goal for some students, but not all. As always, consider the individual needs of the student.

WORD LIST PRACTICE
Word list reading practice simply involves a set of words to practice, arranged in list form on a page. Writing the words on a dry-erase board is also a good option. The interventionist asks the student to read the words in the list aloud, providing immediate affirmative feedback and error correction. Lists can be short (e.g., 10 words) for students at basic stages of word reading or longer (e.g., 25–50), depending on the skill level of the student and their stamina. Interventionists must balance the importance of opportunities to read words with the need to keep sessions brief and rewarding. Word lists should be mixed up after every few presentations so that students do not start to memorize based on word order or a word's position in the list (which makes using dry-erase boards advantageous for practice). If using a printed list, students can read the list in a different order.

Word lists can be constructed in different ways. The list can consist entirely of words that were targeted in the most recent instruction, and this can be helpful if the interventionist wants to maintain a high level of success or focus students' attention on a particular letter or spelling pattern that was taught (e.g., a "th" initial or final sound; words with silent-e). However, there is also considerable value in interspersing (1) words that were taught in previous sessions for review, and (2) words that were not specifically targeted but include the targeted letter sounds or spelling pattern. The ability to generalize decoding skills to words that were not specifically used in instruction is something typically developing readers do and is a hallmark of reading acquisition (i.e., self-teaching; Share, 1995, 2008), but something that struggling readers find very difficult. Providing students with an opportunity to generalize decoding skills, with support and feedback from the tutor, is key to fostering this skill. (A brief illustrative sketch of assembling such a mixed list appears at the end of this section.)

Word lists are constructed with words in rows and columns. Students can either be asked to read the words in columns from top to bottom, or asked to read words in rows from left to right. I (Clemens) prefer having students read words in rows from left to right because this is more consistent with the way text is read. This also relates to a benefit of practicing words in list form compared to flashcards: word lists may help promote the more rapid serial processing of words that is involved in reading text (Protopapas et al., 2018), whereas in flashcard practice, serial processing is interrupted by the tutor having to display the next card.

Although a student's reading of the lists can be timed, I recommend caution in using timing within practice situations. Word list practice should not be conflated with word list assessment. In testing situations, timed word list reading provides important data on students' word reading efficiency. But in practice, the emphasis conveyed to students should be on becoming a better word reader, meaning they should strive to read the words with 100% accuracy and practice so that reading them becomes easier. They should always be prompted to rely on decoding strategies (never guessing) and reminded that, through repeated practice, they should be able to read each word simply by seeing it. Sometimes, reading errors signal the need for the student to slow down and pay more careful attention to the words' spellings.
The goals of practice are establishing and strengthening orthographic–phonological connections (i.e., spelling–pronunciation connections; so-called "orthographic mapping"). Strengthening these connections should naturally result in improved automaticity over time because students are now recalling learned word spellings. Interventionists can also praise and recognize students who are improving their automaticity in reading words while maintaining high accuracy. But interventionists should avoid overtly communicating messages about "speed" or trying to read words as fast as possible. In fact, words like fast and speed should be taken out of the interventionist's vernacular when practicing with students. Overt encouragement for students to try to read as fast as possible could promote bad habits of guessing at words based on whole-word shapes or initial letters, rather than the goal of fostering strong spelling–phonological connections in memory.
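As referenced above, here is a brief illustrative sketch of one way to assemble a mixed practice list from newly taught words, review words, and untaught generalization words, arranged in rows and reshuffled between readings. All word sets and proportions are hypothetical choices, not prescribed lists.

# Sketch: assemble and reshuffle a mixed word list for practice (all word sets are hypothetical).
import random

def build_word_list(new_words, review_words, generalization_words, per_row=5):
    words = list(new_words) + list(review_words) + list(generalization_words)
    random.shuffle(words)                                    # reshuffle between readings
    return [words[i:i + per_row] for i in range(0, len(words), per_row)]

rows = build_word_list(
    new_words=["catch", "pitch", "clutch"],                  # targeted in the most recent lesson
    review_words=["chat", "ship", "math", "thin"],           # taught in earlier sessions
    generalization_words=["latch", "fetch", "hutch"],        # not taught, same spelling pattern
)
for row in rows:                                              # students read each row left to right
    print("  ".join(row))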
FLASHCARD PRACTICE
Word reading practice can also be conducted with flashcards. Flashcard practice can simply involve presenting words in the deck in a random order, and the student reads each word aloud. However, there are specific techniques for selecting words and arranging them in the deck that can promote stronger learning and retention than a traditional drill. Indeed, one of the benefits of flashcard practice, compared to word list practice, is that the order of the words in the deck can be immediately adjusted. This can be particularly helpful for controlling the presentation and repetition of specific words. With flashcards, the number of words that appear between presentations of a target word may also be adjusted, which relates to an important learning principle discussed below. Incremental rehearsal is one flashcard technique applicable to word reading practice. We provide a general description of the strategy here; a more detailed description of incremental rehearsal can be found in the Academic Skills Problems Fifth Edition Workbook.

Incremental Rehearsal. Incremental rehearsal (Tucker, 1989) is a technique in which items targeted for instruction are presented between items the student has already learned. Incremental rehearsal is applicable across academic skill areas and is also discussed later in this chapter for mathematics. When applied to word reading practice, the number of previously known words between presentations of the target words is gradually increased so that the span of time and amount of information presented between occurrences of the target word expand. Incremental rehearsal is based on several well-established learning principles: instructional match, meaning there is an optimal balance between known and unknown content, which improves learning (Gickling & Havertape, 1981; Gickling & Rosenfield, 1995; Rodgers et al., 2018); and the interleaving and juxtaposition of known items, which requires the learner to make constant discriminative contrasts, a mechanism for stronger learning and retention (Birnbaum et al., 2013; Carvalho & Goldstone, 2014). As a word reading instruction and practice procedure, incremental rehearsal is based on having students practice newly taught words while never allowing more than 30% of the material presented to be unfamiliar words (i.e., not previously learned). Incremental rehearsal is generally conducted as follows:
1. Identify three target words for practice and write each word on an index card. These might be words with spelling patterns newly targeted in instruction. Some implementations of incremental rehearsal have identified words as those the student read incorrectly from a passage of text. For our present discussion, we refer to these words as unknowns.
2. Identify seven words the student knows, either because they were mastered in previous instruction or because they were read correctly in a passage of text. The student should be able to read these words automatically. Each word is written on an index card. We refer to these words as knowns.
3. The tutor arranges the known words in one stack and the unknown words in another.
4. Begin the session by presenting the first unknown word. The student should read the word using decoding strategies that are being targeted in instruction. Or, the tutor explicitly models the pronunciation, for example, "This word is search. What word?" The student can also be asked to spell the word to call greater attention to its spelling. This may also be an occasion to point out a letter or spelling pattern that is targeted for instruction; in this example, the ea vowel team and the sound it makes when followed by an r.
5. Now the incremental rehearsal begins. After the unknown word is presented, one of the known words is presented and the student says the word aloud. Next, the unknown word is again presented, followed by the known word previously presented, and then a new known word is added to the deck. This sequence repeats, in which every presentation of the unknown word is followed by the previously presented known words and the addition of one new known word to the deck. The process continues until all seven knowns have been "folded in" to the deck. Therefore, the sequence of presentation would be as follows (U = unknown word to be taught, K = known word): U-K1-U-K1-K2-U-K1-K2-K3-U-K1-K2-K3-K4-U-K1-K2-K3-K4-K5-U-K1-K2-K3-K4-K5-K6-U-K1-K2-K3-K4-K5-K6-K7-U. Readers will notice that the amount of time and the number of stimuli between presentations of the unknown word systematically increase across the drill as the known words are added.
6. The unknown word is now included in the stack of known words, and one of the previously known words is removed. This maintains the same ratio of known to unknown for the next word. This stack should be shuffled to prevent students from memorizing words that followed each other.
7. Next, the second unknown word is presented in the same way as the first and, using the same process, is folded in among the seven known words (which now include the previously unknown word).
8. The process repeats for the third unknown word. By its end, all three target words will now be part of the known stack. (Note: The procedure can be shortened by reducing the initial number of known words included to six or five.)
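The folding-in order in step 5 can be generated mechanically, which also makes the expanding spacing easy to see. The sketch below is purely illustrative; "U" and "K1" through "K7" are placeholders for the actual unknown and known words.

# Sketch: generate the incremental rehearsal presentation order for one unknown word.

def incremental_rehearsal_sequence(unknown, knowns):
    """Card order: the unknown word is folded in among a growing run of known words."""
    order = [unknown]
    for i in range(1, len(knowns) + 1):
        order.extend(knowns[:i])   # previously presented knowns plus one new known word
        order.append(unknown)      # the target word returns after a longer and longer gap
    return order

knowns = [f"K{i}" for i in range(1, 8)]          # seven known words (placeholders)
print("-".join(incremental_rehearsal_sequence("U", knowns)))
# U-K1-U-K1-K2-U-K1-K2-K3-U- ... -K1-K2-K3-K4-K5-K6-K7-U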
Incremental rehearsal provides high levels of repetition combined with a high level of correct responding. Numerous studies have observed benefits in students' acquisition and retention of new skills across academic areas (Burns et al., 2012), including word reading (Burns et al., 2004; Klingbeil, Moeyaert, et al., 2017). Several studies have observed that incremental rehearsal outperforms other flashcard techniques, such as drill sandwich and task interspersal (MacQuarrie et al., 2002; Volpe, Mulé, et al., 2011). Petersen-Brown and Burns (2011) added semantic (vocabulary) instruction to word reading in incremental rehearsal, and consistent with research noted earlier on the potential benefits of integrating semantic instruction within decoding instruction, they found that the addition of vocabulary instruction was associated with stronger word reading retention than incremental rehearsal alone. Incremental rehearsal has also been used for teaching and practicing letter–sound correspondence (Volpe, Burns, et al., 2011).

Strategic Incremental Rehearsal. A recent adaptation of incremental rehearsal is strategic incremental rehearsal (SIR), which has been shown to result in students learning more words in less time than traditional incremental rehearsal (January et al., 2017; Kupzyk et al., 2011; Klingbeil et al., 2020). Several of these studies have also observed superior retention of taught words using the SIR procedure. Unlike traditional incremental rehearsal, SIR uses only unknown words. The first new word is taught, which is immediately followed by the teaching of a second new word. The two words are presented until the student responds correctly and immediately to both. The words are shuffled. Then, a third word is taught, followed by presentation of the first two taught words (with presentation similar to incremental rehearsal). Incorrect responses are corrected by modeling the reading of the word. After the student responds correctly to all three words, the deck is shuffled, and a new word is taught and integrated with the previously taught words. The deck is always shuffled after each round, and new words are added only after the student responds correctly to all words in the set. Words that have been read correctly across three consecutive instructional sessions are removed from the deck and placed in a discard pile, which is periodically reviewed to ensure maintenance.

The SIR procedure is interesting because it maximizes learning time by focusing only on unknown words, as opposed to traditional incremental rehearsal, which includes a high number of presentations of known words. In SIR, words that have been taught immediately become part of the "known" pile; thus, a newly targeted word is rehearsed with an incrementally increasing number of known words between presentations. Both approaches involve aspects of spaced and expanded practice that are known to improve learning and memory (Varma & Schleisman, 2014). Both also include elements of interleaved practice, a learning technique from cognitive psychology that involves randomly intermixing different exemplars or tasks so that no two of the same type appear consecutively, and that has been found to result in strong recall, retention, and implicit learning of content not explicitly taught (Birnbaum et al., 2013; Lin et al., 2011; Rohrer et al., 2015, 2020). Another attractive aspect of SIR is that the exclusive focus on unknown words allows for a systematic approach to word reading instruction. Most previous applications of incremental rehearsal (including SIR) appear to select words for instruction more or less at random, based simply on the fact that they are unknown; there appears to be little consideration of the words themselves, nor does there appear to be a strategic or systematic approach to teaching words that align with or build on common letter combinations and spelling patterns. SIR appears to be better positioned as a practice strategy integrated within a systematic approach to decoding intervention.
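The bookkeeping that distinguishes SIR (all unknown words, reshuffling after each round, retiring words after three consecutive correct sessions) can be sketched as follows. This is an illustrative skeleton based on the description above, not a published implementation; the example words and session results are hypothetical, and tutor judgment still drives pacing and error correction.

# Sketch of SIR bookkeeping: add one new word at a time, reshuffle, retire after 3 correct sessions.
import random

class SIRDeck:
    def __init__(self):
        self.deck = []                 # words currently being rehearsed
        self.streaks = {}              # consecutive sessions read correctly
        self.discard = []              # retired words, reviewed periodically for maintenance

    def teach_new_word(self, word):
        self.deck.append(word)
        self.streaks[word] = 0
        random.shuffle(self.deck)      # the deck is always shuffled before the next round

    def record_session(self, results):
        """results maps each deck word to True (read correctly this session) or False."""
        for word, correct in results.items():
            self.streaks[word] = self.streaks[word] + 1 if correct else 0
            if self.streaks[word] >= 3:            # three consecutive correct sessions
                self.deck.remove(word)
                self.discard.append(word)

deck = SIRDeck()
for w in ["search", "break", "heart"]:             # hypothetical target words, all initially unknown
    deck.teach_new_word(w)
deck.record_session({"search": True, "break": True, "heart": False})
print(deck.deck, deck.discard)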
SUMMARY (AND CRITIQUE) OF WORD READING PRACTICE STRATEGIES
Word reading practice using words in list form or in flashcard drills provides frameworks for practicing "pure" decoding skills, meaning that students must rely solely on their knowledge of letter sounds, letter combinations, and spelling patterns to support orthographic–phonological (i.e., spelling–pronunciation) connections. This is valuable. These methods are also ways to focus practice content specifically on a target set of words relevant for instruction or practice, something that is difficult and time-consuming to do with reading passages.
With any word reading practice strategy, regardless of whether it occurs with word lists or with flashcards, it is important that instruction model sounding out and other decoding strategies, and that error correction procedures prompt students to sound out and call their attention to word spellings and the sounds of letters and letter combinations in the words. Particularly evident across incremental rehearsal and SIR studies are forms of instruction that tend to rely on whole-word "look-say" approaches to word recognition, which are less effective (Colenbrander et al., 2022) and stand in contrast to theory and evidence on word reading acquisition. Additionally, as noted earlier, many applications of incremental rehearsal and SIR appear to include words without any consideration of the words themselves, apart from the fact that they are "unknown." The approaches in these studies also seem to reflect outdated perspectives that all words must be "taught" for students to learn to read, in contrast to reading instruction that seeks to equip students with skills that generalize and transfer to words that have not been specifically taught (Compton et al., 2014; Ehri, 1998, 2020; Share, 1995, 2008; Seidenberg, 2017). It is not possible, or necessary, to target all 10,000+ words that students could encounter in print. Thus, when considering approaches like incremental rehearsal and SIR, readers are encouraged to integrate decoding strategies in instruction and error correction, like sounding out and others discussed earlier in this chapter, rather than whole-word "look-say" methods. Also, instead of including randomly selected unknown words, users should consider a more systematic approach to selecting words that share similarities based on problematic letters, letter combinations, spelling patterns, or word types that are at the heart of the student's reading difficulties. In addition to increasing the number of words the student reads, an aim of these approaches should be fostering the ability to transfer decoding skills to words that were not used in instruction or practice (i.e., words that share the same spelling features; orthographic or phonological neighbors). Incremental rehearsal and SIR have many advantages, and they certainly help students learn the words that are targeted in the sessions, but they can be better aligned with the science of reading in ways that promote transferable skills characteristic of reading proficiency.
Reading Practice in Connected Text For students at any level, intervention to improve reading skills should include daily opportunities to read text. This may be the single most important thing to do for a struggling reader, regardless of the nature of their reading difficulties. If time allows for only one activity in a reading lesson, it should be practice reading text with support from a skilled reader. Similar considerations discussed for isolated word reading practice apply here as well: (1) Text should be relevant to skills in need of improvement or the types of texts the student is expected to read in their classes, (2) the text should be at the student’s instructional level (more on that below), (3) students should read aloud with the tutor present so that affirmative feedback and error correction can be provided, and (4) practice sessions should be intensive (i.e., continuous reading on the part of the student) but brief to prevent fatigue. The messages conveyed to students about the purpose of practice reading text are important. I (Clemens) do not recommend timing students during reading practice,
although some strategies involve goal setting and self-graphing. As in isolated word reading practice, the emphasis should not be on “speed” or trying to read fast. Avoid confusing how reading proficiency is measured with how it should be practiced. The goal of reading practice should be to make reading easier. Rate will improve as reading becomes more efficient. Students should be taught to read with care, striving for 100% accuracy, and to understand what they read. Error correction should include prompts for students to use decoding strategies they have learned (e.g., sounding out, rhyme/analogy, flexible vowels, ESHALOV, morpheme recognition, peeling off) and to flexibly apply them when other strategies are not successful. Repeated practice, with emphasis on strengthening the spelling–pronunciation connections that make text reading efficient, is consistent with the literature on how reading proficiency develops. Compared to isolated word reading practice, a unique aspect to consider about practice with connected text is the reciprocal effects that occur between reading fluency and reading comprehension. In Chapter 2, we discussed the facilitative effect that reading efficiency has for comprehension by getting resource-draining word recognition out of the way. But we also noted evidence that reading comprehension influences text-reading fluency (Eason et al., 2013; Jenkins et al., 2003). Understanding text can enhance reading efficiency, and lack of understanding can cause a reader to slow down. When reading connected text, what occurs at the word level is not just orthographic (spelling) to phonological (pronunciation) connections, but that words are also quickly connected to a meaning relevant for the context of the passage. This forms the triangle model (i.e., orthographic–phonological–semantic) of word representations that is central to connectionist models of reading (Harm & Seidenberg, 2004; Seidenberg, 2017), and Perfetti’s verbal efficiency hypothesis (Perfetti, 1985) and his lexical quality hypothesis (Perfetti, 2007). Resource-efficient, automatic connections between words’ spellings, pronunciations, and meanings are the fundamental drivers of text-reading efficiency. These connections are only made possible by providing practice in reading connected text. APPROPRIATE LEVEL OF TEXT FOR TEXT READING PRACTICE
A key consideration for implementing instruction and practice activities with connected text is the level of text difficulty (i.e., grade level) that will be used. This involves considering the student's instructional level and the amount (and quality) of support that will be available to provide feedback and error correction. In Chapter 4, we described a way to determine a student's instructional level for identifying text appropriate for instruction and practice. Reading accuracy is a primary factor to consider, and consistent with other perspectives, we considered the student's instructional level to be text that the student reads with accuracy between 91 and 97%, which translates to roughly one error for every 10 to 20 words. This level of difficulty is appropriate for practice when a skilled reader (i.e., teacher, adult tutor, parent) is present to provide immediate and consistent feedback and error correction. It provides sufficient challenge to foster growth (remember, errors are learning opportunities), but the text is not so difficult that it results in frustration and discouragement. For situations in which the student reads with a peer, the student should demonstrate 97% accuracy or higher in the text used for practice. Peer-tutoring activities, such as PALS, provide very good options for reading practice; however, the feedback and error correction that peers provide will usually not be as accurate or consistent as that provided by an adult. Thus, students should be more accurate with the text provided for practice
when a less skilled reader is responsible for feedback. Independent practice (e.g., the student reading alone; so-called "sustained silent reading") should be avoided for students with reading difficulties because errors go unnoticed and uncorrected. As Seidenberg (2017) noted, "Children who struggle when reading texts aloud do not become good readers if left to read silently; their dysfluency merely becomes inaudible" (p. 130). In summary, selecting the level of text difficulty used for reading practice should consider the Goldilocks principle in terms of students' accuracy with it—text should be not too difficult and not too easy—but there also should be consideration of the quality of the feedback and support that will be available. Text can be more challenging when a skilled reader is present to correct errors and the student can tolerate it, but accuracy should not consistently fall below 90%. Also, keep in mind that the goal of reading connected text is to build automaticity and comprehension; the word-level decoding instruction that accompanies the intervention naturally targets content that is unknown.
SHOULD THE TEXT FOR READING PRACTICE BE DECODABLE OR AUTHENTIC?
There is some debate regarding whether beginning readers or students with reading difficulties should practice reading with texts that are decodable. "Decodable" text refers to stories or passages written with a high proportion of words that are phonetically regular (i.e., words with predictable letter–sound correspondence) and are thus considered "decodable" by students who have learned the letter–sound correspondences contained in the words. In contrast, "authentic" texts are stories or passages written without constraints on the phonetic regularity of the words. The distinction is not a simple dichotomy, however; decodable texts will always involve some phonetically irregular words such as the, was, she, his, and said (it is impossible to write meaningful text without them), and authentic texts will include regular words. Unfortunately, very little research has directly compared the effects of using decodable versus authentic texts, and what exists is equivocal. Some studies have indicated that students improved their reading skills with either kind of text, without a clear benefit of one over the other (e.g., Jenkins et al., 2009), and others have observed advantages of authentic text over decodable text in improving reading comprehension (e.g., Price-Mohr & Price, 2020). Until research says otherwise, it is better to think about how decodable and authentic texts can be used together strategically. Decodable text offers students opportunities to immediately practice new word reading skills in connected text. In addition to providing practice, the accessibility of decodable text can build confidence, which may be important for some students with reading difficulties who have experienced repeated frustration and embarrassment in reading. On the other hand, authentic texts help familiarize students with natural syntax and expose them to a broader range of spelling patterns and vocabulary than they encounter in decodable text. There are benefits to using both decodable and authentic text for reading practice, and the relative frequency with which one type is used can vary based on the student's skill level. Decodable text may be used more often for students at early stages of reading, and when teaching a new letter sound or spelling pattern. (Most providers of decodable texts offer books with a high proportion of words with a specific letter sound or letter combination.) Authentic text at the student's instructional level can also be used periodically, even for beginning readers. As students become more skilled, authentic text should be used more often, with use of decodable text only as needed. Eventually, authentic text can be used exclusively. Keep in mind that successful reading of authentic text should be the ultimate
goal for all students; decodable text is a tool to help students get there. Regardless of what type of text is used, a teacher should be present to provide affirmative and corrective feedback while the student reads orally, and error correction should prompt students to rely on their decoding skills to read unfamiliar words. "Predictable" text is a third type of text that may appear in early reading program materials, but it should be avoided. Predictable texts are passages written with consistent sentence structures (e.g., I see a tiger. I see a zebra. I see a butterfly.). They often include pictures of key words, and some words might be written in a different color. Predictable texts are problematic because they promote guessing at words based on pictures, salient visual features (e.g., word color), or memorization of the sentence structure. Avoid using predictable texts and use decodable and authentic texts strategically.
Connected-Text Reading Practice Strategies
Several strategies exist with the intent of improving reading efficiency in connected text. We provide an overview of those with the strongest empirical and theoretical support for improving reading, as well as critique, considerations, and innovations.
REPEATED READING
Repeated reading (Samuels, 1979) has been the most frequently studied strategy for targeting reading fluency (Stevens et al., 2017). Repeated reading commonly involves asking a student to read a passage of text three to four times in succession, in the same session. The text is usually a 100- to 200-word extract from a larger passage; however, the length should be adjusted based on the student's reading level (i.e., shorter passages should be used for students at more basic reading levels). The text should be within the student's instructional level, as noted above. Therrien and Kubina (2006) provide additional tips for implementing repeated reading. The effects of repeated reading have been investigated in numerous studies and summarized in several research syntheses and meta-analyses (Chard et al., 2002; Kim et al., 2017; Lee & Yoon, 2017; Stevens et al., 2017; Strickland et al., 2013; Therrien, 2004). Several of these reviews have observed benefits of repeated reading on reading fluency of struggling readers. Studies that have combined repeated reading with other components, such as passage preview prior to repeated reading (i.e., a skilled reader reads the text aloud while the student follows along) and timed reading with self-graphing (i.e., the student charts their fluency progress on a graph), have also observed positive outcomes on students' reading fluency. However, a closer look at the results of these reviews, and the studies included, reveals some important considerations. It is common for studies to evaluate fluency effects using the passages that were practiced as part of repeated reading, and in these cases, very large improvements in fluency are typically observed. On the other hand, studies that examined effects on unpracticed passages (i.e., generalized effects) observe, on average, effects on generalized reading fluency that are roughly half as large as the effects on practiced passages (Lee & Yoon, 2017; Strickland et al., 2013). An additional note of caution is that a small number of studies found that reading errors increased as reading rate increased (see Stevens et al., 2017). The logic behind repeated reading interventions requires reevaluation. Frequent and extended opportunities to read text are clearly important, as one of the things that
students with reading difficulties need the most is to read more. Repeated reading certainly accomplishes that. However, repeated reading interventions represent a narrow focus and may inadequately address the underlying problem with word recognition efficiency that causes problems with reading text. Problems are magnified when repeated reading is used as the only strategy for targeting reading difficulties, as has been common in many studies of repeated reading. In light of the conceptualization of reading efficiency we have focused on in this text, reading rate (i.e., words correct per minute) is a symptom of proficiency with basic skills (word reading most importantly), not a discrete skill. If the rate of reading connected text is viewed as the instructional target, interventions are more likely to be geared toward treating a symptom of reading problems rather than the cause. Moreover, most implementations of repeated reading and other rate-focused strategies fail to consider that reading needs only to occur at a rate that does not impede comprehension. Additional gains in rate beyond that point are unlikely to improve reading comprehension (O'Connor, 2018). There are additional concerns with repeated reading. It can perpetuate problematic messages to students that reading fluency is about "speed," or about how best to improve their reading (Kuhn et al., 2010). In repeated reading, only a small portion of text is practiced, thus limiting students' exposure to varied words and vocabulary. The repetitive nature of repeated reading can make the task burdensome to students; anyone who has implemented repeated reading will attest that the third or fourth reading of the same passage can elicit boredom, and in some cases, refusal. Motivational strategies and incentives for effort and persistence can help. If repeated reading is paired with instruction that targets the reasons for the student's lack of fluency, most commonly their difficulties in reading words efficiently, then repeated reading is likely to be an appropriate part of the student's intervention. It will provide the repetition and opportunities for reading text that are necessary for developing automaticity, while the word-level instruction should address the gaps in decoding skills. It is also very appropriate to practice reading text while word-level accuracy issues are being addressed, provided that the text used for practice is within the student's instructional level and a skilled reader is present to provide feedback. Although repeated reading is a viable strategy when used the right way, there are alternatives that may be even more attractive, and perhaps better aligned with research and theory on the development of reading proficiency.
WIDE READING AND OTHER ALTERNATIVES TO REPEATED READING
Scholars have investigated alternative text reading practice strategies to address concerns with repeated reading. In particular, these approaches use non-repetitive reading strategies. Wide reading and continuous reading involve having students practice reading on a non-repetitive basis, meaning that they read text aloud for the same amount of time they would in a repeated reading session. However, instead of repeatedly reading the same small portion of text, they continuously read new text. This may involve reading several different shorter passages of text, or reading a longer text for the same amount of time that repeated reading would be conducted. For example, a repeated reading session might have a student read the same 100-word passage four times. In contrast, a wide-reading session would have the student read four different 100-word passages, and continuous reading would have the student read one 400-word passage.
Studies have found that wide and continuous reading are associated with effects on reading outcomes comparable to those of repeated reading (Ardoin, Binder, Zawoyski, et al., 2013; Ardoin, Binder, Foster, et al., 2016; Homan et al., 1993; Rashotte & Torgesen, 1985; Wexler et al., 2008). Ardoin and colleagues (2016), in a randomized controlled trial, assigned fourth graders to either a repeated reading condition (students read the same portion of text four times per session), a wide reading condition (students read continuously for the same amount of time that students read in repeated reading), or a business-as-usual control condition. Results indicated that all students, especially the lowest achievers, made significant gains across measures of reading skills, including oral reading, word identification, and passage comprehension. Although group differences were not statistically significant, students in the two intervention conditions (repeated reading and wide reading) grew more on the outcome measures than students in the control condition. More interestingly, there were no differences between the repeated reading and wide reading conditions—students in both conditions grew a similar amount across the measures. Additional analyses of reading prosody (i.e., voice inflection and pitch while reading, which is influenced by comprehension) and eye-tracking (i.e., measuring how long students' eyes fixated on words and portions of text) revealed similar benefits of both repeated reading and wide reading, with one exception: Students in the wide reading condition demonstrated improvements in a reading prosody measure of expressiveness, while students in the repeated reading condition demonstrated decreases in expressiveness. Because the repeated reading condition involved a timing component, the authors speculated that this condition may have tacitly encouraged students to try to read faster, but not necessarily with more understanding. Another variation was investigated by Reed et al. (2019). In a study with fourth graders spanning a range of reading skill levels, they randomly assigned students to either a repeated reading condition, in which students read the same passage three times each session, or a varied practice condition, in which students read three different passages each session and 85% of unique words (i.e., words counted only once) overlapped across the three passages. Although most of the words overlapped, the three passages that students read each session in the varied practice condition were allowed to differ in terms of genre, perspective, or approach to writing. Students in both conditions practiced reading in a peer-tutoring format. Results indicated statistically significant effects in generalized reading efficiency that favored students in the varied practice condition over students in the repeated reading condition, with an effect size of 0.13 (considered educationally meaningful). Students in the low to middle achievement range benefited the most. Reed et al. interpreted their findings from a statistical learning perspective (see Chapter 2 for more in-depth discussion), where the varied practice condition exposed students to multiple opportunities to read words in different contexts. This variation is central to statistical learning and may be a reason for stronger outcomes in the varied practice condition.
SUMMARY: PASSAGE READING PRACTICE STRATEGIES
The most important aspect of text-reading interventions is that they involve a lot of practice reading instructional-level text with some form of supervision from a teacher or skilled reader to provide feedback. The development of reading efficiency (and reading proficiency in general) depends on many, many exposures to words, preferably in context (i.e., connected text). Reading in context is important because it helps readers see how words work together and how they are used to convey meaning. Meanings of
words are reinforced and expanded, and new meanings are learned, when reading in context. Good readers read more, and by reading more they become better readers by expanding their word recognition skills to new words, adding new vocabulary knowledge, and building background knowledge. Reading in context also supports the development of skills in reading irregularly spelled words and words with alternative pronunciations, such as correctly recognizing when read is pronounced as "reed" versus "red." These experiences encountering words in text build a student's lexical quality (Perfetti, 2007)—they establish stronger representations and links from a word's spelling to its pronunciation and, simultaneously, to its meaning. This section was somewhat critical of repeated reading as an approach to improving text reading. One thing that repeated reading gets right is that it increases the amount of time students read aloud to a skilled reader. What is not so right is the expectation that repeatedly reading a small portion of text (one naturally bound by a limited corpus of words within it) will naturally transfer to gains in reading fluency in other text. Research to date indicates compelling findings for non-repetitive strategies such as wide and continuous reading. Studies so far have indicated that they result in comparable (and in some cases better) outcomes on generalized measures of reading compared to repeated reading. Wide reading naturally exposes students to more words, which often means more variation in words and broader vocabulary. From the perspective of statistical learning, which is thought to be involved in reading acquisition (Arciuli, 2018), perceiving the statistical regularities of word spellings and meanings requires sufficient variation in words and contexts. Indeed, Ardoin et al. (2016) calculated that, although students in their intervention conditions read for the same amount of time, students in the wide reading condition saw 220% more unique words compared to students in the repeated reading condition. Therefore, because evidence to date suggests that non-repetitive strategies are at least as effective as repeated reading, wide reading techniques are arguably a better recommendation. This is not to say that repeated reading is a bad recommendation; students will likely improve in their reading if they practice it. But it speaks to the need to consider repeated reading with regard to the individual needs of a student. Repeated reading is a way to build automaticity in reading words and spelling patterns targeted in instruction, provided the passage contains a high proportion of such words. However, wide and varied reading practice approaches are more consistent with the evidence and theory of how reading proficiency develops, they may elicit less boredom and burden than repeated reading, and they convey fewer problematic messages about what fluency entails and how to become a better reader. Of the non-repetitive reading approaches discussed here, the continuous reading approach used by Ardoin et al. (2016), in which the student reads continuously through a single longer passage, has some practical benefits; the varied practice condition implemented by Reed et al. (2019) required considerable time and resources to develop sets of different reading passages that contained a certain percentage of word overlap. Overall, the most important point we hope readers take from this discussion is the importance of thinking about text reading in the correct manner.
Reading efficiency is not just a score on an oral reading fluency measure; it is the outcome of successfully integrated basic skills and practice that allow words to be read with ease and automaticity. Problems with reading efficiency demand interventions that address why reading is disfluent and provide ample practice opportunities to improve. The focus should be on helping students become better readers, not faster readers.
Summary of Overall Intervention Considerations for Students with Basic and Advanced Word and Text Reading Difficulties
Students who struggle to read words and text with accuracy and efficiency make up the most common referrals for school-based evaluation personnel. Across the preceding sections, we described a number of strategies and intervention approaches that, based on the evaluator's data-driven hypothesis, should be useful for meeting their needs. To wrap up, we offer the following set of considerations and recommendations for designing interventions, consistent with theory and evidence for students with basic and word-level reading difficulties:
• Students with significant difficulties reading words often require remediation of foundational decoding skills, which may include phonemic awareness, addressing knowledge gaps in linking sounds with letters and letter combinations, and using that information effectively to read words.
• Word-level reading difficulties (in terms of low accuracy and efficiency) will likely be the primary reason for a student's difficulties in reading fluency.
• Intervention for word-level difficulties should address the student's decoding problems and consider teaching a set of useful decoding strategies, beginning with sounding out. Students in middle elementary grades will often require strategies for reading longer, multisyllabic words.
• Intervention for word-level reading difficulties should usually include (1) instruction addressing the decoding problem; (2) isolated word reading and spelling practice with words and spelling patterns relevant to decoding instruction, using words in lists or flashcard drills, to practice targeted decoding strategies and build word-level accuracy and efficiency; and (3) practice reading connected text to promote generalized reading fluency.
• During intervention for a student with word or text reading difficulties, closely monitor the student's error rates in addition to other progress monitoring metrics (e.g., oral reading fluency). If accuracy does not improve over time, the intervention should be adjusted. Few things are more detrimental to reading comprehension than decoding errors; even with 90% accuracy, comprehension can be significantly impaired (Rodgers et al., 2018). Strive to improve reading accuracy. Rate-focused interventions that ignore accuracy are misguided.
• The aim of word and text reading instruction and practice is to make reading easier (i.e., more efficient), not "faster." Avoid confusing intervention with how reading proficiency is measured.
III. INTERVENTIONS FOR READING COMPREHENSION AND CONTENT-AREA READING
In this section, we review intervention options for improving students' reading comprehension. We include reading comprehension strategies, as well as approaches that address language and knowledge sources fundamental to reading comprehension. These approaches may be appropriate for students with reading difficulties that are specific to comprehension (i.e., they appear to have adequate skills that allow them to read text with accuracy and reasonable efficiency), or they may be used in conjunction with interventions addressing word reading difficulties.
As discussed in Chapter 2, reading comprehension is a complex construct that should be thought of less as a "skill," and more as the efficient orchestration of word reading efficiency, automatic integration of language and background knowledge with the contents of the text, and maintenance of attention and motivation. Thus, interventions for students with reading comprehension difficulties may require multiple components. The specific skills that should be targeted will be driven by the evaluator's assessment-based hypothesis. The most common approaches to reading comprehension intervention have involved comprehension strategies, which include questioning techniques, self-monitoring of comprehension, main idea/summarizing strategies, and graphic organizers (e.g., story/concept maps). Meta-analyses of intervention studies have observed moderate to large effect sizes on improved reading comprehension using comprehension strategies (e.g., Berkeley et al., 2010; Edmonds et al., 2009; Scammacca et al., 2015; Solis et al., 2012; Swanson et al., 2014). However, these effects are largest on tests developed by the research teams, which tend to have some alignment with the intervention or the content used in instruction. Meta-analyses have consistently observed that effects are lower on standardized, norm-referenced tests of reading comprehension. This does not necessarily mean that the interventions studied or the comprehension strategies they used are ineffective; researcher-developed tests should be considered valid indices of effects, and standardized measures may not always measure relevant skills (Clemens & Fuchs, 2022). However, the differential effects of reading comprehension interventions have led some scholars to question whether interventions have been targeting the right things (Compton et al., 2014; Elleman & Compton, 2017; Kamhi & Catts, 2017). Specifically, these perspectives have questioned whether interventions have sufficiently targeted skills and knowledge sources that make comprehension possible: vocabulary and linguistic comprehension, background knowledge, and inference making. Reasons for their importance are clear: The best comprehension strategy ever created will not improve comprehension if students lack the linguistic and background knowledge needed to understand the text. We discuss common strategies here, followed by considerations on balancing them with instruction that builds language and background knowledge.
Reading Comprehension Strategies
Reading comprehension strategies are a category of approaches that teach students techniques to better understand and learn from print.
Summarization and Main Idea Strategies
The ability to summarize a text or identify its most important details, messages, and conclusions (i.e., main ideas) requires monitoring one's comprehension and filtering important aspects from irrelevant details. Good readers do this rather readily, but these are areas of difficulty for students who struggle in reading (Duke & Pearson, 2011; Gernsbacher & Faust, 1991). Thus, several approaches have taught students strategies for summarization and main idea identification, and a meta-analysis by Stevens et al. (2019) found that instruction in these areas is generally effective in improving reading comprehension. Main idea identification and summarization are great places to start with a reading comprehension intervention because they involve a straightforward way of conceptualizing the "comprehension" of a text. Students will often already be familiar with the concepts of summarization or main idea identification (or they can be easily explained),
thus making it easier to introduce the strategies (Carnine et al., 2017). They help foster metacognition in reading and provide students with skills important for supporting their learning across skill domains, especially content areas like social studies and science. Jitendra and Gajria (2011) provided a description of approaches for teaching main idea identification and summarization. Both involve starting with short paragraphs that are appropriate for the student's reading level. For main idea instruction, initial paragraphs should have an easily identifiable main idea with one main subject and clear action. Across a series of sessions, main idea identification instruction involves naming the subject and describing or characterizing the action; selecting main idea statements from a set of choices (i.e., multiple-choice answers); and having students build their own main idea statements that omit irrelevant details and describe important details as relevant, such as where, why, how, and so on. Intervention can include prompts and notations within paragraphs to help students identify the important subject (i.e., who or what) and main actions. Students can also practice crossing out irrelevant details. Narrative (fiction) paragraphs are a good place to start because main idea identification is easier, but later lessons should include informational/expository paragraphs, and eventually work up to identifying the main idea from across multiple paragraphs (Jitendra & Gajria, 2011). A text summary distills a passage into a few sentences and omits irrelevant or extraneous details. Teaching summarization skills is easier after students have learned to identify the main idea. Beginning with simple texts on paper that students can mark and notate, Jitendra and Gajria (2011) recommended teaching students a set of summarization rules that include reducing content to lists and using terms to represent categories or groups of things; crossing out repeated information and unimportant details; and selecting or writing topic sentences for each paragraph (using main idea skills). Then, applying this information and their marked text, students learn to gather the information from across the paragraphs to write a summary statement that consists of a few short sentences. Shelton et al. (2020) provided another resource for teaching main idea identification and text summarization skills. They described an approach called get the gist, which involves teaching students to identify the most important who or what in a section or paragraph, to identify the most important information about that who or what, and to use that information to derive a "gist" statement that is between 8 and 13 words long. Shelton and colleagues offered graphic organizers to support instruction and described instruction that uses gist statements to help write text summaries.
Building Knowledge of Text Structure
The texts that we read often have a general structure. Narrative texts typically involve a sequence of elements such as a description of the characters or setting, the conflict, rising action, the climax of the story, falling action, and a resolution. Expository and informational texts are structured differently and may include one or more of the following structures: cause and effect, compare and contrast, description, sequence, position and reason, or problem and solution. Some informational texts might be written more like narrative texts, such as biographies or the retelling of a particular event. Why should we care about text structure? There is evidence that awareness of text structure and ability to identify text elements are associated with stronger reading comprehension (Hebert et al., 2016). Readers who are familiar with narrative text structure are more likely to anticipate that some sort of problem or situation will occur in the story, which helps them make better sense of these kinds of events when they are introduced.
They are also more likely to anticipate that rising action is approaching, and that the action will likely lead to some sort of climax, followed eventually by some sort of resolution. Similar effects can occur with expository or informational texts; when readers can identify that the text is describing a cause-and-effect relationship, the connection between the cause and effect described in the passage may be clearer. This familiarity with text structures and anticipation of upcoming information may make it easier to assimilate incoming information. In other words, the text environment is more familiar and predictable. Thus, knowledge of and familiarity with text structure are thought to aid reading comprehension by creating a schema, or mental framework, in which to fit and match incoming information and thereby make it easier to understand (Elleman & Compton, 2017; Hebert et al., 2016; Roehling et al., 2017). Teaching knowledge of text structure may be particularly important for improving comprehension in expository and informational texts (Elleman & Compton, 2017). Narrative text structures are usually more familiar to students because they have been exposed to stories for most of their lives, which is one of the reasons why reading comprehension of narrative texts tends to be stronger compared to informational texts (Best et al., 2008). However, students are far less familiar with expository text structures. Thus, building awareness of the various ways that expository and informational texts are structured may help improve students' reading comprehension in texts that are more difficult and more common in school settings, especially as grade levels increase. Roehling et al. (2017) provided a description of procedures for teaching expository text structures and discussed instructional sequences, signal words, and graphic organizers useful for learning different types. Graphic organizers are commonly used to teach text structures; they provide visual scaffolds for learning the different types and for organizing information within them. Instruction also includes teaching students signal words that alert them to the type of text they are reading. For example, words or phrases such as because, due to, or as a result of can signal that the text is likely describing a cause-and-effect relationship. Words such as before, afterward, or during can signal that the text describes a sequence or chronology of events. See Roehling et al. (2017) for more detail and examples. Idol (1987) and Idol-Maestas and Croll (1987) described story mapping, a graphic organizer procedure, to teach text structure and build comprehension in narrative text. The procedure involves bringing the reader's attention to important and interrelated parts of the passage. Students are taught to organize the story into specific parts, including the setting, problem, goal, action, and outcome. Figure 6.4 illustrates the story map. Idol-Maestas and Croll used the procedure for students with learning disabilities and found significant improvements in comprehension, as well as maintenance of comprehension levels after the story-mapping procedure was discontinued.
Similar positive outcomes for story mapping and use of graphic organizers have been reported by several investigators (e.g., Babyak et al., 2000; Baumann & Bergeron, 1993; Billingsley & Ferro-Almeida, 1993; Boulineau et al., 2004; Davis, 1994; Gardill & Jitendra, 1999; Mathes et al., 1997; Taylor et al., 2002; Vallecorsa & deBettencourt, 1997).
FIGURE 6.4. Form for completing story-mapping exercises. From Idol (1987, p. 199). Copyright © 1987 PRO-ED, Inc. Reprinted by permission.
Questioning Strategies
Questioning techniques (also referred to as self-questioning and question generation) have been used to promote students' self-monitoring while reading, more careful reading, and independence in learning from text. Questioning strategies involve
teaching students to ask and answer a set of questions during and after reading. Although some approaches have used questioning before reading, others emphasize having students generate questions while reading (Stevens et al., 2020), which often makes the best use of instructional time and focuses students' attention on what is relevant to the text. Questions may be provided by the teacher or generated by the student. Strategies have involved teaching students a set of signal words that represent the first word in a question stem about the text: who, what, where, when, how. Ideally, intervention should teach students to orient questions according to what will be important to learn from a text, and the reason for reading. Research reviews have observed positive effects of questioning strategies on students' reading comprehension (Rosenshine et al., 1996; Joseph et al., 2016; Stevens et al., 2020). Stevens and colleagues (2020) reviewed question-generation strategies for supporting reading comprehension in social studies and science texts, with instructional steps and examples. Their process includes setting a purpose for reading, modeling question generation for students using think-aloud procedures, and setting stopping points in a text to prompt question generation. Students often have difficulty locating answers to questions, especially questions posed by a teacher or test. The question–answer relationship (QAR) strategy (Pearson & Johnson, 1978) is aimed at helping students categorize questions and locate information in the text to answer questions. With QAR, a student is taught that answers to questions may be found in the following ways: (1) "right there" in a single place (i.e., a single sentence) in the text; (2) when the student "puts it together" using pieces of information across multiple locations or sentences in the text; (3) between the "author and you," meaning the answer requires an inference about the author's meaning or intent; or (4) "on your own," meaning the answer must be derived from the student's background knowledge (Green, 2016). QAR has been commonly used across questioning studies (Joseph et al., 2016). Graham and Wong (1993) taught students a self-instruction technique to identify relationships in the text that were implicit, explicit, or script implicit. Using a question–answer method, students were taught a variation of QAR called Here, Hidden, and In My Head (3H) to cue themselves to find answers to comprehension questions. In one of the conditions, a self-instruction procedure was utilized to teach students how to use the 3H intervention. Results of the study showed that self-instruction training was more effective than a didactic teaching technique in improving comprehension and maintaining performance over time.
Comprehension Monitoring
Good readers monitor their comprehension in most instances. When their reading comprehension breaks down, they may slow down, back up, and reread previous portions of text to "fix up" their comprehension. Struggling readers do this considerably less often; they frequently fail to recognize when their comprehension degrades or breaks down, and if they notice, they often lack strategies for repairing it (Kinnunen & Vauras, 1995). Metacognitive techniques can improve students' monitoring and active attention to what they are reading. The self-questioning techniques described above are one method of improving comprehension monitoring. Another strategy, click or clunk, is part of Collaborative Strategic Reading (CSR; Klingner & Vaughn, 1998), an evidence-based peer-mediated approach targeting multiple reading comprehension strategies (Boardman et al., 2016; Vaughn et al., 2001, 2011). In click or clunk, students are taught to self-monitor as they read, thinking about what they know and do not know. Clicks are defined as
sentences and elements of texts that students "get," meaning that the words, concepts, and ideas make sense. Clunks are those occasions in which comprehension breaks down, which may include words the student cannot read or does not understand, or concepts and ideas that do not make sense. Instruction involves first teaching students to actively monitor their comprehension to identify clicks and clunks and become so-called "clunk detectors." Students are provided with multiple opportunities to read text and identify clicks and clunks. In some cases, texts can be offered that strategically insert problematic words or concepts to trigger clunks, thus providing checks on students' understanding and creating teaching opportunities. Once students can identify clicks and clunks, they are taught a set of fix-up strategies that involve rereading, reading the sentence without a problematic word, reading sentences before and after the clunk, and using decoding strategies. These strategies are written on "de-clunking cards" and used as reminders when students reach other parts of the passage where their comprehension breaks down. Additional details on click or clunk, and CSR in general, can be found in Klingner and Vaughn (1998) and Vaughn et al. (2001).
Interventions Beyond Comprehension Strategies: Vocabulary, Background Knowledge, and Inference Making
Scholars have raised questions about the use of comprehension strategy instruction (Catts, 2022; Catts & Kamhi, 2017; Compton et al., 2014; Elleman & Compton, 2017; Kamhi & Catts, 2017). It is not that reading comprehension strategies are bad—most generally agree that some form of comprehension strategy instruction should be part of a reading comprehension intervention—rather, they caution about what is not being taught when interventions exclusively focus on strategies. Specifically, some have argued that it is a problem if strategy instruction is implemented at the expense of instruction and opportunities to build the vocabulary and background knowledge that are essential for reading comprehension (Catts, 2022; Elleman & Compton, 2017). Elleman and Compton (2017) recommend the following areas that a next generation of reading comprehension interventions should emphasize: (1) building knowledge across grades, including the use of media (video, audio, interactive software) to help build students' knowledge; (2) purposefully selecting texts that help students build knowledge; (3) teaching content-related vocabulary, including teaching students strategies to infer word meanings from the text and morphological analysis; (4) making greater use of discussion (i.e., discourse) and critical thinking about nonfiction texts; and (5) using graphic organizers to teach text structure and conceptual frameworks, including signal words that help students identify text structure. Although all are potentially important, we provide additional discussion on the following: building knowledge and content-related vocabulary, and selecting texts to help students build and integrate their knowledge. We also discuss the role of instruction in building students' ability to make inferences while reading.
Supports and Interventions for Vocabulary and Background Knowledge to Support Linguistic and Reading Comprehension
In Chapter 2, we discussed the fundamental roles that language plays in reading development, including its influence on phonological awareness, word reading acquisition, reading fluency, and reading comprehension. At the heart of language proficiency is vocabulary knowledge (i.e., knowledge of word meanings and how they are used). Language comprehension, and reading comprehension by extension, cannot occur if word meanings
are unknown, and even a small number of unknown words in a passage will significantly disrupt comprehension (Schmitt et al., 2011). Vocabulary knowledge is also closely tied to background knowledge (Ahmed et al., 2016), which is another key aspect of reading comprehension (Kintsch, 1988), because having knowledge of a topic often depends on understanding the terminology pertaining to that topic. The close connection between vocabulary knowledge and background knowledge is good news from an intervention standpoint: It means that an intervention that teaches important vocabulary terms and phrases for understanding academic texts also helps establish background knowledge. Vocabulary knowledge is acquired through listening, reading, and instruction. Typically developing readers acquire a great deal of vocabulary knowledge from reading, but the reading comprehension problems of struggling readers, combined with the fact that they read considerably less, make reading an unreliable source of vocabulary acquisition for students with reading difficulties. Additionally, children come to school with profound individual differences in the amount of language they have been exposed to. Many students arrive in the classroom with little prior experience with the language or dialect used by their teachers. Therefore, vocabulary instruction and intervention may be an important component of academic support for many students. The following strategies are relevant for integration within reading interventions, as well as for use as components of mathematics and writing interventions.
VOCABULARY INSTRUCTION AND INTERVENTION
Vocabulary intervention can be successful for students of all ages, and effective approaches to intervention are similar across age ranges. A commonly adopted framework for vocabulary instruction is Beck and colleagues’ (2013) robust vocabulary instruction, based on studies of vocabulary instruction and acquisition. Their framework describes a process for teaching a large set of highly relevant vocabulary terms across a period of time, whereby 6 to 10 new words are targeted each week. The general process and activities are as follows, beginning with the strategy for selecting words. Word Selection: Determining What Words to Teach. Reading instruction programs may include lists of key vocabulary to target, but other times, instructors must generate this list themselves. The primary recommendation of vocabulary instruction experts is that words should be important and highly useful within the academic content that students will be exposed to in the coming weeks and months of instruction. Some refer to these types of words as academic vocabulary because they are highly common in texts that students read and the language that teachers use. These words might also be referred to as Tier 2 words (Beck et al., 2013). Within this perspective, Tier 1 words are words that are common in everyday language and with which most students will be familiar. Tier 3 words refer to very rare, seldom used, and antiquated terms. Tier 2 words are situated in the middle—they are words that are infrequent in everyday speech, but common in texts that students read, achievement tests, teachers’ language in instruction, and language used in other forms of instructional media. An important point, however, is that Beck and colleagues’ concept of Tier 2 words is that they must also be of high utility, meaning they are important for comprehension and used in a number of contexts. Beck and colleagues also point out that the words chosen for instruction should be those for which students have at least some sense of the general concept behind them, or knowledge of more commonly used synonyms. A new vocabulary term should not
represent a concept that is completely new or foreign to the students. This facilitates being able to teach and explain the meanings of the new terms. For example, most students will understand what it means to guess, which will help them learn what the word estimate means when it is taught. Academic vocabulary terms often have multiple meanings, some of which are more common or relevant to an academic context. For example, students may understand the word system in reference to their video game system; however, system has a somewhat different meaning when considering the vascular system in the human body, an ecosystem, or an economic system. Thus, vocabulary intervention may involve teaching meanings of words that are unique to academic contexts as opposed to everyday use. There are several resources for selecting academic/Tier 2 vocabulary words for instruction. Some of the most important resources are the student's textbooks in reading/language arts, science, social studies, and even mathematics. Textbook glossaries, unit previews, and curricular scope and sequence materials will be good sources for identifying words important for reading comprehension. Other general resources are available, such as Coxhead's (2000) Academic Word List and the University Word List (Xue & Nation, 1984). Gardner and Davies (2014) sought to improve on previous academic word lists. Using a corpus of over 425 million words, they identified the top 500 words most commonly used in academic texts across multiple disciplines and genres. They excluded general high-frequency words that most students are likely to know and highly technical words that occur only in a specific discipline. All 500 words are listed in their article. Instructional Principles and Weekly Structure of Robust Vocabulary Instruction. Several principles of Beck and colleagues' (2013) robust vocabulary instruction offer important considerations for designing vocabulary interventions. Teaching vocabulary by focusing on dictionary definitions is problematic (Carnine et al., 2017; Beck et al., 2013). Dictionary definitions are as short and concise as possible; as a result, they can be vague and difficult to differentiate from the definitions of other words, and they provide little context for understanding how the word is used and applied in language. A more practical approach is to teach the meaning of a new word with a student-friendly definition (Beck et al., 2013). A student-friendly definition is still brief and concise so that it is easy to learn, but unlike a dictionary definition, a student-friendly definition (1) explains the word in simple, everyday language, and (2) communicates what is unique and important about the word, including when and how it is used. Beck et al. (2013) use the example of the word controversy. Whereas the dictionary definition will refer primarily to a disagreement, a student-friendly definition will capture the contentious nature that is important to the term. Initial instruction of new vocabulary (about 6 to 10 words) takes place on the first day of the week. The words are displayed on the board or on individual cards. Instruction involves providing student-friendly definitions of the words. Students should understand the definitions, but are not expected to memorize them. This is a key point. Rarely, in conversation or in reading, are we asked to define a word. Rather, we understand its meaning when it comes up in conversation, instruction, or text.
With this in mind, after a student-friendly definition is provided, the teacher engages in interactive activities aimed at fostering students' ability to (1) understand the word when it is used in speech or in text, (2) evaluate when it is used appropriately or not, and (3) use the word themselves in conversation or in their writing. Beck et al. (2013) provided descriptions of the various activities that can be used to build this type of knowledge and understanding.
INTEGRATING BACKGROUND KNOWLEDGE AND CONTENT-RELATED VOCABULARY
We have discussed at length the importance of vocabulary and background knowledge for reading comprehension, and how the two are intertwined. Unfortunately, a predominant use of comprehension strategies in practice suggests that practitioners may not recognize how essential vocabulary and background knowledge are for comprehension, and that comprehension strategies are of little use if students do not understand what the words, concepts, or ideas in the passage mean. An additional aspect that often goes unrecognized is the importance of vocabulary and background knowledge for inference making (i.e., "reading between the lines"). For instance, consider: "After glancing at the forecast, John grabbed his umbrella and headed out the door." Understanding this sentence, and why John grabbed his umbrella, first depends on understanding what glancing, forecast, umbrella, and other words mean. A reader with knowledge that a forecast refers to a prediction about the weather, and that umbrellas are used to protect one from the rain, can infer that the weather forecast called for rain, and that John would have to be outside at some point during the day. These details are inferred because the passage does not mention weather or rain in any form. Background knowledge is another area in which systemic inequities play a role. Students come to school with significant individual differences in the opportunities they have had to build knowledge relevant to academic texts and teachers' content-area instruction. Children and youth from historically marginalized communities may experience a mismatch between their background knowledge and the knowledge required to fully comprehend text and instruction in school. Strategies to build relevant background knowledge are a way to ensure that all students have opportunities to learn. The strategies for targeting vocabulary knowledge described earlier, especially robust vocabulary instruction (Beck et al., 2013), are certainly applicable here. Additionally, vocabulary building is something that should extend across all grade levels and be integrated within reading intervention. The same can be said for overall background knowledge—students should be exposed to topics and knowledge sources that will be important for understanding and learning from text. A focus of this instruction should be expository and informational texts, teaching content-area vocabulary and knowledge applicable to science and social studies, subject areas in which reading comprehension is difficult for students and in which foundational knowledge and vocabulary will benefit achievement in subsequent coursework. Swanson and colleagues (2021) provided an example. The researchers trained fourth-grade teachers, in the area of social studies, to (1) teach relevant background knowledge essential for an upcoming text and unit using illustrations; (2) teach high-utility social studies vocabulary relevant to the upcoming lessons; (3) utilize text-based discussions during reading by asking questions about the text; (4) teach students to identify the main idea of a text (i.e., "get the gist"); (5) review and connect the target vocabulary after reading and discuss how it was used in the passage they read; and (6) teach students how to write a summary of the text using the gist statements they generated while reading. Swanson et al.
(2021) found that compared to students who received business-as-usual control instruction, large effects favored students in the knowledge-focused condition on outcome measures of social studies content knowledge, relevant vocabulary, and reading comprehension in social studies content. A key aspect of the intervention was the focus on building students’ background and vocabulary knowledge to support their comprehension of social studies texts.
In addition to direct instruction, Elleman and Compton (2017) recommended other approaches to building students' knowledge. Greater opportunities to read expository and informational texts from diverse subject areas, with support and text-based discussions, help expose students to new vocabulary and conceptual knowledge. These texts should be purposefully selected for their potential to build knowledge. In addition to text, multimedia (films, videos, podcasts, internet content) are attractive ways to build knowledge and vocabulary because they are engaging and, in some cases, interactive. Share the content with students before reading—foster the knowledge first—then, during reading, support students' integration and expansion of knowledge with the content they are reading. Key prompts and questions by the teacher are critical to help link the content students learned initially with what they read in the text.
INFERENCE-MAKING INTERVENTIONS
Reading will nearly always involve making inferences because authors almost never explicitly provide every detail. If they did, reading would be incredibly boring and exhausting. In fact, in fiction writing, it is often more satisfying to read authors who give the reader just enough information, allowing them to infer the rest. Inference making is involved in all forms of reading, text types, and genres, even in children's books, and can be so subtle that readers rarely even notice the inferences they are making. Just the simple act of connecting a pronoun with its referent involves an inference. Good readers more readily make inferences, given their automatic word reading, which poses no barriers to higher-order cognitive awareness and reasoning; deep and broad vocabulary knowledge, so that the correct word meanings are automatically processed for the context; and a warehouse of knowledge about the world and experience with a variety of situations to draw from. As they read more, their inference-making skills also allow them to infer the meanings of new words, thus building their vocabulary knowledge, and to connect new knowledge to existing knowledge (Cain & Oakhill, 2011). Poor readers, in contrast, given difficulties in any one (or several) of these skill and knowledge areas, face a number of hurdles to effective inference making. This is another reason why good intervention to support reading comprehension focuses on building and strengthening the skills and knowledge sources that make comprehension possible. But having the knowledge is only part of the puzzle. Barnes et al. (1996) taught students about a fictional planet with facts about its geography and inhabitants using vocabulary familiar to all the students. All students mastered the knowledge base. However, despite having equivalent levels of background knowledge and vocabulary relevant for reading, poor comprehenders struggled to make inferences when reading. Similar findings were observed by Cain et al. (2001), who found that compared to poor comprehenders, better comprehenders were better able to activate and utilize knowledge when reading. Therefore, just as important as building knowledge is instruction that helps students integrate existing knowledge with information in the text. Improving inference making through intervention is possible. Elleman (2017), in her meta-analysis, found that inference-focused instruction was associated with an average effect size of 0.58 (i.e., strong and educationally meaningful) on students' overall reading comprehension, and an even larger average effect size of 0.68 on students' inferential comprehension specifically. Interestingly, inference-making instruction with struggling readers improved not only their inferential comprehension, but also their literal comprehension. Most of the interventions were multicomponent in nature, and many components overlapped across studies, making it difficult to draw conclusions regarding
which approaches were most effective. However, Elleman noted that several interventions provided explicit instruction to students on finding pertinent information in a text and integrating it with prior knowledge, which she speculated may have been particularly important for struggling readers given their need to read more actively and attend to important details as they read. Thus, it bears repeating: Students must build vocabulary and background knowledge, but instruction should also show them how to integrate and use that information to better comprehend text.
Summary: Reading Comprehension Interventions

Reading comprehension strategies can and should be used as a component of an intervention for students with reading comprehension difficulties when assessment indicates they are needed. However, strategy instruction should not be implemented at the expense of opportunities to build the vocabulary and background knowledge essential for understanding text, or of instruction that shows students how to integrate that knowledge to make inferences and connect information within and across texts. It is also worth reiterating that persistent difficulties with reading words and text with accuracy and efficiency will impede reading comprehension. If difficulties in word and text reading are present, intervention should continue to address those deficits. Reading comprehension can be targeted as part of the practice opportunities the student is given to read connected text, but improvements in reading comprehension will not be appreciable if students continue to struggle to read words. That leads to perhaps the most important point about reading intervention—it should be aimed at making reading words effortless for students, which will address the majority of reading difficulties encountered by school-based practitioners.
Identifying Packaged Evidence-Based Reading Intervention Programs

There are also a number of packaged intervention programs with research support for their efficacy. Space does not allow a description of each, but four resources are useful for identifying reading intervention programs and reviewing their evidence base.
1. National Center on Intensive Intervention (NCII; intensiveintervention.org). The Tools Charts maintained by the NCII summarize evidence for interventions in reading, mathematics, and behavior. Their Academic Intervention Chart can be filtered by reading interventions and grade range (PreK, elementary, middle school, high school), providing links to program descriptions, indicators of the quality of the evidence submitted for each program, effect sizes for targeted skills (T) and broader reading outcomes (B), and whether available data are disaggregated by student subgroups. Clicking the tabs at the top of the chart provides a summary of evidence on other indicators, intensity (group size, duration, interventionist requirements, and training requirements), and other research summarizing each program’s effects. The chart lists both commercially available programs, such as Sound Partners, and individual strategies, such as incremental rehearsal. The NCII website also offers a number of high-quality resources and professional learning modules on intervention, assessment, and progress monitoring.
2. What Works Clearinghouse (WWC; ies.ed.gov/ncee/wwc). The WWC applies a rigorous review process to programs and practices in education. Listings are available for
reading intervention programs under “Literacy,” and clicking each program provides a description, indicators of the quality of evidence available, and estimates of effects.
3. Evidence for ESSA (evidenceforessa.org) is linked to the WWC and provides indicators of effectiveness for various programs, with specific lists for reading programs.
4. IRIS Center (iris.peabody.vanderbilt.edu) provides evidence-based practice summaries including programs and practices for reading. The IRIS Center website is also a valuable resource for training and professional development resources on assessment, academic intervention, behavior and classroom management, and progress monitoring.
INTERVENTIONS FOR MATHEMATICS

In this section, we discuss evidence-based intervention strategies and approaches for mathematics, organized according to the keystone domains discussed in Chapters 2 and 4. We discuss evidence-based principles of interventions for students with mathematics difficulties that are applicable across multiple skill areas, as well as the concrete–representational–abstract framework of instruction. We then describe aligning intervention with the results of an assessment and discuss intervention in three domains: (1) early numerical competencies (i.e., number sense, early numeracy); (2) basic computation and number combinations fluency (i.e., math facts); and (3) interventions in more complex forms of mathematics, including procedural computation, word-problem solving, rational numbers, geometry and measurement, and algebra. We also discuss considerations for integrating support in mathematics language and vocabulary, which has considerable bearing not only on word-problem solving but on other skill areas as well.
Evidence-Based Principles of Effective Mathematics Interventions

L. S. Fuchs, D. Fuchs, et al. (2008) described seven principles of effective practices in interventions for students with mathematics difficulties and disabilities. These principles should be considered when identifying, developing, and implementing interventions.
Explicit Instruction

As in almost all areas of mathematics instruction and intervention, clear, direct, and unambiguous teaching is critical for maximizing learning and acquisition for students with math difficulties. The model–lead–test (“I do, we do, you do”) sequence of explicit instruction described across this text is just as applicable and effective here as it is in other academic skills. An essential part of explicit instruction is using clear and concise mathematics language. The language used in mathematics instruction has unique importance. Hughes et al. (2016) pointed out that for many children, the unfamiliar vocabulary and ways of describing symbols, operations, and strategies may represent a second (or even third) language. This requires teachers and interventionists to carefully consider the language they use in explaining mathematics concepts and strategies. Hughes and colleagues (2016) provided a set of recommendations for adjusting language to make it more direct and unambiguous and to prevent confusion across mathematics skill domains. For example, instead of using terms like bigger number or smaller number, which will not transfer to positive
and negative integers, say “the number that is greater” or “the number that is less,” which retains the mathematical meaning of magnitude. Instead of saying “carry” or “borrow,” which implies procedures, say “regroup,” “trade,” or “exchange,” which communicates the concepts of regrouping and ungrouping 1’s into 10’s, 10’s into 100’s, and so on. Hughes et al. (2016) offered other suggestions for adjusting language to be consistent with mathematics, make mathematics easier to understand, prevent confusion later, and reinforce conceptual understanding.
Systematic Instruction

Systematic instruction refers to a prespecified plan for introducing new skills in a logical, effective sequence. L. S. Fuchs, D. Fuchs, et al. (2008) recommended that mathematics instruction be systematically designed to minimize the learning challenge. Here, skills are carefully sequenced and instruction is presented so that new skills build on previously learned skills, and success and understanding are maximized. The plan should focus on skills that students can apply broadly and transfer to other mathematics situations, and unnecessary explanations or skills are eliminated from the sequence.
Strong Conceptual Basis

In addition to learning mathematics procedures, it is important that students learn the concepts that underlie mathematics problems and their solutions. In Chapter 2, we discussed the role that conceptual understanding plays in facilitating the acquisition of mathematics procedures, flexibility in problem solving, accuracy in solutions, and generalization skills to other mathematics domains. Conceptual understanding begins with basic concepts of whole numbers and extends through all other areas, such as understanding what fractions represent, what multiplication and division mean in addition to learning how to complete those kinds of problems, understanding the concept of mathematics properties (e.g., the commutative property), and accurate conceptual knowledge of mathematical symbols (e.g., the equal sign). Targeting conceptual understanding does not mean avoiding teaching procedures—they can (and should) be integrated. There is no reason to do one and exclude the other.
Drill and Practice

Practice is critical for building proficiency in so many areas, especially in mathematics. L. S. Fuchs, D. Fuchs, et al. (2008) underscored the importance of providing students with regular, multiple opportunities to practice new skills. Practice is necessary for fostering automaticity with recalling answers and fluency with procedures, which directly support success with more complex skills. Practice should always include teacher support and immediate feedback.
Cumulative Review

Intervention should also program for regular and systematic cumulative review, in which previously learned problems and skills are regularly practiced with newly learned skills. Review informs when reteaching or additional practice with certain skills is needed. Previously taught skills and problem types should be rotated so that all are periodically reviewed. Cover–copy–compare (CCC) and incremental rehearsal (described below) are ways to do that.
Include Elements for Maintaining Motivation, Engagement, and Effort

High rates of student engagement are critical for mathematics success, which depends on attention to symbols, changing operations and strategies, and evaluating solutions. Sustained effort is also important, which may be challenging for students with mathematics difficulties who have experienced significant frustration and failure. Interventions should include strategies for maintaining student effort and attention, such as self-monitoring and reinforcement systems (see Chapter 5), as well as strategies that increase motivation, such as team-based contingencies and having students track their progress with “beat your score” elements.
Progress Monitoring

Noted by L. S. Fuchs, D. Fuchs, et al. (2008) as the most essential feature of all intensive intervention, progress monitoring indicates when effective programs should continue, or when interventions should be adjusted or changed to improve student growth. We discuss progress monitoring in depth in Chapter 7.
Additional Consideration for Mathematics Intervention: The Concrete–Representational–Abstract Framework

A common recommendation in contemporary mathematics instruction and intervention is the use of the concrete–representational–abstract (CRA) framework (Agrawal & Morin, 2016; Gersten et al., 2009; Witzel et al., 2008). This framework helps teach conceptual understanding alongside procedural knowledge. Concrete refers to the use of manipulatives or hands-on activities to represent the mathematics concept targeted in instruction, which may include counting chips, base-10 blocks, number lines, and balance scales. Representational refers to depicting the concept or problem with visual representations, such as images of the manipulatives (e.g., base-10 blocks displayed in pictures), tally marks, icons that represent the manipulatives, or graphic organizers. Abstract refers to demonstrating the skill or solving problems using standard numerals and symbols (i.e., the mathematical model). CRA is sometimes referred to as a linear sequence (i.e., first C, then R, then A), but it is more productive to view it as a framework in which elements overlap and are used when needed (Sarah Powell, personal communication, 2021). For example, a visual representation may be paired with the written problem from the start, a teacher may bring back manipulatives when reviewing or reteaching a previously taught concept, or manipulatives may not be used at all if not needed. It should be noted that some conceptualizations simplify the framework to just the visual and abstract aspects (Gersten et al., 2009) because concrete manipulatives or hands-on activities are not applicable, possible, or even necessary with some mathematics skills and concepts. See Witzel et al. (2008) and Agrawal and Morin (2016) for descriptions of how to apply the CRA framework across mathematics skill areas.
Aligning Intervention with Assessment in Mathematics

The identification of mathematics interventions for a student will be driven by the evaluator’s data-based hypothesis, developed and refined across the assessment and any
intervention approaches previously attempted. Although it is possible that an individual intervention plan may warrant strategies in multiple skill areas, we stress the importance of parsimony and focusing on the area of difficulty that is likely the primary reason for the student’s low achievement in mathematics. When the primary problem is addressed, intervention can move on to other skill areas as needed. Figure 6.5 helps connect the keystone model of mathematics discussed in Chapter 2 to assessment-driven intervention decisions. In this chapter, we discuss mathematics intervention in two areas associated with the development of mathematics proficiency; the skill domain that assessment identifies as most likely underlying the student’s difficulty would be the focus of intervention: (1) basic, whole-number skills, which includes approaches for improving basic computation and number combinations fluency (i.e., math facts) and, when needed, addressing skill gaps in early numerical competencies (i.e., number sense, early numeracy); and (2) approaches for intervention in one or more skill domains of mathematics, including procedural computation, word-problem solving, rational numbers, geometry and measurement, and algebra. Interventions for (2) assume that basic computation and number combination skills are adequate, or intervention will target those skills in addition. Underdeveloped mathematics language and vocabulary may be identified as part of the assessment, and strategies in this area can be integrated with interventions in either of the domains.
Interventions and Strategies for Early Numerical Competencies

Early numerical competencies (ENCs; Powell & L. S. Fuchs, 2012) have also been referred to as early numeracy, number sense, and whole-number concepts, and include students’ knowledge and skills with whole numbers with regard to counting, number recognition, and number comparison. Knowledge in this domain is foundational to mathematics development and is particularly implicated in the acquisition of and fluency with calculation skills (Locuniak & Jordan, 2008). ENCs include a constellation of skills; therefore, interventions in this area tend to include multiple activities targeting multiple skills. However, counting skills and number comparison are some of the most central. We reviewed important counting skills in Chapter 2 but provide a brief summary here. One-to-one correspondence involves pairing a number name with an object or item to be counted, and after the item is counted, it is not counted again. Stable order is the understanding that number names must be repeated in the same order whenever a set of items is counted. Cardinality is the recognition that, when counting a set, the last number name spoken is the number of items in the set. Abstraction involves learning that anything can be counted and that the principles listed above apply in all counting situations. Number recognition is also a critical skill in this domain, which is involved in learning to compare numbers in terms of their quantity (e.g., recognizing that 5 is greater than 4).
Versatile Mathematics Models and Manipulatives for Teaching ENCs

Models, manipulatives, and visual representations are useful for teaching skills and concepts across mathematics for students with mathematics difficulties (Peltier et al., 2020). They are especially useful in teaching counting skills and other ENCs. Some of the most common models are shown in Figure 6.6 and summarized here.
[Figure 6.5 is a flowchart linking assessment results to intervention targets. If assessment reveals that difficulties involve basic, whole-number skills (early numerical competencies such as counting, numeral identification, quantity, and place value, or accurate and efficient number combinations fluency, i.e., “math facts”), intervention targets whole-number understanding and any early numerical competency gaps, counting strategies to solve number combinations, practice with number combinations to build accuracy and efficiency, and integrated mathematics language (vocabulary). If difficulties involve the skill domains of procedural computation, word-problem solving and algebraic reasoning, rational numbers (especially fractions), geometry and measurement, or advanced mathematics such as algebra (all supported by language: vocabulary and linguistic reasoning skills), intervention targets the skills of concern that are relevant for later success in algebra, practice to build accuracy and efficiency, and integrated mathematics language (vocabulary).]

FIGURE 6.5. Connecting assessment to intervention in the keystone model of mathematics.
FIGURE 6.6. Examples of models and manipulatives for teaching early numerical competencies: ten frames, number lines, and base-10 blocks.
TEN FRAMES
Ten frames consist of a simple 2 × 5 grid in which students place counting chips in each square (see Figure 6.6). They help support teaching beginning counting skills, including one-to-one correspondence, stable order, cardinality, and subitizing. Students are taught to always begin counting in the upper left square, moving left to right across the rows. Five frames can also be used for students at the very earliest stage of counting. With instruction, the structure of the frame reinforces that counting always goes in order, each counter is counted only once, and the last number counted is the total amount of the set. These concepts are not as easily demonstrated by just counting a cluster of items, where students are more prone to skip some items, double-count other items, and continue counting items repeatedly without realizing they had counted the set. Ten frames provide a framework for making their counting orderly and systematic. Ten frames also help demonstrate the component nature of numbers (e.g., that 4 represents two 2’s or four 1’s). Later, ten frames may be used to demonstrate addition and subtraction concepts within 10, where students can represent addition number combinations with different-colored disks (e.g., 6 + 4 = 10 is represented by six blue disks and four red disks).
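For readers comfortable with a little scripting, the short sketch below is one minimal, hypothetical way to print quick text ten frames for practice cards or worksheets; the function name and the fill symbols are our own and are not part of any published program. It simply fills the top row left to right before the bottom row, mirroring the counting convention described above.

# Minimal sketch: print a text ten frame (2 rows x 5 columns) for a quantity 0-10.
# Filled cells are shown as "X" and empty cells as ".".
def ten_frame(n: int) -> str:
    if not 0 <= n <= 10:
        raise ValueError("A ten frame represents quantities from 0 to 10.")
    cells = ["X"] * n + ["."] * (10 - n)
    top, bottom = cells[:5], cells[5:]
    return " ".join(top) + "\n" + " ".join(bottom)

if __name__ == "__main__":
    print(ten_frame(7))
    # X X X X X
    # X X . . .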
NUMBER LINES

Number lines support instruction in multiple mathematics skills and concepts (see Figure 6.6). Initially, they support teaching and reinforcing ENCs such as one-to-one correspondence, number identification, stable order, and cardinality. Number lines are particularly useful for teaching number comparison skills, as they provide a visual representation of quantity (i.e., the next number to the right is always greater than the previous number). As such, when teaching students to compare two numbers, they can locate both numbers on the line to determine which is greater (or less). Later, number lines help demonstrate the concepts of addition and subtraction, and they provide a scaffold for using counting strategies to solve number combinations.
Skip counting (i.e., counting by 2’s, 3’s, 5’s) can also be demonstrated and taught using number lines, which sets the stage for later helping students understand the concepts of multiplication and division. Even later, teaching negative numbers, and comparisons and operations of rational numbers (i.e., fractions and decimals), is supported by number lines. They are an extremely powerful tool for helping to develop students’ conceptual knowledge and procedural fluency across multiple skill areas.
BASE-10 BLOCKS

Base-10 blocks are used to demonstrate place value (see Figure 6.6). Individual cubes represent 1’s, rods represent 10’s, and squares represent 100’s. Base-10 blocks can be used to first teach the concepts of numbers from 11 to 19 (i.e., one 10 rod and a corresponding number of cubes), two-digit numbers from 20 to 99 (i.e., 47 = four 10 rods and seven cubes), skip counting by 10’s, three-digit numbers, and so on. Along the way, the concepts of adding and subtracting 10 to or from a two-digit number can be represented concretely by adding or removing a 10 rod, which facilitates mental computation of multidigit numbers.
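The place-value decomposition a teacher models with physical blocks can also be written out symbolically. The following is a minimal, hypothetical sketch (our own function name, not drawn from any curriculum) that decomposes a whole number into the hundreds, tens, and ones a set of base-10 blocks would show.

# Minimal sketch: decompose a whole number (0-999) into base-10 blocks
# (hundreds squares, tens rods, ones cubes), e.g., 47 = four rods and seven cubes.
def base_ten_blocks(n: int) -> dict:
    if not 0 <= n <= 999:
        raise ValueError("This sketch handles 0-999.")
    hundreds, rest = divmod(n, 100)
    tens, ones = divmod(rest, 10)
    return {"hundreds": hundreds, "tens": tens, "ones": ones}

if __name__ == "__main__":
    print(base_ten_blocks(47))   # {'hundreds': 0, 'tens': 4, 'ones': 7}
    print(base_ten_blocks(347))  # {'hundreds': 3, 'tens': 4, 'ones': 7}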
FINGER COUNTING

There is one type of mathematics model that does not require any purchase or construction and that students will never misplace: their fingers. As we discuss in more detail later in this chapter, finger counting can be used for teaching counting strategies for solving number combinations and helping students commit them to memory, and it also serves as a reliable backup strategy. Finger counting is used in several contemporary, evidence-based intervention programs described in this chapter.
Intervention Research in ENCs

Several examples of interventions for ENCs have been studied over the years. Dyson et al. (2013) implemented an intervention targeting number sense and whole-number concepts with kindergarten students. The intervention involved (1) the base-10 numeration system to teach the identification of numerals 11 to 99 and their underlying concept; (2) number recognition; (3) number sequencing to 100 using number cards placed in rows and columns like a 100’s chart; (4) subitizing up to 4; (5) finger counting and making 5 with fingers quickly; (6) placing chips with a number list (like a number line) to reinforce one-to-one correspondence and cardinality; (7) number plus or minus 1 (i.e., that “plus 1” always means the number after, and “minus 1” always means the number before); (8) number comparisons for quantity discrimination; (9) rows of dots separated by a line to demonstrate the commutative property of addition (e.g., a row of four dots with a line after the third dot, to demonstrate that 3 + 1 = 4, and by rotating the card, 1 + 3 = 4), which supports both addition and subtraction (see the “counting up” strategy for subtraction number combinations below); (10) using counting to solve problems; and (11) a numbers board game that served as a reward and promoted skills in counting one or two numbers beyond a given number. Dyson et al. (2013) also provided descriptions of the activities and examples of the visual representations they used. The intervention was delivered in groups of four students, 3 days per week at 30 minutes per session, for 8 weeks. Compared to a business-as-usual control group, students who received the number sense intervention outperformed control group students on
measures of number sense, number recognition, number knowledge, story problems, and number combinations at both posttest and delayed posttest (showing the maintenance of effects). Students in the intervention group also outperformed students in the control group on standardized measures of math achievement at posttest. ROOTS (Clarke, Doabler, Turtura, et al., 2020) is an intervention program designed for kindergarten that targets whole-number concepts (i.e., early numerical competencies). It can be used as a stand-alone intervention. Designed to be implemented by instructional assistants with small groups of four to five students, ROOTS includes 50 lessons targeting three areas of whole-number understanding: counting and cardinality, addition and subtraction number operations, and place value (i.e., base-10 understanding and decomposing numbers 11–19). An important aspect of ROOTS is that the intervention does not require or assume that interventionists have any background or training in mathematics; the materials include lesson scripts and resources that permit instructional assistants or other interventionists to implement with high fidelity. Clarke, Doabler, Smolkowski, Baker, et al. (2016) randomly assigned kindergarten students at risk for math difficulty to ROOTS intervention groups or school-designed intervention (control). Statistically significant and educationally meaningful effects favored students in the ROOTS groups on 4 out of 6 dependent measures of early numeracy, with effect sizes ranging from 0.28 to 0.75. There was also evidence that students in the ROOTS intervention experienced reduced mathematics achievement gaps compared to their typically achieving peers. Similar results were observed by Clarke, Doabler, Smolkowski, Kurtz Nelson, et al. (2016) in another randomized control trial with at-risk kindergarten students; students in the ROOTS condition outperformed students in the control condition on dependent measures of early mathematics skills and experienced narrowed achievement gaps compared to typically achieving peers. ROOTS is available from the Center on Teaching and Learning at the University of Oregon (https://dibels.uoregon.edu/market). Another intervention targeting early numerical competencies is the early numeracy intervention (ENI; Bryant et al., 2011). The authors described the intervention as designed to “promote conceptual, strategic, and procedural knowledge development for number and operation concepts and skills” (Bryant et al., p. 13). Target skills include counting and number recognition, number comparison (greater/less), part/whole relations, base-10 and place value, addition and subtraction counting and decomposition strategies, and the associative and commutative properties of addition. With first graders with low early mathematics achievement, students randomly assigned to ENI significantly outperformed students in a typical practice comparison condition on 5 of 11 mathematics outcome measures. Of note were large effect sizes of 0.44 and 0.55 on measures of computation. Fusion (Clarke et al., 2022) is a small-group early mathematics intervention designed for first-grade students. Across the first 30 lessons, Fusion seeks to develop whole-number concepts and skills, targeting base-10 and place value, counting and number recognition to 100, and fluency in addition and subtraction number combinations (math facts).
The latter half of the program targets two-digit addition and subtraction, mentally adding or subtracting 10 from a two-digit number, word-problem concepts, and problem-solving strategies. The program includes 60 lessons designed for 30 minutes each with groups of two to five students. With 60 first graders with mathematics difficulties, Clarke et al. (2022) randomly assigned students to one of three conditions: Fusion intervention (30 minutes per session, 5 days per week for 12 weeks) in groups of two students, Fusion in groups of five students,
or school-designed business-as-usual intervention. Instructional assistants employed by the district or staff hired by the project served as interventionists in the Fusion groups. Results indicated that students in the Fusion groups demonstrated significantly greater growth than control students on a measure of early numeracy and a researcher-designed measure of whole-number understanding and procedural knowledge. Group differences were not observed on a standardized measure of early mathematics or a CBM measure of grade-level math skills. Interestingly, although kindergarten studies of ROOTS (to which Fusion is related) indicated that group size (either two or five students per group) did not matter in terms of the math growth students achieved, students in groups of two demonstrated somewhat greater gains than students in groups of five with Fusion.
Interventions for Basic Calculation and Number Combinations Fluency (Math Facts)

As discussed in Chapter 2, fluency with number combinations (i.e., so-called “math facts”; single-digit operations in addition, subtraction, multiplication, and division) is broadly implicated across nearly all domains of mathematics achievement. Automaticity in recalling number combinations is predictive of mathematics success, and problems in this area are a very common characteristic of students with mathematics difficulties and disabilities (Cumming & Elkins, 1999; L. S. Fuchs, D. Fuchs, et al., 2008; Geary et al., 2007). Intervention in number combinations should be implemented when a student’s assessment reveals inaccurate or dysfluent skills. This is true for addition and subtraction number combinations for any student after first grade, and multiplication and division combinations for students after third grade. Even students in middle or secondary grades with mathematics difficulties may have low fluency with number combinations, and improving this skill can bolster achievement in other mathematics skill areas. Number combinations can be targeted as a stand-alone intervention or included as a component in an intervention targeting multiple skills (e.g., included in an intervention also targeting procedural computation or word-problem solving). As with the majority of skills, the most important aspect of developing automaticity with number combinations is direct practice. This is evident across numerous studies and meta-analyses, but how number combinations are taught and practiced with struggling learners is important to consider.
Strategic Counting to Support Acquisition of Number Combinations

Learning number combinations has close ties to ENCs. Knowledge and skills in number recognition, counting, and quantity comparison play significant roles in learning to solve number combinations and commit them to memory (Powell & L. S. Fuchs, 2012). Therefore, before targeting number combinations, students should have prerequisite counting skills, including stable order, one-to-one correspondence, and cardinality, and should have learned the meaning of the addition, subtraction, and equal sign symbols. As will be seen shortly, the most efficient counting strategies for solving number combinations involve immediately recognizing the larger of two numbers; therefore, improving students’ number comparison fluency (i.e., quantity discrimination; identifying which of two numbers is larger) may be needed to enhance students’ success in solving number combinations. Students are more successful with addition problems compared to subtraction, so instruction should target addition first if a student is at the very early stages
of mathematics. Otherwise, for intervention with students in grades 1 and up who are referred for mathematics difficulties and have been exposed to addition and subtraction, the two operations can be targeted together for number combinations within 20. We focus on this scenario, as it will be applicable for most students referred for mathematics difficulties and is also consistent with most studies of mathematics interventions. A deliberate, systematic sequence of introduction aids a student’s acquisition and fluency of number combinations. The assessment may have identified certain number combinations or sets in which the student is less accurate or fluent, in which case intervention can focus on those. Otherwise, if students have pervasive difficulties across most number combinations within 20, it may make sense to start from the beginning. Instruction can move quickly across number combination sets in which students are more successful. Sequences that have been used in interventions for students with mathematics difficulties (L. S. Fuchs, D. Fuchs, et al., 2008; L. S. Fuchs, Powell, et al., 2009; Powell & L. S. Fuchs, 2012) have targeted +1, –1, +0, and –0 combinations first (e.g., 3 + 1; 5 – 1; 0 + 6), which will most likely be very easily acquired by the majority of students. Manipulatives or a number line can be used to demonstrate +1 and –1 if needed. Next, instruction targets doubles from 0 to 6 because (1) doubles are easily acquired, and (2) they make learning additional facts easier (Powell & L. S. Fuchs, 2012). Doubles from 0 to 6 include 0 + 0, 1 + 1, 2 + 2, 3 + 3, 4 + 4, 5 + 5, 6 + 6, 0 – 0, 2 – 1, 4 – 2, 6 – 3, 8 – 4, 10 – 5, 12 – 6. L. S. Fuchs, Powell, et al. (2009) taught doubles using manipulatives and doubles chants. After doubles, +2 and –2 number combinations are targeted, again using manipulatives or a number line. As in all instruction and practice targeting number combinations, the aim is for students to become automatic in recalling the answers, meaning they should eventually be able to look at the problem and immediately recall the correct answer. The next sets of number combinations take more time to memorize than the earlier sets. Evidence-based programs teach students that there are two ways to solve number combinations: from memory (i.e., “just know it”), or if they do not know it, by using a counting strategy (L. S. Fuchs, D. Fuchs, et al., 2008; L. S. Fuchs, Powell, et al., 2009). Teaching a set of counting strategies is a key step in building proficiency in solving number combinations. Explicit instruction and frequent practice strategies are structured with the ultimate goal of committing number combinations to memory. However, counting strategies are an important foundation for students with mathematics difficulties because they provide a reliable backup strategy to correctly solve a number combination they have not yet committed to memory. As shown in Figure 6.7, there are several types of counting strategies that can be employed, and they can be taught using fingers or with a number line. Using both is preferable; teaching finger counting is important because a number line will not always be available, and including a number line with instruction provides additional conceptual understanding by illustrating the operation and how the numbers relate to each other. Counting all is the most straightforward counting strategy and thus the one with which students at the earliest stages of mathematics are likely to be most successful.
With counting all, students put up fingers for the first addend, then put up fingers for the second addend, and then count all to determine the solution. Clearly, the counting principles of stable order, one-to-one correspondence, and cardinality are critical. Although it is the most straightforward approach, Powell and L. S. Fuchs (2012) note that counting all is not very efficient and is prone to error because it involves the maximum amount of counting. Counting up involves counting up from one of the numbers and therefore involves less counting than counting all. To use counting up, students must be able to begin counting from a given number. Because many students have not yet learned the commutative property
[Figure 6.7 illustrates four counting strategies with fingers and number lines: counting all (3 + 5), counting up (3 + 5), counting on from larger (3 + 5), and counting up for subtraction (8 – 5).]
FIGURE 6.7. Counting strategies for supporting acquisition of addition and subtraction number combinations (based on recommendations by Powell & L. S. Fuchs, 2012).
of addition, when presented with an addition problem in the form of a number sentence (e.g., 3 + 5 = ), they often start with the first number they see (3) and count out the second number (5) to reach the answer. Although more efficient than counting all, there is still a more efficient strategy. When students can readily identify the larger of two numbers, they can be taught a more efficient counting strategy, referred to as counting on from larger (sometimes just referred to as counting on). As shown in Figure 6.7, students look at an addition number combination, identify the larger number, and count up the smaller number. So, with 3 + 5 = , the student would start with 5 fingers (or from the 5 on a number line), count up 3, and reach the answer of 8. This strategy involves less counting and is thus faster and less error-prone than the others (Powell & L. S. Fuchs, 2012). This is also a way to teach the commutative property of addition: Numbers can be added in any order and the result will be the same. Thus, counting on from larger is the best counting strategy to teach for solving addition number combinations. Similar counting strategies are used for solving subtraction number combinations. Students may have a tendency to count down for a subtraction problem because teachers often teach counting backward for subtraction. However, as Powell and L. S. Fuchs (2012) point out, counting backward is more challenging for students and thus presents
more opportunities for errors. It is ultimately more beneficial to teach students to solve subtraction problems by counting up: Students start by saying the subtrahend (i.e., the smaller number in a conventional subtraction sentence) with a closed fist and count up until they reach the minuend (i.e., the first number or the larger number in a conventional subtraction sentence). The answer is the number of fingers. The same can be done on a number line; students start at the subtrahend, count up to the minuend, and the answer is the number of hops. This strategy will require explicit instruction. L. S. Fuchs and colleagues (L. S. Fuchs, Powell, et al., 2009, 2010) have taught the strategy by teaching students to recognize the minus number (i.e., “the number directly after or below the minus sign”) and “the number you start with” (i.e., the first number in the problem). Students are then taught to start with the minus number and count up to the number you start with. The number of fingers, or hops on a number line, is the answer. The value of counting up for solving subtraction problems has been demonstrated in several studies of students with mathematics difficulties (L. S. Fuchs, Powell, et al., 2009, 2010). Because it involves counting forward rather than backward, it is less error-prone. It also demonstrates that addition can be used to solve subtraction problems, and thus helps reinforce the interrelation between addition and subtraction. The use of number lines within counting strategies instruction affords numerous benefits for reinforcing not only computation understanding, but also conceptual knowledge of whole numbers overall. However, students should not be discouraged from counting on their fingers when using a strategic counting strategy. Their fingers are math manipulatives that they will always have with them, unlike a number line. As we discuss below, the practice activities provided to students help them transition to memory-based retrieval of facts. Once students have learned strategic counting strategies—counting on from larger for addition and counting up for subtraction—the remaining number combination sets are taught (L. S. Fuchs et al., 2008; L. S. Fuchs, Powell, et al., 2009, 2010). This begins with the 5 set, in which one of the numbers or the answer includes 5: 0 + 5, 1 + 4, 2 + 3, 4 + 1, 5 + 0, 5 – 1, 5 – 2, 5 – 3, 5 – 4, 5 – 5. Several will have already been learned in the previous steps and thus can be included in practice with the ones that are new. The remaining sets (6 to 18) are then targeted, with doubles 7 to 10 targeted between the 12 and 13 sets (L. S. Fuchs, Powell, et al., 2009). Fingers and number lines should still be used to support students’ counting strategies in learning these sets. In this way, all 200 number combinations within 20 are targeted systematically. As instruction progresses, new sets are learned faster because students will use what they know about other facts.
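For readers who find it helpful to see the two recommended strategies written out step by step, the following is a minimal, hypothetical sketch; the function names are our own and the printed traces simply parallel what a student does with fingers or hops on a number line.

# Minimal sketch of the two counting strategies described above, for facts within 20.
def count_on_from_larger(a: int, b: int) -> int:
    """Addition: start at the larger addend and count up the smaller one."""
    start, count = max(a, b), min(a, b)
    total = start
    for _ in range(count):
        total += 1          # one finger raised, or one hop on the number line
    return total

def count_up_subtraction(minuend: int, subtrahend: int) -> int:
    """Subtraction (minuend >= subtrahend >= 0): start at the subtrahend and
    count up to the minuend; the number of counts (fingers or hops) is the answer."""
    counts = 0
    current = subtrahend
    while current < minuend:
        current += 1
        counts += 1
    return counts

if __name__ == "__main__":
    print(count_on_from_larger(3, 5))   # 8  (start at 5, count up 3)
    print(count_up_subtraction(8, 5))   # 3  (start at 5, count up to 8)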
TRANSITIONING FROM COUNTING STRATEGIES TO MEMORY RECALL

Counting strategies provide students with a support for accurately solving number combinations even when those combinations have not yet been committed to memory. Over time, through repeated practice opportunities using number combination worksheets or flashcard drills, students begin to rely more on memory for solving number combinations and recognize that it is much faster than counting out each time. Students should be encouraged to commit number combinations to memory, which L. S. Fuchs and colleagues have referred to as “just know it” or “pull it out of your head” with students. Instructors can also encourage memorization of number combinations through the use of timed practice activities in which students are encouraged to beat their previous score (L. S. Fuchs, Geary, et al., 2013). However, students should not be discouraged from counting when they need to. This is one of the most important aspects of teaching counting strategies for solving number
combinations: It provides students with a strategy to determine the correct answer when memory fails.

INCREMENTAL REHEARSAL FOR NUMBER COMBINATIONS
The incremental rehearsal flashcard procedure reviewed earlier as a strategy for word reading has also been effectively applied to mathematics interventions, particularly number combinations. The procedures for incremental rehearsal with number combinations are exactly the same as when words are used: An unknown fact is introduced, and systematically presented among previously learned facts such that the number of known facts between presentations of the new fact increases incrementally. Students should be encouraged to commit number combinations to memory and recall them quickly. But when memory fails, or when they make an error, students should be prompted to use their counting strategies (counting on from larger for addition and counting up for subtraction). Several studies have demonstrated the effectiveness of incremental rehearsal for improving number combinations fluency (Burns, 2005; Codding et al., 2010; McVancel et al., 2018), and that incremental rehearsal is often a more efficient technique (i.e., greater gains in less time) than traditional flashcard drill methods in both single-case and group design studies (see Burns et al., 2012, for a meta-analysis). The repeated nature of the presentations makes it very amenable to developing automaticity with recall.
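To make the interleaving pattern concrete, the sketch below is one minimal, hypothetical way to generate the card order for a single unknown fact rehearsed among a set of known facts. The specific facts, the number of knowns, and the function name are illustrative choices on our part, not a reproduction of any published protocol; the key feature is that the spacing between presentations of the new fact grows incrementally.

# Minimal sketch: build an incremental rehearsal sequence for one unknown fact.
def incremental_rehearsal(unknown: str, knowns: list) -> list:
    sequence = []
    for i in range(1, len(knowns) + 1):
        sequence.append(unknown)        # present the new fact
        sequence.extend(knowns[:i])     # followed by a growing set of known facts
    return sequence

if __name__ == "__main__":
    cards = incremental_rehearsal("6 + 7", ["2 + 2", "5 + 1", "3 + 0"])
    print(cards)
    # ['6 + 7', '2 + 2',
    #  '6 + 7', '2 + 2', '5 + 1',
    #  '6 + 7', '2 + 2', '5 + 1', '3 + 0']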
WORKSHEET METHODS (TIMED PRACTICE) AND COVER-COPY-COMPARE

Timed practice with worksheets is a simple but effective way to practice number combinations fluency, and practice activities like this are part of most intervention programs. Students complete a series of number combinations on a page, while the tutor times their work and helps check for accuracy. Answer keys can be provided for students to score their own worksheet. Several web-based resources are available for developing mathematics worksheets and allow for tailoring the problem types, such as Intervention Central (interventioncentral.org) and Super Kids (www.superkids.com/aweb/tools/math). CCC, described earlier in this chapter for spelling, has also been found to be effective for improving number combinations fluency (see Joseph et al., 2012, for a review). Students look at the completed problem, cover it, write the problem and answer from memory, and compare their answer with the model. It should be noted that when students know a reliable counting strategy, they do not necessarily need to have access to problem answers as in CCC. However, CCC might be particularly appropriate as a practice strategy while students are learning counting strategies so that they can self-check their answers. Studies indicate that students improve in number combinations fluency with the technique, and when used in conjunction with other methods, CCC may be a way to help students move from counting strategies toward memorization. Before any type of drill and practice with number combinations is implemented, words of caution are worth reiterating: Students should have a reliable counting strategy that allows them to determine the correct answer when they are unable to recall the answer from memory. Students should be prompted to use their counting strategy when memory fails or when they make an error, using a number line or their fingers. Feedback from the tutor should be provided if they are still unsuccessful with using the counting strategy, and reteaching may be necessary. Reinforcing students’ use of counting strategies promotes independence—the tutor will not be there for situations in which the
student must solve computation problems in their classroom or testing situations. Counting strategies provide them with a reliable fallback they can use independently. Number combinations are included in several evidence-based packaged mathematics intervention programs, including Number Rockets (L. S. Fuchs, Powell, & D. Fuchs, 2017; available from the Fuchs Research Group at vkcsites.org), SpringMath (sourcewelltech.org), ROOTS and Fusion (https://dibels.uoregon.edu/market), PALS Math (vkcsites.org), and others. The National Center on Intensive Intervention (intensiveintervention.org) maintains “academic intervention tools charts” that list additional program options for basic calculation strategies and their evidence base.
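As a local alternative to the web-based worksheet generators mentioned above, a practitioner comfortable with a little scripting could produce randomized timed-practice sheets of addition and subtraction combinations within 20. The sketch below is hypothetical; the problem mix, sheet length, and function name are arbitrary choices of ours rather than a published procedure.

# Minimal sketch: generate a randomized number-combinations worksheet with an answer key.
import random

def make_worksheet(n_problems=20, seed=None):
    rng = random.Random(seed)
    problems, answers = [], []
    for _ in range(n_problems):
        a, b = rng.randint(0, 10), rng.randint(0, 10)
        if rng.random() < 0.5:                      # addition item
            problems.append(f"{a} + {b} = ____")
            answers.append(a + b)
        else:                                       # subtraction item (no negatives)
            big, small = max(a, b), min(a, b)
            problems.append(f"{big} - {small} = ____")
            answers.append(big - small)
    return problems, answers

if __name__ == "__main__":
    items, key = make_worksheet(5, seed=1)
    for item, ans in zip(items, key):
        print(f"{item:<15} (key: {ans})")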
Interventions for Procedural Computation Skills

Procedural computation involves the use of procedures and algorithms to compute with multidigit numbers in addition, subtraction, multiplication, and division (e.g., 379 + 256). Completing these types of problems often requires several steps and may involve regrouping (previously known as carrying and borrowing). Assessment that reveals difficulties in procedural computation makes this area an important target for intervention because these skills are necessary for success in other areas of mathematics. As discussed in Chapter 2, success with procedural computation is facilitated by fluency with number combinations and understanding place value. There will be occasions in which a procedural computation intervention should include components aimed at improving number combinations fluency and place-value knowledge, and a need for these components will be indicated by good assessment data. However, students will also need to learn procedures and algorithms for solving multidigit computation problems, and this requires explicit instruction and practice strategies. Procedural computation represents a broad range of skills (see Appendix 4C in Chapter 4) that can be organized in a general hierarchy from less to more complex. Proficiency with lower-order computation skills facilitates success with higher-order computation skills. This is one skill domain in mathematics in which survey-level assessment is most applicable—the assessments conducted at earlier steps should indicate what skills in the hierarchy the student has mastered (i.e., independent; near 100% accuracy and acceptable fluency), and what skills are well beyond their understanding (frustrational; very few or no problems correct). The skills in the middle are usually the best targets for instruction. Intervention for procedural computation is best conceptualized as explicit instruction (with modeling) of procedures and algorithms, and extensive opportunities to practice with feedback. There are several resources, options, and considerations for developing and implementing procedural computation interventions. The first is the mathematics curriculum materials being used with the student for instruction. The purpose of accessing these materials is to identify the procedures and algorithms that are taught in core instruction for solving multidigit computation problems. Procedures can vary across mathematics curricula, and there should be consideration of alignment between the strategies targeted in intervention and those in the student’s curriculum. Students also may find it easier to improve in their use of algorithms taught in class given some initial familiarity. Additionally, improvement in their use of the algorithms and procedures that are taught in core instruction may help the students benefit more from such instruction. It may not be necessary or advantageous that they be completely aligned, especially if a simpler approach or better algorithm can be taught than what is used in instruction. However, the procedure taught in intervention should not cause problems or confusion
for a student in their regular mathematics class. Discussing the intervention options with the student’s mathematics teacher can determine if any conflicts will arise with the student’s use of an algorithm that is not used in the core program. The second resource to consider for targeting procedural computation is packaged intervention programs. SpringMath (sourcewelltech.org) offers individualized intervention solutions across the full range of procedural calculation skills. Small-group programs, like Math Wise (L. S. Fuchs, Powell, & D. Fuchs, 2017), target procedural computation and may be good options if multiple students are in need of a similar intervention. Additional programs can be identified in the Academic Tools Charts that are maintained by the National Center on Intensive Intervention (intensiveintervention.org) and What Works Clearinghouse (ies.ed.gov/ncee/wwc). Third, several publications provide guidance on teaching procedures and algorithms for multidigit computation. For example, Direct Instruction Mathematics (Stein et al., 2018) provides extensive resources for teaching procedural computation according to the principles of explicit instruction. Other texts such as Effective Math Interventions: A Guide to Improving Whole Number Knowledge (Codding et al., 2017) and Teaching Elementary Mathematics to Struggling Learners (Witzel & Little, 2016) offer resources for implementing procedural computation interventions. Similar types of guidance are provided in a series of articles by Flores and colleagues on subtraction and multiplication with regrouping for students with mathematics difficulties (Flores, 2009; Flores, Hinton, & Schwenk, 2014; Flores, Hinton, & Strozier, 2014). Readers are reminded that recommended procedures and algorithms will vary across perspectives, even among scholars who work with students with mathematics difficulties. Mathematics research has not identified what specific algorithms are best, and the answer is probably “it depends” for each individual student, but one area of agreement among scholars is that algorithms and procedures should be simple for students to complete and easily learned through explicit instruction. Fourth, once the targeted procedures are identified, and explicitly modeled and taught, CCC has been shown to be an effective strategy for improving procedural computation skills (Codding et al., 2009; Joseph et al., 2012; Stocker & Kubina, 2017). The implementation procedure for CCC with procedural computation is very similar to those described earlier for spelling and number combinations; a worksheet is provided with problems and their answers printed on the left side of the page, and the problems without their solutions printed on the right. Some options have the student recopy the problem; however, this can be challenging with long multidigit problems. Students (1) look at the problem and answer, (2) cover the problem and answer, (3) solve the problem, (4) uncover the problem with the answer and compare their solution. If correct, students move on to the next problem and repeat the steps. If incorrect, students repeat the steps with the same problem. Practice should emphasize improving accuracy first, followed by improving fluency (but not at the expense of accuracy). CCC is a flexible strategy that can be applied across mathematics, not just computation. In fact, CCC is used extensively as an intervention across mathematics skill areas in the SpringMath system. 
Intervention Central (interventioncentral.org) offers a generator to create CCC worksheets. Timed “beat your score” practice can be used for building fluency, with and without CCC. Fifth, cueing strategies, prompt cards, mnemonics, self-instruction, and other techniques discussed in Chapter 5 can be integrated with instruction and practice. The multiple steps involved in multidigit operations often require scaffolds for students to recall and apply algorithms for correctly completing problems (e.g., knowing when and how to regroup).
Sixth, continue to be mindful of the importance of number combinations fluency for success with procedural computations. One multidigit computation problem can contain numerous number combination operations, in some cases multiple operations within the same problem. An error in any one number combination will often make the entire problem incorrect, and a lack of automaticity with number combinations will require more time and effort and will consume cognitive and attentional resources that could otherwise be allocated to remembering algorithms and understanding underlying concepts. Therefore, if needed, intervention for establishing fluency with the number combinations within 20 should occur before or in conjunction with intervention for procedural computation. In summary, interventions for procedural computation may involve using more than one resource for instruction and practice. Key considerations, however, include (1) considering the student’s survey-level assessment results in relation to a hierarchy of problem types (see Appendix 4C in Chapter 4), and targeting problems within the student’s instructional level; (2) identifying a straightforward algorithm and teaching it using explicit instruction; and (3) providing extensive opportunities to practice with scaffolds as needed (that are faded over time) and immediate corrective feedback.
Interventions for Fractions and Other Rational Numbers

In contrast to whole numbers (e.g., 1, 5, 37), rational numbers refer to fractions, decimals, and percents. Fraction understanding, in particular, is a key foundational skill for algebra and success with ratio and proportion (Geary et al., 2012; NMAP, 2008). Understanding and working with rational numbers, especially fractions, are challenging for most students (Siegler & Lortie-Forgues, 2017) and a particular source of difficulty for students who struggle in mathematics in elementary grades and beyond (NMAP, 2008). As in many areas of mathematics instruction and intervention, there is now greater attention to building students’ conceptual understanding of rational numbers, in addition to teaching procedural knowledge (L. S. Fuchs, Malone, et al., 2017). This instruction seeks to develop students’ understanding of rational numbers in terms of what they represent (e.g., that ½, .5, and 50% all mean the same thing); understanding and comparing fractions, not just from a part–whole perspective, but from a measurement perspective (see below); and learning fraction operations (addition, subtraction, multiplication, division) in terms of what is conceptually represented by those operations, in addition to learning the procedures. There is also increased recognition of the importance of knowledge and skills with whole-number operations for success with fractions. Namkung et al. (2018) found that, with fourth graders, students with difficulties with whole-number operations (i.e., number combinations; multidigit addition, subtraction, multiplication, and division operations with whole numbers) were 5 times more likely to have difficulties with understanding fractions than students without difficulties in whole-number operations. Students with severe difficulties in whole-number operations were 32 times more likely to experience problems with fraction understanding. This is another situation in which the root of the student’s mathematics difficulties must be considered, and another example of how a lack of accuracy and automaticity with basic operations and number combinations poses problems for higher-order mathematics success. In the case of fractions, multiplication and division number combinations increase in importance. Therefore, considering the results of the initial assessment and any observations from interventions previously attempted, intervention to improve number combinations and whole-number operations may be needed before or during fraction intervention.
Although they are less common than intervention programs in basic mathematics skills, programs focused on fractions’ concepts and operations are available. SpringMath includes significant intervention content for fractions and rational numbers skills, spanning basic concepts to complex operations with fractions and other rational numbers. As with other skills in SpringMath, each includes explicit instruction routines and practice procedures. Fraction Face-Off! (FFO; L. S. Fuchs, Schumacher, et al., 2015) is a fraction intervention designed for fourth grade. Using a sports theme, students work as a team in some activities and independently in others, earning “fraction money” that can be used to periodically buy prizes. Lessons are 30–35 minutes long, and FFO was designed to be taught 3 times per week for 12 weeks. A key aspect of FFO is the inclusion of two perspectives of fraction conceptual understanding: the part–whole perspective, and the magnitude interpretation (sometimes referred to as measurement perspective). Although both perspectives are included, FFO focuses more on the magnitude interpretation of fractions. As discussed in Chapter 2, a part–whole perspective explains fractions as a part or subset of a whole unit or set, and has traditionally been dominant in American mathematics instruction (L. S. Fuchs, Schumacher, et al., 2013). Fractions do indeed represent a part of a whole, and part–whole models tend to make intuitive sense. However, exclusively emphasizing the part–whole perspective may be inadequate for establishing conceptual understanding and tends to encourage students to consider the numerator and denominator of fractions as two separate whole numbers, rather than recognize that a fraction is a number (L. S. Fuchs, Schumacher, et al., 2017; NMAP, 2008). Instead, it is recommended that instruction develop an understanding of fractions in terms of their magnitude, which teaches the placement of fractions on a number line in relation to other fractions and whole numbers (Gersten et al., 2017; Siegler et al., 2011). A magnitude emphasis reinforces the understanding that a fraction is a number (not two whole numbers), and that its position on the number line (i.e., its magnitude relative to other fractions and whole numbers) is determined by its numerator and denominator. This emphasis on magnitude comparison extends to learning decimals and percent, where instruction also emphasizes their placement on the number line and understanding that the same rational number can be represented in multiple ways. The effects of the initial version of the FFO program were investigated by L. S. Fuchs, Schumacher, and colleagues (2013). Fourth-grade students, identified as low-achieving in mathematics, were randomly assigned to either the FFO intervention or control condition instruction that was more focused on part–whole understanding and fraction procedures. Following 12 weeks of intervention, students in the FFO condition outperformed students in the control condition with statistically significant and large effect sizes for both conceptual understanding of fractions and operations, even on test items focused on part–whole understanding. Furthermore, students in the FFO condition significantly closed the achievement gap with typically achieving peers in mathematics, whereas gaps widened for students in the control condition. 
Mediation analyses (i.e., determining what intervention aspects were responsible for the effect) found that improvement in magnitude/measurement understanding of fractions was at least partially responsible for improvements in fraction skill outcomes. Interestingly, it was also found that improvement in measurement understanding led to improvements in part–whole understanding, whereas improvements in part–whole understanding did not lead to improvements in magnitude/measurement understanding. Overall, the results of the study demonstrated
the potential for the FFO intervention to improve fraction knowledge and understanding for fourth graders with math difficulties. It also provided evidence that development of the magnitude/measurement perspective of fraction understanding is instrumental in building proficiency with fractions. The benefits of FFO have been demonstrated in subsequent studies (L. S. Fuchs, Malone, et al., 2016, 2017). L. S. Fuchs, Malone, et al. (2016) found that teaching students to provide high-quality explanations of their comparison of fraction magnitudes or their solutions to fraction word problems resulted in stronger fraction understanding, and especially benefited students with low working memory. Malone et al. (2019) found that improvement in fraction knowledge through FFO transferred to understanding the magnitude of decimals, and that instruction specific to decimals did not improve decimal understanding over and above fraction intervention alone. FFO is available from the Fuchs Research Group (https://frg.vkcsites.org).
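As a simple illustration of the magnitude perspective (our own example, not an FFO lesson), consider comparing 3/8 and 2/5 by locating them on the number line rather than treating numerators and denominators as separate whole numbers:

$$\tfrac{3}{8} = 0.375 \quad\text{and}\quad \tfrac{2}{5} = 0.4, \qquad\text{so}\qquad 0 < \tfrac{3}{8} < \tfrac{2}{5} < \tfrac{1}{2} < 1.$$

A whole-number reading of the parts (3 > 2 and 8 > 5) invites the opposite, incorrect conclusion, which is exactly the misconception a magnitude emphasis works against; the same placement reasoning extends to seeing 2/5, 0.4, and 40% as the same point on the line.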
Interventions for Word‑Problem Solving Word problems are challenging for many students, and they are a common source of difficulty for students who struggle in mathematics. In addition to having to determine a correct solution strategy by reading and comprehending text, which is sometimes convoluted or even intentionally misleading, students must accurately complete any addition, subtraction, multiplication, division operations (which may involve rational numbers) to solve them correctly. Significant advances in interventions to improve word-problem solving (WPS) for students with mathematics difficulties have emerged across the past 20 years. As discussed in Chapter 2, intervention approaches have evolved away from problematic “keyword” strategies that teach students to look for words in the text to apply a problem-solving strategy. In contrast, contemporary evidence-based word-problem-solving approaches involve schema-based instruction (Jitendra & Hoff, 1996; Jitendra & Star, 2011; Powell & L. S. Fuchs, 2018). Schema refers to an organizational structure or framework used to organize and represent information or procedures. Schema-based instruction in word-problem solving teaches students that word problems can be categorized by type, there is only a small number of types, and each type can be solved using a solution framework (i.e., schema) in which known quantities are entered to determine an unknown quantity. Powell and L. S. Fuchs (2018) provided additional description, models, and instructional routines for implementing schema-based word-problem-solving instruction. Students will commonly encounter three types of word problems: total, in which students must combine two or more quantities to find a new amount; difference, which require comparing two amounts to determine another amount; and change, in which students must increase or decrease a starting amount to identify a new amount. In schema-based word-problem-solving interventions, students learn that the known and unknown amounts can differ across problems, but that the problem-solving process can be dramatically simplified by learning a small set of standard computational frameworks for solving each type. Students first learn to identify the type of problem (i.e., total, change, or difference), and then learn to use a diagram that corresponds to the problem type to enter known quantities. The framework provides a model of a simple equation to identify the unknown quantity (reflecting the close connection between word-problem solving and algebra), and indicates the mathematical operations needed
to solve the equation and identify the unknown quantity (Jitendra et al., 2009; Powell & L. S. Fuchs, 2018). Schema-based instruction has been studied extensively by Jitendra and colleagues in elementary grades with students struggling in mathematics, and has been observed to be more effective in improving word-problem solving than instruction that targets word problems more generally (without schema-based instruction; Jitendra, Griffin, Deatline-Buchman, et al., 2007; Griffin & Jitendra, 2009; Jitendra, 2002; Jitendra, Griffin, McGoey, et al., 1998; Jitendra, Star, et al., 2009; Jitendra, Griffin, Haria, et al., 2007; Jitendra & Hoff, 1996; Jitendra, Hoff, & Beck, 1997, 1999; Xin et al., 2002). Figure 6.8 provides an example of a schema diagram from Jitendra and Hoff (1996).
[Figure 6.8 presents three schema diagrams: a change problem (beginning amount, change amount, ending amount), a group problem (smaller group amount, larger group amount), and a compare problem (compared amount, referent amount, and a difference amount marked more/less).]
FIGURE 6.8. Schema for change, group, and compare problems. From Jitendra and Griffin (2001). Reprinted with permission from the authors.
Research
reviews of this work and that of other research teams indicate significant positive effects for schema instruction in word-problem solving (Lein et al., 2020; Peltier et al., 2018). Lynn Fuchs and colleagues led other work in schema-based instruction for word-problem solving for students with mathematics difficulties. They developed Pirate Math, an intervention program for word-problem solving that also targets foundational skills in arithmetic, procedural calculations, and pre-algebraic knowledge (L. S. Fuchs, Powell, et al., 2009). Pirate Math teaches students to (1) identify the type of word problem, (2) represent it with an equation that includes X to represent the unknown quantity, and (3) “find X” like a pirate’s treasure map. Pirate Math intervention programs are available from the Fuchs Research Group (frg.vkcsites.org/what-are-interventions/math_intervention_manuals). Pirate Math is built on principles of explicit and systematic instruction. It seeks to minimize the learning challenge and students’ anxiety by teaching them that most word problems can be categorized into three types, and each type has its own solution equation. Introductory lessons of Pirate Math target foundational mathematics skills that include counting strategies for solving computation problems and number combinations, two-digit addition and subtraction algorithms, and solving simple equations in which X varies its position (e.g., 4 + 2 = X; 5 – X = 3; X + 2 = 4). All of these skills are important for solving word problems. The remaining lessons of Pirate Math are organized in three units that each target a word problem type: Total, Difference, and Change. For each problem type, students learn a “meta-equation” that represents the structure of the problem type and the equation for solving it. Each unit targets conceptual understanding of the problem types so that students can identify them when the language and vocabulary are different. Instruction also teaches students that not all numerical information present in a problem will be necessary for solving it. Several large-scale randomized control trials have demonstrated the benefits of Pirate Math for improving the word-problem-solving skills of elementary students with mathematics difficulties (e.g., L. S. Fuchs, Powell, et al., 2009, 2010, 2014). These studies have also found that students improve in calculation skills in addition to word-problem solving (e.g., L. S. Fuchs, Powell, et al., 2009). Pirate Math has been extended by Sarah Powell and colleagues. Powell and L. S. Fuchs (2010) added equal sign instruction to Pirate Math, in which students were provided additional instruction in the meaning of the equal sign as a relational symbol that means “the same as,” and not simply as a signal for calculation (Powell, Berry, & Barnes, 2020). This understanding of the true meaning of the equal sign as that of “balance”—that one side of the equation is the same as the other side—is a key understanding for algebra and word problems. When contrasted with groups of third-grade students who received standard Pirate Math or school-designed business-as-usual instruction, Powell and L. S. Fuchs (2010) found that students in Pirate Math plus equal sign instruction outperformed both other groups in equation solving and word-problem solving. 
Furthermore, mediation analyses revealed that equal sign instruction was most likely responsible for students’ improvements in equation solving, which in turn was most likely responsible for students’ gains in word-problem solving. Powell extended Pirate Math further in Pirate Math Equation Quest (Powell, Berry, & Barnes, 2020; Powell, Berry, Fall, et al., 2021). The Equation Quest component adds instruction in pre-algebraic reasoning for solving equations and the meaning of the equal sign. Students learn to balance equations, using manipulatives, pictures, and equations, and to isolate X and perform the same calculation on both sides of the equal sign to determine its quantity. Comparing standard Pirate Math and Pirate Math Equation Quest
across 138 third graders with mathematics difficulties, Powell, Berry, and Barnes (2020) found that both Pirate Math groups outperformed a business-as-usual control condition. However, Pirate Math with the Equation Quest component resulted in superior improvement, with large effect sizes, for equation solving, equal sign understanding, and word-problem solving compared to students who received standard Pirate Math or were in the control condition. The effect sizes for word-problem solving in particular were very large (>1.5) compared to the control condition, and students in the Pirate Math Equation Quest condition demonstrated an effect size of 0.31 (educationally meaningful) over students who received standard Pirate Math on word problems. In a subsequent study, Powell, Berry, Fall, et al. (2021) further explored the effects of Pirate Math Equation Quest and standard Pirate Math, and included analyses of mediation effects of equal sign understanding and pre-algebraic reasoning. Among 304 third graders with mathematics difficulties, they found that standard Pirate Math and Pirate Math Equation Quest resulted in substantial effects for word-problem solving over instruction-as-usual control instruction (effect sizes of 2.44 and 2.66, respectively, which are considered very large). A sequential mediation pathway was observed whereby the Equation Quest component resulted in stronger equal sign understanding, which in turn was associated with stronger pre-algebraic reasoning on an open-equations measure, which in turn was associated with stronger word-problem solving at posttest. Pirate Math Equation Quest is available in individual student and small-group intervention formats at www.piratemathequationquest.com. In summary, state-of-the-art intervention for word-problem solving for students with mathematics difficulties involves schema-based instruction. Word-problem success is facilitated by proficiency with number combinations, procedural computation, and understanding the true meaning of the equal sign. This work has also been important in demonstrating that students in early and middle elementary grades can make sizable improvements in pre-algebraic reasoning through word-problem solving, which has significant positive implications for future success in algebra.
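To make the schema idea concrete, the three problem types can be written as simple meta-equations; the shorthand below is our own illustration, not the exact notation used in Pirate Math or Pirate Math Equation Quest.

$$\text{Total: } P_1 + P_2 = T \qquad \text{Difference: } \text{Bigger} - \text{Smaller} = D \qquad \text{Change: } \text{Start} \pm \text{Change} = \text{End}$$

For example, "Jo has 4 red pens and 3 blue pens; how many pens in all?" is a total problem ($4 + 3 = X$); "Ana has 9 stickers and Ben has 6; how many more does Ana have?" is a difference problem ($9 - 6 = X$); and "Sam had 8 marbles, gave some away, and now has 5" is a change problem ($8 - X = 5$). Because the unknown can occupy any position, the relational (balance) view of the equal sign is what allows students to solve for X wherever it appears: $8 - X = 5$ means that both sides name the same amount, so $X = 3$.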
Geometry and Measurement Geometry and measurement are seldom viewed as skills necessary for school survival like reading comprehension and mathematics computation, and it may be rare to receive a referral for difficulties specific to geometry or measurement. However, like algebra, they serve as gateway skills to accessing more complex forms of mathematics and are used heavily in engineering, technology, and science. Measurement involves whole- and rational-number understanding, and measurement is often a part of word problems. Geometry builds reasoning and justification skills (National Research Council, 2001). Geometry and measurement in middle and secondary grades overlap with algebra, as equations are used for finding the length of sides, area, volume, and so on. Therefore, geometry and measurement can be important foundational skills in helping students gain access to postsecondary education and employment opportunities that they may not have otherwise had. It is likely that students experiencing persistent mathematics difficulties will also have problems in geometry and measurement; therefore, interventions in this domain may benefit their overall achievement in mathematics. Geometry instruction begins early in elementary school, where early difficulties with it can be detected (Dobbins et al., 2014), although scholars have noted inadequate attention to geometry in mathematics education (Clements & Sarama, 2011). Geometry instruction typically begins with the recognition of shapes and how they compare and
relate to each other, reasoning with shapes and their attributes, and later drawing and recognition of lines and angles. In subsequent grades, the focus moves to skills such as graphing points on a coordinate plane to solve problems; measurement and problem solving in determining properties such as perimeter, area, volume, radius, circumference, and slope; and measurement of sides. Unlike other areas described earlier such as early numeracy and word-problem solving, the field has not developed off-the-shelf intervention programs targeting geometry. The one exception is SpringMath, which includes instructional routines and interventions for geometry and measurement skills ranging from elementary through secondary grades, and even advanced skills such as solving equations for intercept and slope. A number of studies have examined strategies and approaches for improving the geometry skills of students with mathematics difficulties, and effects have been summarized in research syntheses. Bergstrom and Zhang (2016) reviewed 32 studies that investigated geometry interventions for students in K–12 with and without math disabilities, and Liu et al. (2019) reviewed nine studies that investigated geometry interventions for students with learning disabilities. Overall, given the broad range of intervention types, skills targeted, and inconsistent quality of the studies, conclusions were limited regarding the characteristics of effective practice. Interventions, especially for students with difficulties or disabilities, tended to be limited to angle recognition, perimeter, area, and volume problems. Features associated with stronger effects included the use of multiple representations (i.e., concrete visuals or manipulatives), the concrete–representational–abstract framework, skills modeling, and explicit instruction. Bergstrom and Zhang (2016) observed mixed effects for integrating technology in geometry interventions; however, Liu et al. (2019) found that technology was incorporated in most studies (perhaps indicating an increasing trend in its use) through video modeling or computer programs, and that interventions with technology were associated with improvement in students’ geometry skills and understanding. In the absence of established and available intervention programs (with the exception of SpringMath), readers can consult texts such as Direct Instruction Mathematics (Stein et al., 2018) and Teaching Elementary Mathematics to Struggling Learners (Witzel & Little, 2016) for information on strategies and instructional routines. Recent studies are also informative. Satsangi and colleagues published several studies of geometry interventions for students with mathematics learning disabilities (most in middle school grades; Satsangi & Bouck, 2015; Satsangi, Hammer, et al., 2019; Satsangi, Billman, et al., 2020). Their studies demonstrated the potential for education technology to aid geometry intervention, such as the use of virtual manipulatives from the National Library of Virtual Manipulatives (http://nlvm.usu.edu), and the use of video modeling in which a skill was demonstrated visually, explained with explicit instruction, and key aspects were highlighted with colored markers.
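As a small worked example of the measurement-and-equation work described above (our own illustration, not an item from any of the studies cited), suppose a rectangle has perimeter $P = 2l + 2w$ with $P = 24$ and $l = 7$:

$$24 = 2(7) + 2w \quad\Rightarrow\quad 2w = 24 - 14 = 10 \quad\Rightarrow\quad w = 5.$$

Within a concrete–representational–abstract sequence, the same problem might first be modeled with a drawing or a grid of unit squares before moving to the equation.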
Pre‑Algebra and Algebra Interventions specific to pre-algebra and algebra (hereafter referred to as algebra) are not common, but examples exist and work is being done in this area. Algebra represents a culmination of skills and conceptual knowledge learned in earlier grades, including whole and rational numbers, calculation and computation, and problem solving. As with most complex academic skills, difficulties in algebra may be related to (or completely caused by) deficits in the skills that underlie it. Thus, intervention for students experiencing algebra difficulties is likely to require improvement in multiple mathematics skills.
The assessment and any intervention activities implemented prior to this point should identify foundational skill areas that are problematic. Nevertheless, intervention specifically targeting algebra skills is warranted for some students, perhaps in tandem with the remediation of other related skills that are problematic. Although options are limited, there are existing programs that target algebra skills. SpringMath includes intervention techniques for a range of basic and advanced algebra skills. Word-problem-solving interventions like Pirate Math and Pirate Math Equation Quest discussed earlier include algebraic concepts, reasoning, and solving equations; however, the age range for which the programs are aimed may limit their use with adolescents. Studies have indicated intervention characteristics and components that may be important for algebra intervention. A meta-analysis by Rakes et al. (2010) reviewed 82 studies on improving instruction in algebra. The results highlighted the importance of improving students’ conceptual understanding; interventions that included components to improve conceptual understanding demonstrated average effect sizes almost double the size of effects for interventions that were focused only on procedural knowledge or skills. Lee et al. (2020) systematically reviewed studies that implemented interventions for algebraic concepts and skills with students with mathematics difficulties in middle and high school grades. Effective interventions tended to include (1) multiple representations, such as the use of manipulatives (e.g., base-10 models, object counting) or symbolic representations of the problems; (2) a sequence of examples that increased in difficulty and variations; (3) explicit instruction; and (4) the use of student verbalizations, which included having students think aloud and explain the problem and their solution to a peer and/or to the instructor. Several other intervention studies have investigated techniques and activities for algebra interventions with struggling learners. Strickland and Maccini (2013a, 2013b) used graphic organizers as part of the concrete–representational–abstract framework and observed improvements in students’ equation solving and transition to solving problems with abstract notation only (i.e., no graphic organizer). Namkung and Bricko (2020) investigated the effects of Mystery Math, a brief (15-session) supplemental intervention targeting conceptual and procedural understanding of algebra equations, with sixth graders with mathematics difficulties. Mystery Math includes explicit instruction and conceptual activities with manipulatives and visualizations to build knowledge of equations, develop vocabulary knowledge specific to algebra (e.g., expression, variable, order of operations), and improve fluency with whole-number computation. Results indicated that students assigned to the Mystery Math condition outperformed students in the control condition on posttest measures of equation concepts and algebra-specific vocabulary. However, effects on distal measures of equation solving were not observed. In a series of studies, Satsangi and colleagues used concrete and virtual (i.e., computer-based) manipulatives in explicit instruction of multistep equation solving for secondary students with learning disabilities (Satsangi, Bouck, et al., 2016; Satsangi, Hammer, & Evmenova, 2018; Satsangi, Hammer, & Hogan, 2018). 
For example, studies made use of balance scales with counting chips, which help demonstrate the need to balance both sides of the equal sign in solving an algebra equation. The virtual manipulative conditions likewise used balance scales from the National Library of Virtual Manipulatives application (http://nlvm.usu.edu). Satsangi et al. (2016) noted the potential for virtual manipulatives to benefit older students as they offer ease of access and are often more age-appropriate than materials used with younger students.
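As a brief illustration of the balance idea these manipulatives are meant to convey (a generic worked example of ours, not one taken from the studies cited above), a multistep equation is solved by performing the same operation on both sides so that the two sides remain equal:

$$3x + 4 = 19 \;\Rightarrow\; 3x + 4 - 4 = 19 - 4 \;\Rightarrow\; 3x = 15 \;\Rightarrow\; \frac{3x}{3} = \frac{15}{3} \;\Rightarrow\; x = 5.$$

On a balance scale, removing four unit chips from each pan and then splitting what remains into three equal groups models the same two steps.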
Summary: Mathematics Interventions This is an exciting time in mathematics intervention research. Research has made considerable strides in understanding the features and characteristics of effective strategies and interventions, and the availability of intervention programs for mathematics has grown considerably. Although there is much work to be done, a number of scholars are engaged in high-quality research programs and are demonstrating exciting advancements in this area. Overall, research has revealed that key features of interventions across mathematics areas include (1) explicit and systematic instruction; (2) frequent practice with a range of examples; (3) establishing fluency with number combinations and procedural computation; and (4) use of the concrete–representational–abstract framework to reinforce conceptual understanding and, in particular, representing problems visually with models and manipulatives. Innovation in mathematics interventions will occur rapidly across the next 10 years and will offer new options for addressing skill difficulties.
INTERVENTIONS FOR WRITING SKILLS Over the last two decades, researchers such as Karen Harris, Steven Graham, Virginia Berninger, and others have conducted extensive work developing and evaluating intervention programs for improving students’ writing skills. Research and scholarship have also included a series of comprehensive systematic research reviews and meta-analyses on the characteristics of writing difficulties, and the effects of strategies and programs to improve writing skills across grades and levels of achievement (e.g., Gillespie & Graham, 2014; Graham, 1982; Graham & Harris, 2003, 2009; Graham, Harris, et al., 1991; Graham, Collins, et al., 2017; Graham & Perin, 2007; Graham, Harris, & Troia, 2000; Graham, MacArthur, et al., 1992; Graham, Olinghouse, et al., 2009; Mason et al., 2002; Santangelo et al., 2008; Sexton et al., 1998). The results of this work form the basis for this section. In Chapter 2, we reviewed evidence indicating that writing difficulties often include underdeveloped skills in multiple areas. Knowing how to develop an intervention that targets the right skills for an individual student depends on the data collected across the assessment.
Connecting Writing Assessment to Intervention Figure 6.9 provides a way to connect the assessment with intervention, in the context of the keystone model of writing discussed in Chapter 2. Although skill difficulties are multiple and pervasive, writing difficulties can generally be categorized into two primary domains: problems with transcription, which refers to fluency in handwriting and spelling that determine how efficiently students can put words to paper; and problems with composition, which includes skills in planning, drafting, organizing, maintaining coherence, and revising that determine writing quality. In Chapter 4, we discussed the importance of assessment that identifies whether a student’s writing difficulties are situated to a greater degree in one of the domains. More specifically, it is important to determine the extent to which the student’s writing skills are impaired by problems with transcription, which negatively affect higher-order writing skills in ways similar to how automaticity with basic skills in reading and mathematics facilitates higher-order processes in their respective domains. This will help determine whether transcription skills should be
FIGURE 6.9. Connecting assessment to intervention in the keystone model of writing. [The figure maps assessment findings to intervention targets: if assessment reveals that difficulties involve transcription (handwriting or typing and spelling), the targets are handwriting and spelling instruction and practice to build efficiency (typing if relevant), strategies for motivation and persistence, and language (vocabulary) as needed; if difficulties involve composition (planning, drafting, grammar, syntax, revising) and self-regulation/motivation, the targets are strategies to improve planning, organization, output, and revising, along with strategies for motivation and persistence, and language (vocabulary) as needed. Oral language and reading underlie both domains, and accurate, efficient transcription supports composition and overall writing quality.]
targeted before, or as a part of, intervention for composition. Intervention for composition difficulties is appropriate when skills in transcription are adequate, and it can also be implemented in conjunction with components that improve transcription fluency. The student’s language skills (and relatedly, background knowledge), as well as their motivation to write and ability to maintain their effort, are components implicated in both transcription and composition, and therefore may be targets of intervention components to support writing improvement.
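The branching logic of Figure 6.9 can also be expressed as a short sketch. The function and argument names below are our own simplification of the keystone model for illustration, not part of any published protocol, and in practice the decision rests on the assessment data described in Chapter 4.

def writing_intervention_targets(transcription_adequate, needs_language_support, needs_motivation_support):
    """Illustrative sketch of the keystone-model decision shown in Figure 6.9."""
    targets = []
    if not transcription_adequate:
        # Build accuracy and efficiency in handwriting/typing and spelling first,
        # or alongside composition work if the student can write simple sentences.
        targets.append("handwriting and spelling instruction and practice")
    else:
        # Transcription is adequate, so target higher-order composition skills.
        targets.append("strategies for planning, organizing, drafting, and revising")
    if needs_motivation_support:
        targets.append("strategies for motivation and persistence")
    if needs_language_support:
        targets.append("language (vocabulary) support")
    return targets

# Example: a student with slow, inaccurate spelling who struggles to sustain effort
print(writing_intervention_targets(False, False, True))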
Evidence‑Based Components of Effective Writing Interventions Several research syntheses and meta-analyses have summarized the evidence base for interventions aimed at improving students’ writing skills across grade levels. These reviews have helped identify strategies and components of interventions associated with stronger effects than others.
Elementary Grades Graham et al. (2012) reviewed 13 interventions that had been investigated in at least four studies and were implemented with students in grades 1–5. Their review observed that most interventions addressed multiple components of writing, and a variety of strategies and skill targets were effective in improving students’ writing skills. The one exception, however, was interventions focused only on grammar instruction, which did not yield appreciable improvement in students’ writing. Given that poor writers tend to struggle with basic transcription skills, as well as very low production and planning, it makes sense that an intervention that targets only grammar would be insufficient for improving writing skills. In contrast, the intervention characteristics that were effective in improving elementary students’ writing, with large effect sizes, included explicit teaching that focused on writing strategies, which involved teaching students skills in planning, drafting, and revising. Most of the studies of strategy instruction used the self-regulated strategy development (SRSD) model (Harris & Graham, 1996, 2016), a comprehensive, multicomponent intervention approach that covers multiple facets of the writing process, which will be described in more detail below. Of all approaches, SRSD was associated with the largest effects. Additionally, the review observed that the addition of self-regulation components—teaching students to set goals for their writing and self-assess their writing products—improved outcomes over instruction without self-regulation components. Other beneficial aspects included peer assistance (i.e., when students worked in pairs or small groups on planning, drafting, or revising), teaching students about text structure, prewriting activities that supported planning and building knowledge before writing, and transcription (handwriting and spelling). A more recent “best-evidence” review (i.e., only high-quality studies based on a set of criteria were included) by McMaster et al. (2018) focused on interventions that targeted early writing skills, in which interventions had to include transcription skills (handwriting or spelling). Their review found that transcription instruction had significant benefits on the quantity and quality of students’ writing, and the authors concluded that transcription should be included as a component in interventions for struggling writers. They also observed that the SRSD model was highly effective for students with early writing difficulties, particularly the strategies used in SRSD that target planning and strategies specific to writing genre.
Adolescents Graham and Perin (2007) conducted a meta-analysis of interventions for students in grades 4–12. Similar to the findings of Graham et al. (2012) with elementary students, Graham and Perin observed that explicit strategy instruction that targeted skills in planning, drafting, and revising text was associated with the strongest effects, and of these approaches, SRSD interventions had the largest effects of all. Other more effective components included explicit instruction in summarization, peer assistance and support, and setting goals for writing products. In contrast, interventions that targeted only grammar were generally not effective in improving students’ writing. Additionally, instruction or professional development that focused on a process approach to writing instruction was less effective; the process approach refers to less explicit forms of writing instruction in which students are provided with minimal teaching and extended opportunities to write freely, interact with others about their writing, and engage in self-reflection. Another component that was associated with only small effects was study of models, which referred to having students review and study one or more examples of exemplary writing, and then try to emulate it in their own writing. Given the importance of explicit and systematic instruction observed across academic areas, it is not surprising that less explicit forms of instruction such as the process approach or studying models were less effective.
Students with LD across Grades Gillespie and Graham (2014) conducted a meta-analysis of writing interventions implemented for students with LD across grades 1–12. Their findings were similar to the previous reviews; the most effective writing instruction included explicit instruction in strategies for planning, drafting, and revising. The strongest effects of all were associated with strategy instruction that used the SRSD approach. Other strategies that tended to be effective were dictation and teaching students to set goals for their writing. Efforts aimed at helping students learn writing processes such as planning and revising were only effective if they included direct instruction, as opposed to simply providing students with models or writing opportunities without instruction. In summary, multiple research syntheses to date that have included over 100 experimental studies of writing interventions reveal that the most effective techniques involve explicit instruction in planning, drafting, and revising. Strategies that embed self-regulation within instruction are effective, as are components that target transcription (especially for students in early grades). Most notably, SRSD is recognized as the intervention approach with the most evidence for improving writing skills across grades. However, before talking about SRSD more specifically, it is worthwhile to discuss intervention in transcription, which should be considered primary when students demonstrate difficulties with this aspect.
Transcription Interventions It is important to consider transcription interventions first because transcription represents a potentially significant limiting factor of writing output and quality (Limpo et al., 2017). Students will be significantly constrained in drafting high-quality written content if they are severely limited in their ability to get words onto the page. Even among students in the seventh and eighth grades, Limpo et al. (2017) found that better handwriting fluency was associated with stronger writing planning, which in turn was predictive of
overall writing quality. In fact, developers of SRSD recommend that students should be able to write simple sentences (e.g., “The dog ran”) before starting SRSD (Harris et al., 2008). Transcription need not be perfect; if students can write only very short and simple sentences, they can start interventions like SRSD while transcription is addressed. But if students have severe difficulties with handwriting and spelling that impair their ability to form even simple sentences, then transcription should be addressed first. HANDWRITING INSTRUCTION TO IMPROVE WRITING
Graham, Harris, and Fink (2000) randomly assigned first-grade students with handwriting difficulties to one of two conditions: (1) instruction and practice writing letters, words, and sentences to build accuracy and fluency, or (2) a phonological awareness instruction control condition. Handwriting practice took place in 15-minute sessions, 3 times per week, for 9 weeks. At posttest, students in the handwriting practice condition demonstrated stronger and statistically significant performance differences compared to the control group in letter writing legibility and fluency, measures of writing and compositional fluency, and students’ attitudes about writing. Improvements in writing fluency remained at a maintenance assessment 6 months after the intervention. However, differences between the groups in writing quality were not observed. Overall, results indicated that handwriting improvement is a causal factor in learning to write. Graham and Harris (2006) provided a description of handwriting instruction and practice strategies that can be used for intervention. SPELLING INSTRUCTION TO IMPROVE WRITING
Graham et al. (2002) randomly assigned second graders with spelling difficulties to either (1) spelling instruction and practice, or (2) mathematics control instruction. Spelling instruction took place for 20 minutes per session, 3 times per week, for 16 weeks and included direct instruction in spelling letter patterns and words, spelling practice, and word sort and word identification activities. At posttest, compared to students who received math intervention, statistically significant differences were observed that favored students in the spelling instruction condition on measures of spelling and writing fluency (with an effect size of 0.78, indicating strong effects). However, differences were not observed on measures of story length or quality, and effects on spelling or writing fluency were not maintained 6 months following intervention. As discussed in the section on reading interventions in this chapter, CCC offers a technique for improving students’ spelling skills and providing practice opportunities. Strategies for targeting spelling are discussed in detail by Harris et al. (2017), which include instructional routines and examples. Additional research-based strategies for spelling practice are provided here. Add-a-Word, described by Pratt-Struthers et al. (1983), involves having students copy a list of 10 words, cover each word and write it a second time, then check each word for correct spelling against the teacher’s list. Misspelled words are repeated and remain on the list. If a word is spelled correctly on 2 consecutive days, the word is dropped from the list and replaced with a new word. Results of this study showed that the percentage of words correctly spelled by all nine of the fifth- and sixth-grade students with LD increased during a creative writing assignment using this procedure. McLaughlin et al. (1991) replicated the Add-a-Word program with nine 12- to 14-year-old students. Comparing the program to the more traditional approach of giving
a Monday spelling pretest, followed by an end-of-the-week spelling posttest, students’ overall accuracy in spelling was higher during the Add-a-Word program than during any other form of instruction. Similar findings for the Add-a-Word program were reported by several other investigators (McAuley & McLaughlin, 1992; Schermerhorn & McLaughlin, 1997; Struthers et al., 1989, 1994). Delquadri et al. (1983) investigated the effects of classwide peer tutoring on spelling performance in a regular third-grade classroom. Students were awarded points for correct performance during tutoring. In addition, demonstration and feedback were employed when errors were made during tutoring sessions. Significant reductions in the numbers of errors made on weekly Friday spelling tests were evident when the peer-tutoring procedures were in effect. Others have found that peer tutoring is a way to ensure sufficient practice in spelling (DuPaul et al., 1998; Mortweet et al., 1999). Madrid et al. (1998) showed that peer-tutored spelling resulted in greater increases in performance when compared to teacher-mediated instruction. Peer-tutoring procedures can be implemented in small groups or as a supplemental support strategy for students. Shapiro and Cole (1994) developed the self-management spelling (SMS) procedure as a practice strategy. The procedure requires that the list of spelling words be recorded on a digital device, and that the student practice their spelling during some independent work time. The recording provides the word to be spelled and a sentence using the word if needed, pauses for about 7 seconds, and then says and spells the word. Students are instructed to listen to the word and sentence, pause the device, write their response, write the correct spelling when they hear the word, and then immediately compare the word they have written to the correct spelling. Students then move through the list of words until completion. After they complete their practice, students chart their performance. This technique is repeated until students are able to achieve success, operationalized as over 90% of the words presented are spelled correctly, at which time they inform the teacher that they are ready to take their test. The test itself can be placed on a recording device with the correct spelling of the word removed, and students are not permitted to stop the recording. The advantages of SMS are that students can work on the activity independently, manage their own time, receive immediate corrective feedback, and see the outcomes of their work. COMBINED HANDWRITING AND SPELLING INSTRUCTION TO IMPROVE WRITING
Graham et al. (2018) examined the effects of combining handwriting and spelling instruction for first-grade students with low achievement in both areas. Students were randomly assigned to either (1) handwriting and spelling instruction and practice, or (2) phonological awareness control instruction. All students received one-to-one instruction in 20-minute sessions, 3 times per week, for 16 weeks. At posttest, compared to students in the phonological awareness condition, students in the handwriting and spelling condition demonstrated significantly stronger outcomes on measures of handwriting fluency and legibility, spelling, and composition vocabulary (i.e., proportion of unique words with seven or more letters in their passage), but not on composition length or quality. A research review by McMaster et al. (2018) found that several studies demonstrated improvements in students’ writing skills, following interventions for handwriting and/or spelling. Furthermore, in contrast to the findings of Graham et al. (2018), several studies found that handwriting and spelling improvements were associated with improved writing quality. Expectedly, the average effects of transcription interventions were stronger for writing fluency and quantity compared to the effects on writing quality; however, the
results provide evidence of the facilitative effects of transcription fluency for higher-order composition skills. TYPING AND WORD‑PROCESSING PROGRAMS TO IMPROVE WRITING
Research has also investigated whether students with writing difficulties benefit from writing on computers, as opposed to writing by hand. Overall, research reviews have revealed that typing and using word-processing programs were associated with benefits in students’ quality of writing, and effects were moderate (Graham & Perin, 2007; Graham et al., 2012). However, Connelly et al. (2007) found that for 5- and 6-year-olds, keyboarding output lagged significantly behind their handwriting output. Overall, the findings should be interpreted as follows: Younger students, especially those with difficulties in handwriting, may benefit from typing instruction and writing on computers. However, the use of a computer alone is likely to be insufficient as a recommendation for writing intervention—for that, instruction in strategies for planning, drafting, and revising content is needed. SUMMARY: TRANSCRIPTION INTERVENTIONS
In summary, research indicates that for students with writing difficulties, intervention targeting transcription skills (i.e., improving legibility and fluency in handwriting and spelling) improves handwriting and spelling, and may also benefit students’ writing output and writing quality. These effects can be viewed as removing barriers to quality writing. Although the effects of transcription interventions did not always transfer to improving the quality of students’ writing, such intervention helped students write compositions more fluently and, in one study, improved the richness of the vocabulary that students used in their writing. Clearly, additional instructional components are needed to improve overall composition quality, but it is notable that improving basic foundational skills in transcription is an important facilitator of stronger overall writing.
Composition and Multicomponent Writing Interventions: SRSD As demonstrated repeatedly across the literature syntheses and meta-analyses, the writing intervention framework with the strongest empirical support to date is SRSD. SRSD in writing is the result of work by Karen Harris and Steve Graham and their colleagues spanning several decades. SRSD has been successfully implemented with whole classes, small groups, or individual students, and can be used across genres and purposes for writing. Instruction is typically implemented 3 times per week for 20 to 60 minutes per session (shorter sessions for younger students). Graham and Harris (2003) noted that elementary students can complete SRSD in 8 to 12 sessions that are 30 to 40 minutes in length. Detailed descriptions of SRSD are provided in several publications (Graham & Harris, 2003; Harris & Graham, 1996; Harris et al., 2003, 2008), and resources for training and implementation are available from the developers (srsdonline.org) and the Vanderbilt IRIS Center (https://iris.peabody.vanderbilt.edu/module/srs/#content). An overall summary, based on these publications and resources, is provided here. SRSD has three overall goals: (1) build students’ knowledge about writing and their skills in planning, drafting, revising, and editing; (2) support students’ development of their abilities to monitor and manage their own writing; and (3) improve students’ attitudes about writing and their writing abilities. The integration of self-regulation aspects
throughout the intervention is particularly notable and was highly intentional, given the difficulties that students with academic and behavioral difficulties have with regulating their attention, cognitions, planning, impulsivity, seeking help, and sustaining effort across complex and challenging tasks. Handwriting and spelling components can be added and integrated within SRSD as needed, but the only prerequisite transcription skills for SRSD are that students should be able to write simple sentences (Harris et al., 2008). SRSD involves six stages, but the stages refer to a general format and are recursive (i.e., stages can be retaught or revisited, as needed). Some stages may be skipped if they are not needed for individual students. 1. The focus of Stage 1 is Developing and Activating Students’ Vocabulary and Knowledge about writing. This includes developing their understanding of terms such as setting and character. The basis for the self-regulation components is targeted in this stage as well, which includes teaching students to make self-statements that they will later utilize to help regulate their strategy use, effort, and persistence while writing. Instruction also teaches students to recognize self-statements that can be detrimental, such as “I’ll never be good a writing,” and ways to reframe those thoughts to be more adaptive. 2. In Stage 2, Discuss It, the teacher explains the strategies that students will learn and how they will be helpful. The strategies involve planning, drafting, revising, and editing. Most strategies in SRSD utilize acronyms to facilitate learning and recall, and as each strategy is explained, the teacher describes how the strategies can be applied across writing situations and contexts. Students’ attitudes about writing are discussed as well. Graphic organizers are used to help students record and organize their thoughts. The strategies can also be explained in the context of the students’ current writing performance to explain how they will help students’ writing. For example, some of the strategies include: a. The POW acronym (Pick my idea, Organize my notes, Write and say more) serves as an overarching framework for guiding students before, during, and after writing, and is used as the foundation across the different types of writing targeted in SRSD (e.g., writing a story or persuasive essay). b. Story writing is supported with two acronyms that refer to story parts: WWW (Who is the main character? When does the story occur? Where does the story take place?) and W = 2 H = 2 (What do the main character and other characters do or want to do? What happens then and with the other characters? How does the story end? How do the main character and the other characters feel?). Thus, POW is used with the WWW and W = 2 H = 2 acronyms. A graphic organizer is used for students to record and organize their thoughts for each aspect of the acronym. Because stories come more naturally to students, story writing is targeted first, especially for younger students. c. The TREE acronym is used to support writing a persuasive essay: Topic sentence (tell what you believe), Reasons (three or more: Why do I believe this? Will my readers believe this?), Ending (wrap it up), and Examine (Do I have all my parts?). d. Other mnemonics can be taught depending on students’ age, writing skills, and situations. For example, Ray et al. (2019) taught high school students a mnemonic for writing college entrance essay examinations.
3. In Stage 3, Model It, the teacher demonstrates use of the strategies by collaboratively composing a piece of writing with the students. The teacher “thinks aloud” by systematically making use of self-statements that support students’ planning and problem solving, maintaining their attention, evaluating their work and correcting errors, self-control, and reinforcement. They collaboratively graph elements of a composition and collaboratively revise the composition to make it better. Students also learn to write their own self-statements. Mnemonics, cue cards, and graphic organizers are explicitly used throughout the demonstration.
4. Stage 4, Memorize It, involves instruction for students to memorize the steps and strategies defined by the mnemonics. Cue cards, quizzes, and games are used to support students in memorizing. Teachers emphasize that memorization of the strategies is key to becoming effective, independent writers.
5. In Stage 5, Support It, the teacher gradually fades use of the visual prompts and graphic organizers, and supports students as they develop skills in using the strategies and self-statements. Students are encouraged to add their own versions of the process and strategy mnemonics and graphic organizers to their work to support their writing. Teachers may revisit previous stages as needed. They might also continue to write collaboratively with the students to draft and revise, and collaborative writing with peers (including peer evaluation for revising and editing) is supported as well. Overall, the supports are faded as students become successful in using them, and teachers emphasize how to generalize use of the strategies to other situations. SRSD developers note that Stage 5 often lasts the longest, especially for students with significant writing difficulties, because it represents the gradual transition from teacher support toward increasing student independence.
6. Stage 6 is Independent Performance, in which the teacher monitors students as they use the strategies themselves. Teachers review and demonstrate as needed and continue to discuss generalization and bolstering of students’ attitudes about writing.
More resources and information for implementing SRSD are available from the developers at srsdonline.org. In summary, SRSD is the intervention with the strongest empirical support for improving writing skills. It is the result of extensive development work (Harris & Graham, 1996, 2016; Harris et al., 2003, 2008), refined across 20+ years of experimental research, during which it has demonstrated significant benefits in improving the writing skills of students across grade levels. It comprehensively addresses strategies for before, during, and after writing, and can be applied to any type of writing (e.g., stories and narrative writing, persuasive essays, expository and informational passages).
SUMMARY AND CONCLUSIONS: SPECIFIC AND INTENSIVE INTERVENTIONS Although there are still areas in which intervention development can improve, extensive and comprehensive options now exist for intervening on students’ skills in reading, mathematics, and writing. These interventions range from relatively simple strategies that can be applied within and across academic domains, to comprehensive intervention programs. Although not exhaustive, this chapter was an attempt to summarize some of
the interventions with the strongest evidence of effectiveness and utility with struggling learners. As readers come across other interventions and review intervention recommendations, it is important to keep in mind what consistently appear to be the common characteristics of effective interventions: (1) explicit instruction that models new skills, teaches directly, and scaffolds instruction; (2) frequent opportunities for students to actively respond and practice with teacher support and feedback; and (3) ensuring or targeting accuracy and automaticity with critical foundational skills. Missing from this review of intervention techniques are the procedures more commonly viewed as cognitive and intellectual processing approaches to remediation. For example, we did not include recommendations for aligning instruction or curriculum materials to a student’s hypothesized learning modality, given such an approach’s lack of benefit (Kavale & Forness, 1987; Pashler et al., 2009). We did not review interventions targeting students’ working memory because research indicates they are seldom effective in improving overall academic skills (Melby-Lervåg & Hulme, 2013; Melby-Lervåg et al., 2016). We did not review interventions that exclusively target executive functioning because there is no convincing evidence these interventions alone improve academic skills (Jacob & Parkinson, 2015; Kassai et al., 2019). We did not review interventions that target sensory and perceptual processes or motor skills, based on extensive evidence that such approaches are ineffective (Kavale & Mattson, 1983; Leong et al., 2015). Recommendations for interventions of these types, common in the past and still present today, are unlikely to result in improved academic achievement. Instead, what tends to be more effective are approaches that target skills in reading, mathematics, and writing directly. These approaches teach skills that are relevant to the student’s academic problem and provide extensive opportunities to practice such skills. Emphasizing a functional approach to academic problems, this perspective recognizes that proficiency in reading, mathematics, and writing is built on automaticity and efficiency with basic “keystone” skills in reading words, completing mathematics calculations, and writing sentences, respectively. Intervention that directly targets deficits in these keystone skills offers the best foundation for success in instruction targeting higher-order skills and performance. Practitioners now have several resources for identifying effective intervention strategies, as well as for locating and purchasing support materials for their implementation. The What Works Clearinghouse reviews and disseminates information to educators regarding effective strategies for academic intervention (ies.ed.gov/ncee/wwc). Similarly, the National Center on Intensive Intervention conducts reviews of interventions and assessments, and summarizes the evidence for each in easy-to-read charts and tables (intensiveintervention.org). The IRIS Center at Vanderbilt University (https://iris.peabody.vanderbilt.edu) offers numerous self-guided tutorials, video demonstrations of implementation, and comments from program authors on implementing intervention strategies and techniques. Efforts to catalog and review effective intervention strategies have also been provided by similar groups at Johns Hopkins University (www.bestevidence.org) and the Center on Instruction (www.centeroninstruction.org). 
The Institute of Education Sciences of the U.S. Department of Education has published an excellent series of practice guides, derived from the empirical research and designed to guide instructional practice in interventions in reading, mathematics, and writing, as well as in assessment (ies.ed.gov/ncee/wwc/publications/practiceguides). These documents provide resources to guide practice in many areas of intervention delivery. NASP's Best Practices in School Psychology, edited by Harrison and Thomas, as well as the Handbook of Learning Disabilities, edited by Swanson et al. (2014), offer further volumes of
information on empirically supported interventions for students with learning difficulties and disabilities. Although the field continues to uncover "what works," it is also true that what works with one particular student may not work with the next student, even though their difficulties appear identical. Labeling an intervention "evidence-based" does not mean it is universally effective (L. S. Fuchs, D. Fuchs, et al., 2008). Some students will not respond to even the most powerful interventions. Although data- and hypothesis-driven intervention selection is a key aspect of the problem-solving process, whether that intervention remains in place depends on the progress monitoring data we collect.
CHAPTER 7
Step 4: Progress Monitoring
Regardless of which instructional modifications or interventions are implemented in Step 3, the next step is to continuously examine whether the intervention results in improved academic performance. Setting a meaningful academic performance goal, and collecting brief, repeated measures over time to determine whether the student is making progress toward their goal, are referred to as progress monitoring. Progress monitoring is a type of formative assessment (i.e., assessment collected while instruction is ongoing) that helps indicate when effective intervention programs should continue, when intervention programs should be adjusted or changed, or when intervention may no longer be needed. Failing to evaluate a student's academic progress after starting an intervention program is tantamount to undergoing months of diagnostic tests for a medical condition, and being prescribed intensive pharmacotherapy by a physician, but never checking back with the physician to see if the therapy is working. Progress monitoring is another way to consider the unique needs of each individual student. Indeed, progress monitoring is a primary component of any system of problem-solving assessment. One can argue that out of all the assessment activities discussed in this text, progress monitoring is the most important.
CURRICULUM-BASED MEASUREMENT

Overall, when teachers collect data on an ongoing basis and use those data to make decisions, students benefit. But how and why the data are collected matters. In the 1970s, Stanley Deno, Phyllis Mirkin, and other special education researchers and educational psychologists at the University of Minnesota developed data-based program modification (Deno & Mirkin, 1977), which later became known as curriculum-based measurement (CBM; Deno, 1985; L. S. Fuchs, Deno, & Mirkin, 1984; L. S. Fuchs & D. Fuchs, 1986a). CBM was designed as a progress monitoring framework in which brief skill probes, reflective of students' overall achievement
in an academic domain (reading, mathematics, writing, and spelling), were administered on a regular basis while an intervention was ongoing. Data were regularly interpreted to identify whether students were on track to meet important objectives or annual goals, thereby signaling the need for instructional changes when progress was below expectations. CBM will be extensively discussed across this chapter, as it represents the most well-researched and operationalized framework for monitoring student progress.

L. S. Fuchs, Deno, and Mirkin (1984) reported what is perhaps one of the most historically important and significant studies on the relationship between teacher monitoring of progress and student achievement. Special education teachers in New York City public schools were randomly assigned to either a CBM repeated measurement condition or a conventional measurement group. Those in the CBM group were trained to assess reading using 1-minute probes taken from the students' reading series. Data from these probes were graphed, and teachers applied a data-utilization rule that required them to introduce program changes whenever a student's improvement across 7–10 measurement points appeared inadequate for reaching their goal. Teachers in the conventional measurement group set IEP goals and monitored student performance as they wished, relying predominantly on teacher-made tests, nonsystematic observation, and scores on workbook exercises. Data were obtained on a CBM of reading (using third-grade reading material), the Structural Analysis and Reading Comprehension subtests of the Stanford Diagnostic Reading Test (Karlsen et al., 1975), the Structure of Instruction Rating Scale (Deno et al., 1983), a teacher questionnaire designed to assess progress toward reading goals and describe student functioning, and a student interview. Results of the study showed that children whose teachers used the CBM framework of repeated measurement and decision making made significantly better academic progress than students whose teachers used their own methods. These improvements were noted not only on the passage-reading tests, which were similar to the repeated measurement procedure, but also on decoding and comprehension measures. In addition, teachers using CBM procedures wrote more precise and specific goals for students, and their students showed significantly more knowledge about the progress they were making.

Extensive, subsequent research with CBM indicated that systematic, formative assessment and a framework for making data-based instructional adjustments were associated with stronger student achievement compared to when teachers did not use such a process (Filderman et al., 2018; Förster et al., 2018; Jung et al., 2018; Powell, Lembke, et al., 2020; Stecker et al., 2005). Studies have also shown that when teachers regularly collected CBM data and used a structured set of decision rules to interpret the data, they were more aware of students' progress and demonstrated greater structure in their teaching, wrote more specific instructional plans and goals, and tended to set more ambitious goals for their students (L. S. Fuchs, Deno, & Mirkin, 1984; L. S. Fuchs, D. Fuchs, & Hamlett, 1989; Stecker et al., 2005).

Readers should note that CBM is a measurement process, not an intervention. So why do students have better outcomes when their teachers monitor progress? Is it because students get more practice by being assessed more often? No.
Collecting data on an ongoing basis, and using them within a decision-making framework, helps teachers more readily recognize when instruction should be adjusted to boost students' progress. Said differently, the fundamental notion behind CBM is that when educators have frequent information on their students' progress and follow a framework for using the data to make decisions, they are more aware of student progress (or lack thereof) and thus are more likely to adjust instruction in ways that maximize student growth.
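To make this decision logic concrete, the sketch below shows one way a simple data-utilization rule could be expressed in code: fit a trend line to the most recent weekly scores and compare it to the aim line connecting the student's baseline to their goal. This is only an illustration; the scores, goal, and thresholds are hypothetical, and it is not a procedure prescribed by any published CBM system.

# Illustrative sketch of a CBM data-utilization rule; all numbers are hypothetical.

def trend_slope(scores):
    """Ordinary least-squares slope of weekly scores (score units per week)."""
    n = len(scores)
    weeks = range(n)
    mean_x = sum(weeks) / n
    mean_y = sum(scores) / n
    numerator = sum((x - mean_x) * (y - mean_y) for x, y in zip(weeks, scores))
    denominator = sum((x - mean_x) ** 2 for x in weeks)
    return numerator / denominator

def recommend(scores, baseline, goal, weeks_to_goal, min_points=8):
    """Flag an instructional change when the student's trend is flatter than the aim line."""
    if len(scores) < min_points:                      # e.g., wait for 7-10 data points
        return "keep collecting data"
    aim_slope = (goal - baseline) / weeks_to_goal     # growth needed per week to reach the goal
    if trend_slope(scores[-min_points:]) < aim_slope:
        return "consider adjusting instruction"
    return "continue the current intervention"

# Hypothetical CBM-R scores (words correct per minute), one probe per week
weekly_wcpm = [42, 44, 43, 47, 46, 48, 47, 49]
print(recommend(weekly_wcpm, baseline=42, goal=90, weeks_to_goal=30))

In practice, as discussed later in this chapter, such rules are applied alongside other data sources and involve additional considerations (e.g., the number of data points collected and consecutive points below the aim line) before instruction is changed.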
Distinguishing Terms: Progress Monitoring, CBM, and Data-Based Individualization

Readers may encounter several terms in the literature that may be used interchangeably; therefore, it is worthwhile to discuss how they overlap. Data-based individualization (DBI; Danielson & Rosenquist, 2014; D. Fuchs et al., 2014) refers to a framework for using progress monitoring and other sources of data to tailor instruction for students with difficulties and disabilities in learning or behavior. DBI is consistent with the assessment and intervention process we describe across this book. Formative assessment, and CBM in particular, are central to DBI. Essentially, DBI is a further operationalization of the CBM decision-making framework developed by Deno and colleagues. There are five steps in DBI, as outlined by the National Center on Intensive Intervention (NCII, 2013):

1. The teacher delivers an intervention program, which may include an "off-the-shelf" program that targets skills the student needs to improve.
2. The student's progress is monitored (often, with CBM) on an ongoing basis (i.e., once per week), and data are plotted on a graph and routinely examined.
3. When data indicate the student is not making desired progress, a diagnostic assessment is used to determine the student's specific needs and how intervention could better meet those needs.
4. The intervention is adapted according to the results of the diagnostic assessment.
5. Monitoring of student progress continues, and Steps 3–5 are repeated when necessary.

In short, terms such as CBM, progress monitoring, and DBI overlap. Progress monitoring refers generally to repeatedly measuring student performance during intervention, CBM refers to a specific framework for monitoring progress and decision making, and DBI describes how CBM and other data are used in a recursive process for tailoring instruction to maximize student growth.
CHARACTERISTICS OF CBM AND OTHER FORMS OF PROGRESS MONITORING

Progress monitoring can be used to evaluate both the general and specific skill outcomes of academic interventions.
General Outcomes Measurement

General outcomes measurement (GOM) is central to CBM (see Chapter 1, this volume, and L. S. Fuchs & Deno, 1991). It involves monitoring progress with measures in which performance is strongly indicative of achievement in the overall academic domain (reading, mathematics, writing). They can be viewed as measures of academic "vital signs." Change in these skills is reflective of overall improvements in achievement rather than growth in only a specific subskill. GOM measures (1) must directly assess academic skills that are relevant and important to improve, (2) must be administered frequently and repeatedly throughout the intervention period with probes that are equivalent in difficulty, and (3) must provide a score that can be charted and allow for visual analysis of graphed data (Deno, 1985; Marston & Magnusson, 1988; Marston & Tindal, 1995).
GOM is the primary approach to measuring progress in CBM. It uses a standardized, systematic, and repeated process of assessing students directly from material that reflects student outcomes across curricular objectives, and thus provides an index of students' progress toward important objectives at a given time. The most well-known, well-researched, and well-utilized GOM tool is CBM oral reading (CBM-R, also known as oral reading fluency, reading CBM, or passage reading fluency). As reviewed in Chapters 2 and 4, the rate at which students orally read from a passage of text is highly indicative of their overall reading achievement, given the importance of efficient, automatic word recognition for allowing reading comprehension to take place. Improvement in CBM-R is usually reflective of improvement in reading overall, especially for most students with reading difficulties. Additionally, CBM-R samples are easy to administer and score, and the words-correct-per-minute metric involved in scoring CBM-R can be readily charted to evaluate progress. Thus, for struggling readers, especially those who experience difficulties with word and text reading accuracy and efficiency (which characterize most students with reading difficulties), CBM-R represents a powerful measure for monitoring progress during reading intervention. CBM-R does not say everything about a student's reading achievement, but it was never intended to. It is meant to provide an overall index of the student's level of reading proficiency, and it performs remarkably well in that role.

To illustrate GOMs with another example, suppose the third-grade curriculum for mathematics computation indicates that, by the end of the school year, students should be proficient with adding and subtracting two- or three-digit numbers, with and without regrouping, and fluent with single-digit multiplication number combinations. A CBM mathematics computation GOM measure could include a series of 36 probes (36 weeks of school, 1 probe administered per week) consisting of items from all of these skill areas: two- and three-digit addition and subtraction problems, with and without regrouping, and multiplication facts. During the first few weeks of school, a student's performance on these probes may not be very high. However, as these skills are taught, student performance on the probes should improve, if the instruction is effective. After all, the material being taught is appearing on the test, and the student's performance should reflect the learning that is occurring. Lack of progress indicates that the current instruction is not meeting the student's needs.
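As an illustration of how such probes could sample the full set of year-end computation skills so that every weekly form is parallel in content, consider the following sketch. The skill areas, item counts, and item generators are hypothetical and are not drawn from any published probe set.

import random

# Hypothetical item generators for a third-grade computation GOM probe.
def addition_item(rng):
    return f"{rng.randint(10, 999)} + {rng.randint(10, 999)} ="

def subtraction_item(rng):
    a, b = sorted((rng.randint(10, 999), rng.randint(10, 999)), reverse=True)
    return f"{a} - {b} ="          # keep the minuend larger so answers are not negative

def multiplication_item(rng):
    return f"{rng.randint(2, 9)} x {rng.randint(2, 9)} ="

GENERATORS = [addition_item, subtraction_item, multiplication_item]

def make_probe(week, items_per_skill=8):
    """Build one mixed-skill weekly probe; every probe samples the same skill areas."""
    rng = random.Random(week)      # different items each week, same blueprint
    items = [gen(rng) for gen in GENERATORS for _ in range(items_per_skill)]
    rng.shuffle(items)
    return items

probes = [make_probe(week) for week in range(36)]   # one probe for each week of school
print(probes[0][:5])

Because each probe mixes items from every skill area taught across the year, a student's weekly scores can rise gradually as more of the curriculum is taught.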
Specific Subskill Mastery Measurement

Most recommendations for progress monitoring, including our recommendations in this text, will focus on GOM measures. They tend to provide the most straightforward way of monitoring student progress continuously across a period of time in a skill highly representative of overall achievement in that academic domain. However, there are situations in which information on a student's growth in a specific subskill that (1) underlies performance on the GOM and (2) is specifically targeted by the intervention is of value in evaluating a student's progress. Specific subskill mastery measurement (SSMM) refers to monitoring specific skills that are directly targeted by an intervention (e.g., Blankenship, 1985; Idol-Maestas, 1983), in contrast to more general skills measured by a general outcomes measure. For example, consider Hector, a third-grade student with reading difficulties who is receiving intervention to improve his decoding, especially words with common vowel combinations (e.g., oo, ou, ea, ai, oi), silent-e, and variable vowels that occur with r (i.e., r-controlled). Instruction in Hector's case will target each of these skills, in sequence. A GOM
measure, like CBM-R, would monitor his progress with the idea that improvements in these specific skills (as well as others) will ultimately translate to improved text reading over time. An SSMM approach, on the other hand, might monitor Hector's accuracy reading only words with the vowel combinations. Once he demonstrates mastery of this skill, SSMM might then shift to monitoring his accuracy reading words with silent-e, and after that skill is mastered, shift to monitoring accuracy reading words with r-controlled vowels. In this way, SSMM provides a clear index of whether the student acquires the skills targeted by the intervention. In contrast, change on the CBM-R GOM will not be as immediate because passage reading fluency requires accuracy and efficiency with lots of other kinds of words. Over time, aggregated improvement in the skills targeted will be expected to improve the student's overall word reading skills, thus translating to long-term improvement on CBM-R. Thus, a primary strength of SSMMs is their sensitivity; they can readily detect whether a student is acquiring the specific skills targeted by the intervention in the short term.

The limitations of SSMM can also be observed in Hector's example. SSMM requires shifting to new measures each time a skill is mastered, in contrast to monitoring with a GOM across the entire period of time. SSMM provides information on acquisition of a specific skill, but not whether the student transfers that skill to general situations in which it is expected. SSMM measures are often unique to the skills being targeted, meaning that users may have to develop their own SSMM tasks. Exceptions are situations in which early literacy measures like letter–sound fluency or phoneme segmentation fluency serve as an SSMM for a given reading intervention, or measures of number combination fluency serve as an SSMM for a mathematics intervention. Measures of these types are available as described later in this chapter; however, in other situations, users will have to create SSMMs themselves. Some intervention programs may include SSMM tools embedded or integrated within the program. SpringMath (sourcewelltech.org) is one example; the system generates subskill mastery progress monitoring probes based on the instruction a student receives, which allows the user to monitor progress in that skill, enter the data, and receive recommendations from the system about when instruction can move on to the next skill.
Should I Monitor Progress with a GOM, an SSMM, or Both?

Whether to use a GOM or SSMM approach to monitoring progress has been a frequently asked question over the years, and as with so many questions in our field, the answer is "it depends." L. S. Fuchs and D. Fuchs (1986a) conducted a meta-analysis of studies that used GOM or SSMM methods of measuring progress toward outcomes on students' IEPs. Analysis of these data showed that the largest effects corresponded to the type of measure. When progress toward broader, general, and long-term goals was measured with GOMs, students were more likely to show greater progress than when SSMMs were used. Likewise, when progress on more specific skills toward short-term goals was measured, SSMMs were more likely to reflect progress than measures of global performance. Although monitoring progress with GOM methods is more often recommended (L. S. Fuchs & Deno, 1991), the decision regarding whether to use a GOM or an SSMM should not be an either/or proposition. They absolutely can work together, and sometimes including SSMM adds significant value for instructional decision making over using a GOM alone. Obviously, this adds more to the assessment load for teachers, but the combination
of these two data sources can provide sensitive information on the effects of instruction and on students' progress toward important overall outcomes. SSMMs are particularly useful when intervention targets skills that are naturally more discrete, such as mathematics number combinations or procedural computations, or early literacy skills such as letter–sound correspondence. Across this chapter, we will discuss the situations in which it may be useful to consider a measure of specific subskills, in addition to a GOM, for monitoring progress.
Making Your Own Progress Monitoring Probes Is Not Necessary in Most Situations

Earlier in this text, we discussed why it is no longer necessary to develop progress monitoring measures specifically from the student's curriculum or program. There are several vendors that offer CBM tools as well as Web-based software options for storing data and generating reports, including Acadience, AIMSweb, Dynamic Indicators of Basic Early Literacy Skills (DIBELS), easyCBM, and FastBridge. Some of these tools are available free of charge for use with individual students, and several also have options for computer-based administration and/or scoring. Whenever possible, we recommend using published CBM tools. In addition to saving the time and effort of creating measures, practitioners can have more confidence in their overall psychometric properties (i.e., reliability, validity, and the equivalence in difficulty of the probes in the set) for progress monitoring decisions, compared to measures they would create themselves. All of the vendors noted above provide GOMs, and some offer measures that can be used in an SSMM role. Nevertheless, there may still be situations in which it proves necessary to create probes, such as resource limitations that prevent access to a commercial publisher, or specific skills one wishes to monitor that are not reflected in existing tools. For these reasons, Appendix 4A retains a procedure for creating progress monitoring probes for situations in which published measures are unavailable.

In this chapter, a step-by-step process for monitoring progress in reading, spelling, mathematics, and written language is described. First, we discuss the selection of progress monitoring measures, indicating available and useful measures in each domain. Next, we discuss procedures and considerations for setting progress monitoring goals and determining measurement frequency. Then, we discuss how to interpret the data for making instructional decisions, and finally we discuss procedures and considerations for adjusting instruction.
STEP 1: SELECTING A PROGRESS MONITORING MEASURE

The first step in progress monitoring is to select a measure. The measure should be directly connected to the skills the student needs to improve and what is targeted by the intervention.
How Many Measures to Use?

One question we hear often is how many measures to use for progress monitoring. We (as well as others in the field) strongly recommend limiting progress monitoring to just
one measure that will capture the student's progress in the skill domain targeted by the intervention. Because progress monitoring involves administering measures frequently (i.e., one to two times per week), assessment must be efficient so that it does not take too much time away from instruction. With that said, however, there may be situations in which adding a measure provides valuable information about a student's responsiveness to the intervention. These situations might involve adding an SSMM alongside a GOM. However, we emphasize that progress monitoring must be kept efficient. Its purpose will be defeated if you spend unnecessary time assessing. Therefore, an additional progress monitoring measure should only be added if it is believed that it will add significant value for informing decision making. We also note that progress monitoring data are not the only source of information used in making instructional decisions; therefore, practitioners should not feel the need to add other progress monitoring measures just for the sake of getting more information. As we discuss in the decision-making section later in this chapter, progress monitoring data are important, but not the only source of information in determining when, if, and how to adjust instruction. In addition to progress monitoring data, information and data sources such as a student's recent work (i.e., permanent products like quizzes, worksheets, writing samples), any assessments that were recently collected, behavior data, and input from the teacher or interventionists can be used as part of the decision-making process.
Measure Selection: Early Reading

Several measures are available for monitoring progress in early reading skills. These types of measures would be appropriate for students in basic stages of reading acquisition who are receiving intervention to improve skills in basic word reading (i.e., decoding). This may include students in kindergarten or first grade, or possibly students with significant reading difficulties beyond first grade. In these cases, monitoring progress in passage reading fluency (CBM-R) is usually not informative given the student's inability to read text. Common progress monitoring measures for beginning reading skills and their availability across vendors are summarized in Table 7.1. Given our discussion of the skills foundational to reading development in Chapters 2 and 4, readers should see the relevance of most of the measures listed here for monitoring the progress of beginning reading skills. We also note that most of these measures are better viewed as SSMMs, but their importance for early reading development and status as targets of early reading curricula make them appropriate for consideration as GOMs in some situations. The table reveals an almost dizzying array of available measures, which may pose challenges from a measure selection standpoint. In the following sections, we note several of the more well-studied and widely implemented early reading progress monitoring measures that are available across multiple vendors, followed by a review of the evidence supporting their use. We then recommend two measures with the strongest theoretical and evidence base.
Phoneme Segmentation Fluency

Phonemic segmentation tasks measure a student's ability to orally segment words that are spoken by the examiner. They are typically scored in terms of the number of unique sound segments the student is able to orally produce within 1 minute. These sound segments can include syllables ("jump/ing" = 2 points), onset-rime ("b/ats" = 2 points), or phonemes ("b/a/t/s" = 4 points). Most measures are timed for 1 minute.
TABLE 7.1. Examples of Early Reading Progress Monitoring Measures

Vendor or Assessment Suite: Measures

Acadience: First Sound Fluency; Phoneme Segmentation Fluency; Nonsense Word Fluency (Correct Letter Sounds); Nonsense Word Fluency (Whole Words Read)

AIMSweb Plus: Print Concepts; Initial Sounds; Phoneme Segmentation (untimed); Letter-Naming Fluency; Letter–Word Sound Fluency; Word Reading Fluency; Auditory Vocabulary

DIBELS 8th Edition: Letter-Naming Fluency; Phoneme Segmentation Fluency; Nonsense Word Fluency (Correct Letter Sounds); Nonsense Word Fluency (Words Recoded Correctly); Word Reading Fluency

easyCBM: Letter Names; Letter Sounds; Phoneme Segmenting; Word Reading Fluency

FastBridge: Concepts of Print; Onset Sounds; Letter Names; Letter Sounds; Word Rhyming; Word Blending; Word Segmenting; Sight Word Reading; Decodable Word Reading; Nonsense Word Reading; Sentence Reading

Fuchs Research Group: Letter–Sound Fluency; Word Identification Fluency

iReady: Phoneme Segmentation Fluency; Letter–Sound Fluency; Word Recognition Fluency; Pseudoword Decoding Fluency
Note. This table provides a sample of the measures available for monitoring progress in early reading. It is not a complete list, and there are other measures that, due to space constraints, could not be included.
Students who are able to segment words at a phoneme level are more likely to earn higher scores, which usually reflects stronger phonemic awareness.
Alphabetic Knowledge

LETTER-NAMING FLUENCY
Alphabetic knowledge—the knowledge of letter names and sounds—is another critical foundational building block of learning to read. As discussed in Chapter 2, the ability to identify printed letters by name is associated with being able to learn the sounds those letters make. Learning letter names is also associated with increasing phonemic awareness and has positive influences on learning to read and spell words. Some vendors offer letter-naming fluency (LNF) measures, in which the student is shown a list of randomly ordered letters and asked to name as many letters as they can. LNF is scored in terms of the number of letters correctly named in 1 minute.

LETTER–SOUND FLUENCY
The ability to associate printed letters with sounds is even more important than letter names, and arguably the most important early literacy skill of all, because of how critical it is for learning to read words. Letter–sound fluency (LSF) measures use a list of randomly ordered letters (usually lowercase), and students are asked to identify the sound each letter makes. LSF is scored in terms of the number of correct letter sounds identified in 1 minute.
Word Reading and Pseudo-Word Reading

Across this text, we have emphasized the importance of accuracy and efficiency reading single words; it is one of the most essential, yet most challenging, hurdles for a beginning reader to overcome. Difficulties in learning to read words accurately and with efficiency (i.e., automatically) negatively impact reading development in profound ways. Word reading skills are a primary target of basic reading interventions, thus making measures that assess students' progress in acquiring word reading skills helpful for monitoring basic reading progress. Two types of measures are commonly available from several vendors.

WORD READING FLUENCY
Word reading fluency (WRF; also referred to as word identification fluency or sight words) measures involve real words. Measures typically include a list of randomly ordered high-frequency words, which would include words that are both phonetically regular and irregular. FastBridge includes a Decodable Words measure that consists of CVC words that are all phonetically regular (however, calling them "decodable" is a misnomer because all words can be decoded). On most measures, students read aloud from the list while the examiner scores the number of words read correctly in 1 minute.

NONSENSE WORD FLUENCY
Nonsense word fluency (NWF) measures typically include a list of phonetically regular words in a VC or CVC pattern (e.g., ip, mup), although NWF measures in the most recent
DIBELS Eighth Edition include more diverse spelling patterns and words with up to six letters. Scoring rules typically award credit for the number of correct letter sounds the student identifies within each word in 1 minute. Letter sounds are counted as correct if the student says the sound of each letter, or reads them as part of a whole word. For example, if the word is mup, student responses that include "mm . . . uuu . . . p" (i.e., sounding out), "mmm . . . up" (i.e., onset-rime segmentation), or "mup" (i.e., reading the whole word correctly) would all receive a correct letter sounds score of 3 points because the three sounds in mup were correctly identified in some manner. Some NWF measures also award credit for the number of words read correctly as whole units within 1 minute, which is tallied as a separate score from the correct letter sounds score.
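To make the two NWF scores concrete, the following sketch computes a correct letter sounds (CLS) score and a whole words read score from a simplified, hypothetical record of an examiner's item-by-item scoring. It is an illustration only and does not reproduce any vendor's scoring software.

# Simplified illustration of NWF scoring; the record format is hypothetical.
# For each nonsense word attempted in 1 minute, the examiner notes how many letter
# sounds were produced correctly and whether the word was read as a whole unit.

def score_nwf(record):
    """Return (correct letter sounds, whole words read) for one 1-minute NWF probe."""
    correct_letter_sounds = sum(sounds for _, sounds, _ in record)
    whole_words_read = sum(1 for _, _, whole_word in record if whole_word)
    return correct_letter_sounds, whole_words_read

record = [
    ("mup", 3, False),   # "mmm . . . uuu . . . p" -> 3 correct letter sounds, not a whole word
    ("ip", 2, True),     # "ip" read as a whole unit -> 2 correct letter sounds and 1 whole word
    ("dat", 2, False),   # "d . . . a . . ." with the final sound missed
]
cls, wwr = score_nwf(record)
print(f"Correct letter sounds: {cls}; whole words read: {wwr}")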
Progress Monitoring Measures for Basic Reading: Research Evidence Base and Recommended Options

Across the most commonly studied basic reading progress monitoring measures (PSF, LNF, LSF, NWF, WRF), studies have reported adequate reliability (Goffreda & DiPerna, 2010; L. S. Fuchs et al., 2004). In terms of their validity (i.e., the extent to which scores on a measure are correlated with scores on another test of the same or similar construct), studies have observed moderate to strong correlations with tests of reading skills, when measured at the same time or on a predictive basis (e.g., Clemens, Shapiro, et al., 2014; Clemens et al., 2018; Clemens, Hsiao, Simmons, et al., 2019; Clemens, Lee, et al., 2020; Clemens et al., 2022; L. S. Fuchs et al., 2004; Ritchey & Speece, 2006; Sáez et al., 2016; Speece et al., 2003; Stage et al., 2001). However, there are differences among measures that may make some better options for monitoring early reading progress. Based on research I (Clemens) as well as others have led, I recommend two types of measures for monitoring early reading progress, depending on the stage of the student's reading skills and the focus of the intervention. For students in the first half of kindergarten, or students in later kindergarten or early first grade who have significant difficulties and whose intervention is focused on foundational skills for decoding, I typically recommend LSF for monitoring progress. For later kindergarten, and for students in first grade (or later) with significant reading difficulties for whom decoding and word recognition are a focus of intervention, I recommend a word reading measure, such as WRF. There are several reasons for recommending LSF and WRF.

WHY LSF?
Let's start with why I recommend LSF for monitoring progress compared to other measures of basic literacy skills such as PSF and LNF. The first reason is based on their overall predictive validity. Although studies indicate that PSF, LNF, and LSF, sometimes referred to as measures of sublexical fluency (i.e., fluency with skills below the word level; Ritchey & Speece, 2006), are each predictive of future reading achievement when assessed at a single point in time (Clemens et al., 2012; Clemens, Soohoo, et al., 2018; Clemens, Lee, et al., 2020; Clemens, Lai, et al., 2017; Clemens, Simmons, et al., 2017; Burke et al., 2009; Catts et al., 2009; Elliott, Lee, et al., 2001; Goffreda & DiPerna, 2010; Johnson et al., 2009; Riedel, 2007; Kim, Petscher, Foorman, et al., 2010; Stage et al., 2001), LNF and LSF tend to be stronger predictors, and PSF tends to be less consistent. Clemens, Hsiao, et al. (2019) used dominance analysis, a form of multiple regression that indicates which variable is more dominant in its forecast of a subsequent outcome, to contrast the predictive validity of several early reading progress monitoring measures in kindergarten.
When compared to PSF and LNF (as well as a computer-adaptive test), LSF measured in December, January, February, and April was the most dominant predictor of reading skills at the end of kindergarten. LNF, in turn, was a more dominant predictor than PSF and the computer-adaptive test at those times. The relatively lower predictive validity of PSF is likely due to two factors. First, it is purely an auditory/verbal task—students respond to words spoken by the examiner and do not see any print—and evidence consistently indicates that print-based measures are stronger predictors of subsequent reading skills than oral phonemic tasks (Hammill, 2004; National Early Literacy Panel, 2008), especially after reading instruction has begun to target letter sounds and decoding (Schatschneider & Torgesen, 2004). Second, we noted in the description of PSF that points are typically awarded based on the number of unique sound segments produced (e.g., syllables, other word chunks, onsets, and phonemes). Therefore, it is possible for students with different levels of phonemic awareness to earn identical scores.

The second reason for recommending LSF for monitoring early reading progress is based on the extent to which a student's rate of growth observed on the measure (i.e., slope) is associated with reading outcomes. The meaningfulness of slope of improvement is an important factor in considering progress monitoring measures for instructional decision making (L. S. Fuchs, 2004; L. S. Fuchs & Fuchs, 1999). Of the studies that investigated the validity of slope of improvement of LSF, LNF, and PSF, evidence indicates that all are sensitive to growth, and growth on each is positively predictive of subsequent reading skills. Said differently, greater rate of growth in the progress monitoring measure is associated with stronger subsequent outcomes in word and text reading skills. However, with the same students, slope on LSF measures has been found to be more indicative of later reading outcomes than growth observed on LNF and PSF, particularly in kindergarten (Ritchey & Speece, 2006; Clemens, Soohoo, et al., 2018; Clemens, Lee, et al., 2020; Clemens et al., 2022; Sáez et al., 2016). With 426 kindergarten students at risk for reading difficulties, Clemens, Lee, et al. (2020) investigated the extent to which progress monitoring growth in PSF, LNF, and LSF, across the fall semester of kindergarten, was predictive of word reading fluency skills in January of kindergarten and growth in word reading across the second half of the school year (January to May). We used two different measures of word reading across the spring: FastBridge Decodable Words Fluency, which consists entirely of phonetically regular consonant–vowel–consonant words, and easyCBM Word Reading Fluency, which includes phonetically regular and irregular words that vary in length. Results indicated that although the initial status and rate of growth across the fall of kindergarten on LNF, PSF, and LSF were all positively predictive of subsequent word reading, LSF slope across the fall of kindergarten was the strongest overall predictor of students' word reading skills in January. Additionally, LSF fall slope was more predictive of word reading fluency growth across the spring on both word reading measures (regardless of whether words were phonetically regular or not).
Very low growth in LSF across the fall was associated with very low progress in word reading fluency across the spring, whereas students who experienced very strong growth in LSF improved in word reading much faster. The effects of LSF were predictive over and above the effects of LNF and PSF, meaning that LSF measured a skill not shared by LNF or PSF that was important for subsequent word reading. Put differently, how quickly students grew in LSF across the first half of kindergarten predicted how quickly they grew in word reading fluency across the second half of kindergarten, and LSF was a stronger predictor regardless of whether the words were phonetically
decodable or not (consistent with connectionist perspectives on word reading acquisition; see Chapter 2).

The third reason for recommending LSF as an early reading progress monitoring measure is based on theory. Consider our discussion of reading development in Chapter 2 and how critically important knowing letter sounds is for learning to read words. Thus, monitoring growth in letter sounds reflects the development of a key foundational skill. An additional factor, however, is what goes into learning letter–sound correspondence. Basic phonemic awareness provides insight into the sound structure of language (and words more specifically), allowing students to isolate individual phonemes in words. Letters of the alphabet correspond to phonemes; thus, the ability to isolate phonemes facilitates the connection of sounds to printed letters. Put differently, letter–sound knowledge involves phonemic awareness. Additionally, we provided evidence that knowledge of letter names helps facilitate learning the sounds that letters make, both by providing clues to the sounds they make and by providing an additional referent on which to attach new information. Therefore, LSF can be viewed as a skill existing somewhat "downstream" from, and "higher-order" than, PSF and LNF because its development is made possible by phonemic awareness and letter–name knowledge, which are acquired earlier. Higher-order skills are usually more predictive of outcomes than more basic skills. Thus, consistent with the recommendations of L. S. Fuchs and D. Fuchs (2004), LSF is recommended as a progress monitoring measure for students in basic stages of reading instruction in which they are learning the alphabetic principle and how to use it to read words. Readers are reminded that LSF is appropriate for monitoring progress if students are not yet reading words. If they are, a measure of word reading fluency would be a better choice (see below). We do not imply that PSF and LNF are useless as progress monitoring tools. There may be situations in which, depending on the needs of the student and goals of the intervention, PSF or LNF will provide useful information as a progress monitoring tool. However, what we have tried to communicate here is that evidence and theory point to LSF as a better index of students' basic reading acquisition, and most predictive of where they are headed in terms of word reading acquisition.

WHY WRF?
The second type of measure I recommend for monitoring the progress of students in early reading instruction or intervention is WRF. WRF measures are appropriate when a goal of intervention is improving the student's ability to read words, even if early lessons focus on foundational skills such as phonemic awareness and letter sounds. There are several research- and theory-based reasons why WRF measures are good options for monitoring early reading progress compared to other early literacy measures and, in some select situations, even better options than oral reading. First, when compared to other early reading measures in kindergarten and first grade, such as LSF, LNF, PSF, NWF, or a computer-adaptive test, WRF measures have demonstrated stronger correlations with standardized and broader measures of reading skills, on both a concurrent and predictive basis (Clemens, Shapiro, et al., 2011, 2014; Clemens, Hagan-Burke, et al., 2015, 2019; Clemens et al., 2022; L. S. Fuchs et al., 2004; Smith et al., 2014). For example, in the dominance analysis study described above, measures of word reading fluency (either WRF or DWF) when assessed in January, February,
and April of kindergarten were the most dominant predictors of overall year-end reading skills over NWF, LSF, LNF, PSF, and a computer-adaptive test. It should not come as a big surprise that measures of word reading fluency are stronger predictors of subsequent reading skills than other measures of early reading; so much about reading is driven by accuracy and efficiency in reading individual words. Previous NWF measures, when scored for the number of correct letter sounds, are not as predictive as WRF measures. Only when NWF measures were scored in terms of the number of whole words read correctly do they approach the predictive validity of real word reading, but they are still not as strong (Clemens, Soohoo, et al., 2018; Clemens et al., 2022). This may change with subsequent studies, as the NWF in the more recent DIBELS Eighth Edition includes a broader corpus of nonsense words. As noted earlier, concurrent and predictive validity is one piece of evidence arguing for the value of a measure for monitoring early reading progress. Another aspect to consider is the extent to which slope of improvement is predictive of growth or later achievement in reading. Clemens, Lai, et al. (2017), studying struggling readers in kindergarten, found that slope of improvement on WRF was more strongly associated with growth in reading skills occurring across the same period of time than LNF, LSF, PSF, and NWF. Furthermore, WRF slope was more predictive of year-end kindergarten reading skills. In a similar but much larger study, again with kindergarten students at risk for reading disabilities, Clemens et al. (2022) determined that slope of improvement on WRF measures (which included easyCBM WRF and FastBridge DWF) across the second half of kindergarten was more strongly predictive of subsequent reading skills at the end of kindergarten, first, and second grades than slope on PSF, LNF, LSF, NWF, and a computer-adaptive test. Interestingly, slope on WRF was a slightly stronger predictor than slope on DWF, most likely because WRF uses a broader range of word types than DWF, which only includes phonetically regular words in a CVC pattern. Similar results have also been observed in first grade: WRF slope was a stronger predictor of year-end reading skills than NWF (Clemens, Shapiro, et al., 2014; L. S. Fuchs et al., 2004). The third reason I recommend WRF measures is that, of all the basic reading progress monitoring measures, they are the closest to being a GOM. Although WRF measures can be viewed as measuring a specific skill (word reading), it must be remembered that word reading is the product of multiple skills, namely the interaction of phonemic awareness and alphabetic knowledge that together help facilitate decoding. It reflects students' developing orthographic knowledge, which allows them to read words with various spelling patterns quickly, and the effectiveness of reading instruction and practice opportunities that students have been provided. WRF measures that include a range of word types (not just phonetically regular words or one type of spelling pattern) are even closer to being GOM-like. They reflect the diversity found in connected text, unlike measures that only assess phonetically regular words. It is also important to note that WRF measures can fill a role in monitoring the progress of students with significant word reading difficulties or disability in the early elementary grades.
When a student’s reading skills may not be strong enough yet for passage reading fluency, WRF measures provide a potentially more sensitive option for monitoring progress. Furthermore, the equivalence of WRF probes can also be more readily controlled. WRF measures are sampled from a set of words, often the 100 or 200 most common words in printed English. It is far easier to construct a set of probes of equivalent difficulty when drawing individual words and arranging them in lists, compared to trying to write a set of meaningful passages that are similar in their level of difficulty.
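To illustrate how equivalent WRF probes can be assembled from a common word pool, the sketch below samples parallel word lists from a small placeholder set of high-frequency words. The word list, probe length, and sampling scheme are illustrative only; published WRF measures use their own corpora and development procedures.

import random

# Placeholder pool; published WRF probes typically draw from larger corpora,
# such as the 100-200 most common words in printed English.
WORD_POOL = [
    "the", "of", "and", "said", "was", "they", "have", "from", "what", "were",
    "when", "your", "there", "would", "could", "people", "about", "because",
    "little", "water", "where", "through", "before", "after", "again", "around",
]

def make_wrf_probe(form_number, words_per_probe=20):
    """Sample one word-list probe; each form draws from the same pool of words."""
    rng = random.Random(form_number)    # reproducible parallel forms
    return rng.sample(WORD_POOL, k=words_per_probe)

forms = [make_wrf_probe(n) for n in range(20)]   # e.g., 20 parallel forms for weekly monitoring
print(forms[0])

Because every form is drawn from the same pool, the resulting lists are far more similar in difficulty than a set of hand-written passages would be, which is the point made above.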
The recommendation for WRF in monitoring basic reading progress does not imply that NWF should always be avoided. NWF may fill a role in monitoring progress for a student when basic decoding has been targeted. The expansions to the NWF measure in DIBELS Eighth Edition are intriguing and may improve its ability to reflect growth in the acquisition of word reading skills. However, some considerations are warranted. Similar to PSF, NWF–CLS scoring rules make it possible for students to earn similar scores despite different patterns of responding. For example, two students may earn identical scores, yet one student may respond on a sound-by-sound basis while the other student reads fewer words but reads them as whole units. Identical scores between students despite very different patterns of responding obscure achievement differences. Therefore, using the correct letter sounds score for NWF may require additional attention to students' patterns of responding for interpretation. In contrast to NWF–CLS scores, NWF Whole Words Read scores may offer better indices of students' response to early reading intervention, especially when a goal of the intervention is decoding. Still, measures of reading real words, like WRF measures, offer stronger evidence as progress monitoring indices in kindergarten and first grade, without the need for additional interpretation of response patterns.
Measure Selection: Reading

Several types of measures are available for monitoring students' reading progress; some examples are listed in Table 7.2.
TABLE 7.2. Examples of Reading Progress Monitoring Measures

Vendor or Assessment Suite: Measure(s) (Grades)

Acadience: Oral Reading Fluency (1–6); Maze (3–6)

AIMSweb Plus: Oral Reading Fluency (1–8); Silent Reading Fluency (4–8)

DIBELS 8th Edition: Oral Reading Fluency (1–8); Maze (2–8)

easyCBM: Passage Reading Fluency (1–8); Multiple-Choice Reading Comprehension (2–8); Common Core State Standards Reading (multiple-choice comprehension) (3–8)

FastBridge: CBMreading (oral reading) (1–12); CBMcomp (1–8); COMPefficiency (2–8)

Fuchs Research Group: Passage Reading Fluency (1–7)

iReady: Passage Reading Fluency (1–6)
Note. This table provides a sample of the measures available for monitoring progress in reading. It is not a complete list, and there are other measures that, due to space constraints, could not be included.
Oral Reading

Reading efficiency (i.e., reading fluency) refers to reading connected text efficiently, accurately, and with little conscious effort. It is an outcome of proficiency with basic early literacy and word reading skills, and the efficiency and automaticity with which words are read and connected to meaning free cognitive resources that can be devoted to understanding what is read. Thus, the rate at which a student reads connected text serves as an important indicator of overall reading skills as revealed in numerous studies (see Reschly et al., 2009). For an in-depth discussion of reading efficiency and how it serves as an index of reading proficiency, see Chapters 2 and 4.

Oral reading is one of the most common forms of CBM. Depending on the vendor, oral reading progress monitoring measures may be referred to as oral reading fluency (ORF), passage reading fluency, or CBM-oral reading (CBM-R), but all versions are basically the same: Students read a passage of text aloud for 1 minute, while the examiner marks errors and omitted words. The final score is the number of words the student reads correctly in 1 minute.

EVIDENCE BASE: ORAL READING
There is no other progress monitoring measure with an evidence base as extensive as there is for CBM oral reading. Numerous studies have established its technical adequacy and validity as a robust indicator of overall reading proficiency across grade levels, on a concurrent and longitudinal basis (L. S. Fuchs, D. Fuchs, Hosp, et al., 2001; Hintze et al., 2000; Reschly et al., 2009; Shinn et al., 1992; Wayman et al., 2007; Yeo, 2010). In addition to its utility for progress monitoring, it has also been extensively used as a universal screening measure (Kilgus et al., 2014). Its validity is a reflection of the importance of text reading efficiency as a pillar (and product) of reading proficiency.

Although there is extensive evidence for the validity of oral reading measures in a progress monitoring role, one consideration with their use is that a student's scores can vary across passages within a grade-level set (Ardoin & Christ, 2009; Betts et al., 2009; Cummings et al., 2013; Francis et al., 2008). The reason for this is that it is quite challenging to develop meaningful passages that are equivalent in difficulty. To do so, one must be mindful of word frequency, word length, sentence length, sentence complexity, and other factors that interact with students' knowledge and skills to make text more or less complex (and thereby easier or more difficult to read). This is one reason why it is advisable to use contemporary, published passage sets, like those listed in Table 7.2. Although variability in scores will occur, these passage sets have undergone considerable development and evaluation work to make them as equivalent as possible. Another implication of this variability, which will be discussed later in terms of decision making, is that instructional decisions should be based on a sufficient number of data points to be confident the observed trend is reflective of the student's actual growth.

An additional consideration is how scores are interpreted. As we have argued in various places in this text, CBM oral reading scores should be interpreted less in terms of reading speed, and more in terms of how efficiently the student reads. Rate of words per minute is indeed what is measured, but what is assessed is the ease with which those words were read. This subtle shift in understanding has important implications for what intervention looks like and how data are interpreted. Despite these considerations, CBM oral reading measures should be considered a primary option for monitoring progress in reading interventions from first grade and beyond.
Maze

Maze is a measure of silent reading fluency and can serve as an alternative index of overall reading achievement. Students complete the measure independently, which makes it attractive from a feasibility standpoint because it can be administered on a group basis, unlike oral reading. A maze task typically consists of a passage of text in which words have been removed at a fixed interval (e.g., every seventh word) and replaced by a set of answer choices. Students read the passage silently and, upon arriving at each blank space, circle the answer choice that best completes the sentence.
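The sketch below illustrates the maze format just described by deleting every seventh word of a short passage and presenting it with the correct word and two distractors. It is only an illustration; published maze measures follow their own rules for selecting deletion points and distractors, and the passage here is invented.

import random

def make_maze(passage, nth=7, n_choices=3, seed=0):
    """Replace every nth word with a set of answer choices (the correct word plus distractors)."""
    rng = random.Random(seed)
    words = passage.split()
    unique_words = sorted({w.lower() for w in words})
    maze_words = []
    answer_key = []                      # (item position, correct word)
    for i, word in enumerate(words, start=1):
        if i % nth == 0:
            distractors = rng.sample([w for w in unique_words if w != word.lower()], n_choices - 1)
            choices = [word] + distractors
            rng.shuffle(choices)
            maze_words.append("(" + " / ".join(choices) + ")")
            answer_key.append((i, word))
        else:
            maze_words.append(word)
    return " ".join(maze_words), answer_key

passage = ("The class walked to the park after lunch because the sun was out "
           "and the teacher wanted the students to read outside for a while")
maze_text, answer_key = make_maze(passage)
print(maze_text)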
EVIDENCE BASE: MAZE

Maze tasks were developed because educators found it difficult to accept oral reading as an indicator of overall reading proficiency (L. S. Fuchs et al., 1988). Although they may appear to measure reading comprehension more than oral reading measures, studies have indicated that maze measures tend to demonstrate correlations with reading comprehension measures that are equivalent to (or slightly weaker than) those of CBM oral reading (e.g., Ardoin et al., 2004; L. S. Fuchs et al., 1988; Graney et al., 2010; Marcotte & Hintze, 2009; Muijselaar et al., 2017). Nevertheless, research supports the use of maze as a progress monitoring tool in elementary, middle, and secondary grades (Chung et al., 2018; Shin et al., 2000; Tichá et al., 2009). These studies have typically revealed acceptable alternate-form reliability, sensitivity to growth, and relations between growth and subsequent reading achievement outcomes. Additionally, there is evidence that although the correlation between oral reading and reading comprehension declines somewhat after elementary grades, correlations between maze and reading comprehension remain stable across the grades, suggesting that maze may be more suitable for upper elementary–aged students (Jenkins & Jewell, 1993). Overall, research supports the use of maze as an option for progress monitoring, and its efficiency in administration makes it attractive. Issues with passage variability affect maze as well, which should be considered in interpreting progress monitoring data. Additionally, maze scores reflect the number of correct answer choices within the time limit, and this metric must be interpreted accurately in relation to the target of the intervention. Ultimately, the decision to use maze should be driven primarily by whether the data will inform intervention decisions for the student.
"Reading Comprehension" Measures

More recently, several types of measures have been developed for progress monitoring that aim to measure reading comprehension specifically. The motivation to develop such measures is likely still driven by the tendency for educators to doubt the validity of oral reading as an index of reading comprehension. However, there may be situations in which practitioners have a need to monitor progress in answering reading comprehension questions, such as for students with specific reading comprehension difficulties. These measures may also have more utility for students in the middle and secondary grades. As indicated in Table 7.2, vendors offer multiple-choice measures in which students read passages silently and answer questions about what they have read. The questions may assess literal and inferential understanding and are typically scored in terms of the number of questions answered correctly within a time limit. Other newer measures designed to formatively evaluate reading comprehension include Silent Reading Efficiency
(AIMSweb Plus), and COMPefficiency and CBMcomprehension from FastBridge. On Silent Reading Efficiency, students read passages separated into brief sections and answer multiple-choice questions after each section. COMPefficiency is computer-based; students read a passage, which is intermittently interrupted by true/false inferential questions that test their understanding of the passage as they read. After reading the passage, students answer a set of multiple-choice questions based on the passage. Scores include their accuracy in responding to the questions and the time needed to read the passage. CBMcomprehension supplements CBMreading in FastBridge. First, after reading a CBMreading probe, students are asked to recall the most important parts of the passage (which are scored in terms of the number of items students recall). Second, a set of open-ended questions is asked by the teacher, who scores each student's responses.

EVIDENCE BASE: "READING COMPREHENSION" MEASURES
Progress monitoring measures aimed specifically at reading comprehension are the newest addition to the field of reading measures, and the research base for their validity and technical adequacy is in its infancy. Initial evidence for the easyCBM Multiple-Choice Reading Comprehension (MCRC) measures indicates adequate reliability and moderate criterion-related validity in grades 2–8 (Anderson et al., 2014; Alonzo & Anderson, 2018). However, it is interesting to note that correlations between the MCRC measures and standardized tests of reading comprehension were not appreciably stronger than those for CBM oral reading. Baker et al. (2015) found that easyCBM Passage Reading and MCRC were comparably predictive of seventh and eighth graders' reading outcomes on a state accountability assessment (and more powerfully predictive when considered together). Research with the FastBridge CBMcomp and COMPefficiency measures is also just beginning. In grades 2–5, Diggs and Christ (2019) found that CBM oral reading scores were more strongly correlated with students' performance on a broad measure of reading than the scores provided by CBMcomp. However, the addition of CBMcomp scores to oral reading accuracy and rate explained a small but unique and statistically significant amount of variance in overall reading scores compared to oral reading alone.

Overall, use of comprehension-focused measures for progress monitoring should be guided by the type of data that will be most valuable for informing intervention for a given student. The nascent state of the evidence base for their technical adequacy indicates the need for caution. Comprehension-focused measures have not been shown to be more valid indices of reading comprehension than oral reading measures. It is also unclear how quickly students improve on MCRC measures, or the extent to which growth on these types of measures is predictive of subsequent outcomes. Additionally, the challenges related to form effects and passage equivalence that are common to measures of oral reading are also likely to affect MCRC measures. Nevertheless, there may be situations in which comprehension-specific reading measures are desired, such as for students who have adequate word reading skills but struggle to understand and learn from text. This might include older students with reading difficulties, or students who are English learners.
Summary: Selecting Progress Monitoring Measures in Reading

Although there are a number of different measures available for monitoring progress in reading, research has consistently revealed that oral reading measures provide the strongest and most straightforward option in most situations. This is especially true across elementary school. The situations in which it may be beneficial to consider an alternative
measure, or perhaps a supplemental measure in addition to oral reading, may be those in which the student has difficulties learning from print but demonstrates adequate text reading accuracy and fluency, and the intervention targets reading comprehension improvement. In these situations, continued monitoring of oral reading progress will likely reveal little gain, and any gains that occur are unlikely to result in gains in comprehension. A measure specific to reading comprehension may be more revealing as an index of responsiveness to intervention in these cases. We stress that this determination should not be made based on what “seems” like adequate fluency: be sure that the student’s oral reading falls within or above the average range compared to grade norms, and that their reading accuracy on grade-level oral reading passages is not lower than 90%. If either of these conditions is not satisfied, it may not be appropriate to conclude that the student’s reading fluency is adequate. In short, oral reading measures will be the best option for monitoring reading progress in most situations, and maze or comprehension-specific measures can be considered on an individual basis.
Progress Monitoring in Reading: Selecting the Grade Level of the Probes

Reading progress monitoring measures such as oral reading, maze, and reading comprehension are often grouped in sets that correspond to grade levels, and the grade level of the probes used to monitor students’ progress in reading should be carefully considered. Note that most early literacy and early reading progress monitoring measures are usually not controlled for grade-level difficulty. Progress monitoring is designed for students for whom reading is difficult. It is the reason they were referred for assessment and intervention in the first place. Therefore, they are likely to demonstrate difficulties in reading text at their grade level and, in some cases, significant difficulties, characterized by very high error rates and slow, arduous reading. When the passages are too difficult, any growth or progress will be difficult to detect. Such probes are not sensitive to the instruction the student is receiving. Similarly, if the student’s progress is monitored with passages from a much lower grade level, it is possible that the text will be too easy, and therefore the passages will be similarly poor detectors of growth because the student is essentially at a ceiling level with the text (i.e., making very few or no errors) and has little room to grow. This decision process can be simplified. In most situations, progress monitoring in reading is most straightforward when using goal-level reading material—the level and type of text in which the goals for the student are situated. In many situations, passages at the student’s grade level are goal-level material, because intervention is ultimately aimed at improving the student’s reading skills in content used in core reading instruction. Also, monitoring with grade-level passages sets an ambitious bar for what intervention should be about: fostering considerable improvement and making every effort to narrow achievement gaps between the student and grade-level expectations. When monitoring below a student’s grade level, additional considerations are necessary. The student may show considerable progress in reading passages below their grade level, but it is not clear whether improvements extend to text that is consistent with grade-level expectations. It is important to point out that this discussion regarding selecting the grade level of the probes is not the same as identifying the student’s instructional level, as discussed in Chapter 4. Progress monitoring is not an intervention. Materials used for reading instruction and practice should be situated within the student’s instructional level—not
too difficult and not too easy—so that progress is maximized. Progress monitoring, in contrast, is a test. It is there to inform whether the intervention results in meaningful improvements toward ambitious and important general outcomes. The most straightforward way to determine whether the student is making progress toward those ambitious general outcomes is to monitor progress with grade-level reading content. To determine if grade-level passages are appropriate, we recommend thinking more liberally compared to determining instructional level, and leaning toward the use of grade-level text as much as possible. Here, we believe considering reading accuracy is important because the goal of intervention for most struggling learners is to help them become better readers by reducing reading errors and improving efficiency. Thus, for progress monitoring, a recommendation is to select the highest grade-level passages in which the student’s reading accuracy is at or above 75% (i.e., one error every four words; 75 words correct for every 100 words). For example, if a third-grade student’s reading accuracy in third-grade passages was 79%, third-grade passages are likely appropriate for monitoring progress. Accuracy in this range ensures that there is considerable room to grow, but the text is readable enough to detect growth. But this is not a hard-and-fast rule. Adjustments should be made for individual students, as needed. There are certainly situations in which monitoring progress in grade-level passages is not relevant, desired, or appropriate. Students who are reading far below their grade of record may include students with severe reading disability, students who experienced significantly disrupted or ineffective instruction (e.g., students who have experienced extended absences from school, repeated school changes, or institutional placements), or students with intellectual disability. In these situations, highly individualized decisions should be made; these should be based on the initial assessment (including the survey-level assessment), data from any extant interventions, and the skills being targeted in the intervention. Considering this information may indicate that progress monitoring will be more informative with passages below the student’s grade level, perhaps consistent with their instructional level. It may also be the case that for students with severe reading difficulties and very low accuracy reading connected text, measures such as word reading fluency may be more appropriate until enough progress is made that connected text measures can be used.
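For practitioners who track survey-level assessment results electronically, the brief Python sketch below illustrates how the 75% accuracy guideline can be applied. It is a hypothetical helper, not part of any published CBM system; the function names and the passage counts in the example are invented for illustration.

```python
def reading_accuracy(words_correct, errors):
    """Percentage of words read correctly in an oral reading administration."""
    total = words_correct + errors
    return 100.0 * words_correct / total if total else 0.0


def highest_grade_at_threshold(survey_results, threshold=75.0):
    """Return the highest grade level whose passages were read with accuracy
    at or above the threshold (default 75%), or None if none qualify.
    survey_results maps grade level -> (words correct, errors)."""
    qualifying = [grade for grade, (correct, errors) in survey_results.items()
                  if reading_accuracy(correct, errors) >= threshold]
    return max(qualifying) if qualifying else None


# Hypothetical survey-level results for a third grader:
# 63 correct / 17 errors in grade 3 passages (about 79% accuracy),
# 71 correct / 9 errors in grade 2 passages (about 89% accuracy).
survey = {3: (63, 17), 2: (71, 9)}
print(highest_grade_at_threshold(survey))  # -> 3: monitor with grade 3 passages
```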
Developing Reading Progress Monitoring Measures

When commercial progress monitoring measures are not available, or if progress monitoring must use content derived from the student’s curriculum of instruction, users can develop their own set of passages. Once the material for assessment is selected, a set of passages is developed from that material in the same way as described in Appendix 4A. Each passage is randomly selected and retyped onto a separate page. Given that the readability of published texts can vary widely, it is recommended that the readability level of each passage be checked so that it remains within plus or minus one grade level of the material from which it was selected. A sufficient number of probes should be developed to allow for administration one to two times per week across 12 weeks. Repeat administration of a passage can occur as long as 12 weeks (i.e., 3 months) have passed since the same passage was last administered. Scoring and administration take place just like standard administration of CBM oral reading. Students are asked to read each passage aloud for 1 minute. If students struggle with a word, the examiner allows 3 seconds for a response, and then provides the word and scores it as an error. Errors are scored only for omissions and mispronunciations,
with self-corrections being scored as correct. The total number of words read correctly and incorrectly per minute is calculated. Again, we stress that creating oral reading probes is labor-intensive and prone to problems, most notably the likelihood that probes in the same passage set will vary considerably in their level of difficulty. This high degree of variability in the passages will result in progress monitoring data that are also highly variable. Consequently, determining responsiveness to intervention is more difficult. For these reasons, it is best to use a probe set from one of the options listed in Table 7.2 and consider creating your own probes only when no other options exist.
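When developing a passage set by hand, the readability screen described above can be scripted. The sketch below is a minimal illustration, assuming readability grade estimates have already been obtained from an external tool (e.g., the readability statistics built into a word processor); the passage names and values are hypothetical.

```python
def within_readability_window(passages, target_grade, window=1.0):
    """Keep only passages whose readability estimate falls within plus or minus
    `window` grade levels of the source material's grade level. Readability
    estimates are assumed to come from an external tool."""
    return {name: grade for name, grade in passages.items()
            if abs(grade - target_grade) <= window}


# Hypothetical passages retyped from a fourth-grade reader, with readability
# estimates computed elsewhere
candidates = {"passage_01": 4.2, "passage_02": 5.6, "passage_03": 3.4}
print(within_readability_window(candidates, target_grade=4.0))
# passage_02 (5.6) falls outside the one-grade-level window and is dropped
```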
Progress Monitoring Measure Selection: Vocabulary

Vocabulary knowledge is an essential aspect of understanding and using spoken language, written language, and mathematics. For young children, improving vocabulary knowledge bolsters oral language comprehension and use, reading acquisition, mathematics, and writing skills. As students get older, vocabulary instruction continues to be important for their academic achievement in all areas. Vocabulary knowledge is particularly important for considering the reading and overall academic achievement of emergent bilingual students. Vocabulary is an underappreciated area of progress monitoring compared to other academic skills, although frameworks for monitoring progress in vocabulary exist.
Progress Monitoring of Vocabulary Knowledge with Younger Students

The Dynamic Indicators of Vocabulary Skills (DIVS; Parker, 2000) are an effort to extend progress monitoring in vocabulary downward to PreK and kindergarten. The DIVS include two individually administered subtests. In the Picture Naming Fluency subtest, students are shown a page with picture tiles arranged in rows and columns, each picture representing a common noun found in early childhood literature. Students move from left to right and name as many pictures as they can while being timed, and scores represent the number of pictures students can identify in 1 minute. On the Reverse Definition Fluency subtest, students are read a series of definitions and asked to name a word described by the definition. Scores include the number of correct responses in 1 minute. Vocabulary content for the DIVS was developed by identifying nouns commonly found in early childhood literature (Parker, 2000). Both subtests are individually administered and do not require the student to read. Limited research has investigated the technical properties of the DIVS measures, but what exists shows some promise. The DIVS demonstrated good reliability, criterion-related validity, and screening accuracy in two studies with PreK students (Marcotte et al., 2014, 2016), which included a high proportion of English learners. However, studies are needed on the DIVS’s sensitivity to growth when administered on a progress monitoring basis, as well as studies on its validity when used in kindergarten or first grade. It is also unclear if the adequate reliability and validity properties observed in the published studies of preschool students will extend to low-achieving students in elementary school.
Progress Monitoring of Vocabulary Knowledge with Older Students

Students’ ability to read affords additional avenues for monitoring vocabulary knowledge. The most commonly studied framework for monitoring progress in vocabulary
is CBM vocabulary matching (Espin et al., 2001). Vocabulary-matching tasks are user- created and structured such that a set of words are listed on the left side of the page, and a set of definitions (including some distractor definitions) are listed on the right side (e.g., Espin et al., 2013). Students complete the task independently by matching the words on the left with the appropriate definition on the right, although the task can be structured such that the terms and definitions are read aloud to students. Vocabulary-matching measures are typically timed for 5–6 minutes and scored by tallying the number of correctly identified definitions. Reading the measure aloud should be considered for students whose reading difficulties will confound the assessment of their vocabulary knowledge. Studies have evaluated vocabulary-matching measures when educators read the items and answer choices aloud as the students completed them and observed similar technical adequacy to when the students took the measures independently (Borsuk, 2010; Espin et al., 2001). One challenge to monitoring progress in overall vocabulary knowledge is determining the breadth of vocabulary terms that should be included in the measure. English vocabulary represents over 171,000 words and roughly 10,000 in common use in oral language and text. Thus, it is extremely difficult to create vocabulary measures that adequately reflect generalized, overall vocabulary knowledge. For this reason, vocabulary progress monitoring measures have been used with a more specific set of vocabulary terms that are important for understanding a topic or content area. For example, vocabulary-matching measures have been created for monitoring progress in science and social studies (Conoyer et al., 2022; Espin, Shin, & Busch, 2005; Espin et al., 2013; Lembke et al., 2017), subjects in which vocabulary knowledge is critical for learning. Here, resources such as the glossary from the student’s textbook, teacher notes, class quizzes or tests, and other instructional materials can be used for identifying a set of vocabulary terms that will be important for understanding the content in a given unit or set of units (Borsuk, 2010; Espin et al., 2013). Academic Tier 2 vocabulary terms (see Chapter 6) may also be useful for inclusion. Once the corpus of vocabulary terms is identified, probes can be created by randomly selecting subsets of words from this list and arranging them and their definitions in a series of probes. From a technical standpoint, vocabulary- matching measures have demonstrated good reliability and validity for use on a progress monitoring basis (Beyers et al., 2013; Conoyer et al., 2019; Espin et al., 2001; Espin et al., 2013; Espin, Shin, & Busch, 2005; Lembke et al., 2017; Mooney et al., 2013), including moderate to strong convergent and predictive validity with standardized measures of content knowledge in social studies (Espin et al. 2001; Lembke et al., 2017) and science (Conoyer et al., 2019; Espin et al., 2013). Vocabulary-matching tasks in these content areas have also shown sensitivity to growth on a progress monitoring basis, and rates of improvement on these measures are predictive of students’ grades in their social studies or science courses, as well as performance on tests of relevant content knowledge (Borsuk, 2010; Espin, Shin, & Busch, 2005; Espin et al., 2013). 
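Because vocabulary-matching probes are user-created, the assembly step (sampling terms, adding distractor definitions, and shuffling) lends itself to a simple script. The sketch below is one hypothetical way to do this; the function name and the example glossary are illustrative and are not drawn from any published measure.

```python
import random


def make_vocab_matching_probe(glossary, n_terms=10, n_distractors=3, seed=None):
    """Build one vocabulary-matching probe from a term -> definition glossary.
    Terms are listed for matching; the definition list is shuffled and includes
    a few distractor definitions whose terms do not appear on the probe."""
    rng = random.Random(seed)
    sampled = rng.sample(sorted(glossary), n_terms + n_distractors)
    probe_terms = sampled[:n_terms]                  # terms printed on the left
    definitions = [glossary[t] for t in sampled]     # includes distractor definitions
    rng.shuffle(definitions)
    answer_key = {t: definitions.index(glossary[t]) + 1 for t in probe_terms}
    return probe_terms, definitions, answer_key


# Hypothetical glossary drawn from a science unit
glossary = {
    "photosynthesis": "process plants use to make food from sunlight",
    "habitat": "the place where an organism naturally lives",
    "predator": "an animal that hunts other animals for food",
    "nocturnal": "active mainly at night",
    "migration": "seasonal movement of animals from one region to another",
}
terms, defs, key = make_vocab_matching_probe(glossary, n_terms=3, n_distractors=2, seed=1)
```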
Most work with vocabulary-matching measures has been conducted with students in middle school, and although the measures can easily be used in elementary or secondary grades, work is needed to evaluate their technical properties with other age groups. Multiple-choice vocabulary measures (Anderson et al., 2014) are general outcome measures of vocabulary knowledge that are not specific to a content area, topic, or curriculum. easyCBM provides such measures for grades 2–8. Students complete the measure during a timed administration, and on a series of items, students select a word or phrase that means the same as a target word from a set of answer choices.
The observed reliability of multiple-choice vocabulary measures in grades 2–8 is adequate for monitoring progress (Wray et al., 2014), and performance is moderately correlated with scores on standardized assessments of vocabulary and reading comprehension in grades 2–7 (Anderson et al., 2014). However, studies are needed on their sensitivity to growth, and the extent to which student growth is associated with change in overall language or reading comprehension proficiency. Work is also needed on the extent to which results on multiple-choice vocabulary measures provide insight for adjusting instruction. Because the measures were developed using a general set of vocabulary terms, students will only show progress if they had been exposed to terms included in the measure. Students do not generalize knowledge to unfamiliar vocabulary in the ways that they can read unfamiliar words using alphabetic and phonological knowledge. Thus, a lack of growth on a general vocabulary measure does not necessarily mean that their vocabulary skills did not improve. In summary, although additional work is needed on the role of vocabulary progress monitoring for instructional decision making, the availability of these options can fill an important gap in assessment and intervention that has not been adequately appreciated. Vocabulary is an essential aspect of oral and written language comprehension, writing skills, mathematics, and is closely tied to overall background knowledge (i.e., having background knowledge in a topic often depends on understanding the terms and their use in that area). Monitoring progress in vocabulary may be useful when intervention includes vocabulary instruction or is focused on content-area achievement, and may be particularly relevant for emergent bilingual students. Increasing awareness of the importance of vocabulary in mathematics also makes that an additional possible application.
Selecting Progress Monitoring Measures: Spelling

Because spelling is an integral part of reading and writing, progress monitoring in spelling might be considered when students have specific goals for improving their spelling as part of reading or writing interventions. Monitoring spelling progress over time can provide information on how students are applying their alphabetic knowledge, orthographic knowledge of common spelling patterns, and morphological awareness to spell words (Ouellette & Sénéchal, 2008). The CBM process and procedures for spelling were part of the original CBM framework (Deno, 1985). Resources for CBM spelling are available in The ABCs of CBM by Hosp et al. (2014), as well as from Jim Wright (who maintains Intervention Central): www.jimwrightonline.com/pdfdocs/cbmresources/cbmdirections/cbmspell.pdf. Monitoring progress in spelling starts by gathering a set of relevant words, which could be taken from the student’s curriculum of instruction or drawn from a set of intervention units the student will receive in the upcoming weeks. Sampling from this set of words, a set of probes is developed using 12 words per list for grades 1 and 2, and 17 words per list for grades 3 and beyond. To administer a spelling probe, the teacher orally dictates each word at a constant pace of one word every 10 seconds for first- and second-grade students, and one word every 7 seconds for grades 3 and up. Dictating words at a constant rate helps to standardize the measure and allow scores to be compared over time. Students write each word on a numbered page as it is dictated. If they are not finished spelling a word when the next word is dictated, they are instructed to leave the word partially spelled and write the next word. Several methods of scoring spelling CBM have been described, and multiple scores can be derived from a single spelling probe. First, words spelled correctly (WSC) simply
involves a tally of the number of words the student spelled correctly. Although very straightforward, WSC may obscure important information about the student’s spelling skills. In Chapter 4, we illustrated how different students might all spell a word incorrectly but demonstrate very different levels of literacy skill (see Table 4.8). WSC can indicate when overall spelling skills are improving, but it lacks nuance in revealing the more subtle gains a student may make over time. A scoring method that is more sensitive to smaller changes in spelling skills is the correct letter sequences (CLS) scoring approach. Here, a placeholder is added at the beginning and end of each word, and a caret is scored above each correctly sequenced pair of letters, including points for correct initial and final letters. Each caret counts as 1 point, and the number of possible points is always 1 greater than the number of letters. For instance, a three-letter word like cat includes 4 possible points (_^c^a^t^_). Using CLS in the example from Table 4.8 with the word cat, Jayla’s spelling of “kat” would receive 2 points (_ka^t^_), Josh’s “ct” would receive 2 points (_^ct^_), Emily’s “c” would receive 1 point (_^c), and Max’s “ghr5” would result in 0 points. Thus, CLS is more sensitive to approximate spellings and is therefore more sensitive to changes in students’ alphabetic, phonological, and orthographic skills over time, making it appropriate for frequent monitoring. WSC scores will not change as quickly as CLS and can be scored once per month (L. S. Fuchs et al., 1993). Studies have indicated the reliability of spelling CBM using WSC or CLS scoring (Shinn, 1989a, 1989b). Across grade levels, spelling CBM measures have demonstrated sensitivity to change in spelling skills (Clemens, Soohoo, et al., 2018; L. S. Fuchs, Fuchs, et al., 1993; Ritchey et al., 2010), and CLS better discriminates among students based on their achievement and is more predictive of subsequent spelling and literacy outcomes than WSC (Clemens, Oslund, et al., 2014; L. S. Fuchs, D. Fuchs, et al., 1993). Also, L. S. Fuchs, D. Fuchs, Hamlett, et al. (1991) observed that when teachers conducted analysis of their students’ spelling responses in addition to monitoring progress, students’ spelling growth was greater than that of students whose teachers did not use skills analysis. Progress monitoring in spelling has also been carried out in kindergarten, although it looks a little different than the standard CBM approach. Ritchey’s (2008) kindergarten spelling progress monitoring task uses five 3-letter phonetically regular words such as cat, top, and sit. Words are dictated by an examiner on an untimed basis. Students’ spelling responses can be scored for WSC, CLS, or correct sounds, another scoring approach that provides credit for a phoneme represented by a correct letter or a phonologically permissible alternative. For example, Jayla’s spelling of cat as “kat” would receive all 3 possible points because k makes the first sound in cat and the other letters are correct. Other examples of phonologically permissible responses would be writing “f” for ph as in phone, or writing “ee” for ea as in bean.
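CLS is scored by hand in practice, but the logic can be approximated in code for checking or summarizing scores. The sketch below is a simplified approximation that reproduces the cat examples above; it counts adjacent letter pairs (with boundary markers) in the student's response that also appear in the correct spelling, and it will not capture every judgment an examiner would make with longer or more complex words.

```python
def correct_letter_sequences(target, response):
    """Approximate correct-letter-sequences (CLS) scoring. Placeholders are
    added before and after each spelling; every adjacent pair in the response
    that also appears as an adjacent pair in the correct spelling earns one
    point. The maximum score is one more than the number of letters."""
    t = "_" + target.lower() + "_"
    r = "_" + response.lower() + "_"
    target_pairs = {(t[i], t[i + 1]) for i in range(len(t) - 1)}
    response_pairs = [(r[i], r[i + 1]) for i in range(len(r) - 1)]
    return sum(pair in target_pairs for pair in response_pairs)


for attempt in ["cat", "kat", "ct", "c", "ghr5"]:
    print(attempt, correct_letter_sequences("cat", attempt))
# cat 4, kat 2, ct 2, c 1, ghr5 0 -- matching the worked examples above
```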
Several studies have observed that spelling measures and spelling scoring indices collected in kindergarten demonstrate moderate to strong relations to student achievement on measures of literacy, including word reading and standardized tests of spelling (Clemens, Oslund, et al., 2014; Clemens, Soohoo, et al., 2018; Ritchey, 2008; Ritchey et al., 2010). In summary, monitoring progress in spelling can provide insight into students’ literacy functioning and can be informative for measuring responsiveness in reading and writing interventions.
Progress Monitoring Measure Selection: Mathematics Historically, mathematics progress monitoring measures have typically fallen into three categories: early numeracy, computation, and concepts/problem solving. However, the
advent of standards for mathematics instruction, such as the standards from the National Council of Teachers of Mathematics (NCTM) and the more recent Common Core State Standards, has resulted in significant changes in how mathematics is taught across grade levels. There is now a greater focus on how mathematics skills are related and build on each other, and how skills can be applied to solve various types of problems. This has resulted in changes to progress monitoring measures, which now tend to cover a greater range of skills. Progress monitoring measures have been developed to capture skills across the 10 cross-cutting strands of the NCTM standards that apply to content (Number and Operations, Algebra, Geometry, Measurement, and Data Analysis and Probability) and processes (Problem Solving, Reasoning and Proof, Communication, Connections, and Representation). There are measures that now mix skills in number operations, algebra, word problems, and geometry, for example, because this integrated format reflects how mathematics is taught under the new standards. Despite these changing perspectives, it is very difficult, if not impossible, to define a GOM in mathematics that covers the same breadth of skills that oral reading CBM does for reading proficiency. Some would argue that there are no GOMs in mathematics; rather, there are only measures that reflect general outcomes in a specific domain of mathematics. From a practical standpoint, there are still many situations where measures in a specific skill domain offer the best information for monitoring responsiveness to an intervention. For this reason, there are still progress monitoring measures that focus on more specific skills, concepts, or operations within mathematics.
Measures for Early Numerical Competencies (Early Numeracy, Number Sense)

In this text, we have used the term early numerical competencies (ENCs; Powell & L. S. Fuchs, 2012) to capture skills that have been referred to as early numeracy and number sense. ENC refers to a student’s understanding of counting, numerical relationships, and quantity. Knowledge and skills in this domain play a fundamental role in the development of more sophisticated mathematics skills. ENC measures are typically designed for kindergarten and first grade; however, it is possible to use the measures in other grades when intervention targets this area. Several types of measures have been developed for monitoring progress in ENCs. Table 7.3 provides examples. However, four types of measures have been most commonly investigated. Oral Counting (OC) asks students to count orally from 1, and scores include the highest number they count before making an error. In Number Identification (NI), students respond to a list of randomly ordered numbers for 1 minute and identify as many numbers as they can. Quantity Discrimination (QD) involves a series of comparisons of two numbers; for each comparison, students identify or circle the larger number, and scores include the number of correct responses in 1 minute. On Missing Number (MN), a series of number sequences is presented with a missing digit in varying places (e.g., 1 _ 3; 7 8 _; _ 4 5), and students name the missing number in each sequence. Originally developed by Clarke and Shinn (2004) and Clarke, Baker, et al. (2008), several of these tasks are presently available in the Assessing Proficiency in Early Number Sense (ASPENS; Clarke et al., 2011) set of screening and progress monitoring measures. Similar versions of these measures have been released by other vendors. Several studies have investigated the reliability, validity, and growth rates of ENC measures with students in kindergarten and first grade (Clarke et al., 2008; Conoyer et al., 2016; Hampton et al., 2012; Lembke & Foegen, 2009; Martinez et al., 2009). Across studies, the measures demonstrate similar reliability, with adequate alternate-form or test–retest reliability for progress monitoring. Although most measures have demonstrated
TABLE 7.3. Progress Monitoring Measures for Early Numerical Competencies

Publisher or assessment suite: Measures
Acadience: Quantity Discrimination (beginning and advanced); Number Identification; Next Number Fluency; Missing Number
AIMSweb Plus: Number Naming Fluency; Number Comparison Fluency (Pairs)
ASPENS (Sopris Learning): Numeral Identification; Magnitude Comparison; Missing Number; Basic Arithmetic Facts and Base 10
easyCBM: Numbers and Operations
FastBridge Early Math: Subitizing; Counting Objects; Match Quantity; Number Sequence; Numeral Identification; Equal Partitioning; Verbal Addition and Subtraction; Visual Story Problems; Place Value; Composing and Decomposing; Quantity Discrimination
mClass: Math
SpringMath: Multiple measures of early numerical competencies*

Note. This table provides a sample of the measures available for monitoring progress in early numerical competencies. It is not a complete list, and there are other measures that, due to space constraints, could not be included.
*SpringMath is an integrated screening, intervention, and progress monitoring system that includes progress monitoring measures covering a range of skills within the system.
the ability to predict mathematics skills assessed on other measures, QD and MN have tended to demonstrate the strongest predictive validity. One challenge to monitoring progress with ENC measures is that growth is often slow and may be less than 1 point gained every 4 weeks or more (Lembke & Foegen, 2009). Slow rates of growth are therefore a characteristic of how quickly students tend to grow in the skill overall, not necessarily a lack of student responsiveness to an intervention. Growth tends to be slowest on QD and MN; however, these measures have demonstrated the strongest ability to predict subsequent mathematics skills (Clarke et al., 2008; Conoyer et al., 2016). In summary, of the available early numeracy measures, MN and QD appear to have the strongest evidence for validity. Nonetheless, the other tools are viable options as well. As with any measure that assesses a specific skill, care should be taken to ensure that it will be sensitive to changes in skills targeted by the intended intervention and useful for
ongoing decision making. Of the four measures discussed, MN and QD both involve knowledge and skills in multiple areas, such as numeral recognition, counting, and quantity. Therefore, they can be viewed as reflective of skills that are more “downstream” than other measures and thus advantageous for monitoring progress.
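For readers who build their own practice or supplemental materials, the item formats for QD and MN are simple enough to generate programmatically. The sketch below is a hypothetical generator, not a reproduction of ASPENS or any vendor's items; number ranges and item counts are arbitrary choices for illustration.

```python
import random


def quantity_discrimination_items(n_items=30, lo=1, hi=20, seed=None):
    """Generate Quantity Discrimination items: pairs of unequal numbers;
    the student identifies or circles the larger number in each pair."""
    rng = random.Random(seed)
    return [tuple(rng.sample(range(lo, hi + 1), 2)) for _ in range(n_items)]


def missing_number_items(n_items=30, lo=0, hi=18, seed=None):
    """Generate Missing Number items: three-number counting sequences with one
    number blanked out in a varying position; the answer is returned with each item."""
    rng = random.Random(seed)
    items = []
    for _ in range(n_items):
        start = rng.randint(lo, hi)
        sequence = [start, start + 1, start + 2]
        blank = rng.randrange(3)            # which position is missing
        answer = sequence[blank]
        sequence[blank] = "_"
        items.append((sequence, answer))
    return items


print(quantity_discrimination_items(n_items=3, seed=1))
print(missing_number_items(n_items=2, seed=1))
```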
Measure Selection: Mathematics Calculation and Procedural Computation

Historically, mathematics computation measures have been developed using one of two approaches (Fuchs, 2004; Foegen et al., 2007). A robust indicators approach involves the measurement of a fairly specific mathematics skill or task that is strongly indicative of overall mathematics achievement. For example, fluency with number combinations (math facts) is an index of students’ achievement in mathematics computation overall. In contrast, a curriculum sampling approach involves identifying a large set of problems that are representative of the curriculum of instruction or grade-level standards, then sampling from this set to create a set of probes with various types of problems in each probe. As noted earlier, the advent of standards for mathematics instruction has blurred the lines between “computation” and other forms of “problem-solving” assessment because the new standards now frame mathematics instruction in a more integrated way. At present, computation skills may be included in a measure alongside items that measure students’ application of those skills to word problems or algebraic reasoning. However, there are still options available to target specific skills in calculation and procedural computation, if desired. Several types of mathematics calculation and procedural computation measures are available from various publishers, as reported in Table 7.4. Measures are available targeting relatively specific calculation skills, such as number combinations fluency, as well as measures that assess computation skills more broadly. Nearly all mathematics measures are completed independently by the student, which allows for group administration. All measures are timed and range from 2 to 6 minutes. Scoring methods vary by measure and can include tallying the number of correct problems answered, the number of correct digits across problems, or awarding a prespecified set of points per item. Foegen et al. (2007) reviewed studies of computation progress monitoring measures and found that measures developed using a robust indicators approach or a curriculum sampling approach both demonstrated reliability sufficient for progress monitoring. However, measures developed using a curriculum sampling approach tended to demonstrate stronger correlations with tests of overall mathematics achievement, most likely because the measures sampled broadly from a range of skills. Measures with particular utility for progress monitoring are those that assess number combinations fluency (i.e., math facts). Given the importance of automaticity with number combinations for mathematics success and the fact that it is a consistent source of difficulty for students who struggle in mathematics (Geary et al., 2012), intervention is likely to frequently target this skill. Number combinations fluency measures typically target addition and subtraction combinations to 10 or 20, but some measures involve multiplication and division. As noted in Table 7.4, number combinations fluency measures are available from several vendors, including Math Facts Fluency, Basic Arithmetic Facts, and CBMmath Automaticity. Although grade levels are indicated, it is possible to administer number combinations measures on a progress monitoring basis with students outside of these grade ranges if progress in that skill is of interest. Additionally, number combination
TABLE 7.4. Examples of Measures for Monitoring Progress in Mathematics

Vendor: Measures (grade levels)

Computation
Acadience: Computation (grades 1–6)
AIMSweb Plus: Math Facts Fluency (grade 1); Number Sense Fluency (grades 2–8)
ASPENS: Basic Arithmetic Facts and Base 10 (grade 1)
easyCBM: Numbers and Operations (grades K–6)
FastBridge: CBMmath Automaticity (grades K–12); CBMmath Process (grades K–12)
mClass: Math (grades K–3)
SpringMath: Multiple measures of calculation/computation (grades K–12)*

Word problems and concepts
Acadience: Concepts and Applications (grades 2–6)
easyCBM: Geometry (grades K, 1, 3); Measurement (grades K, 2, 4); Numbers and Operations (grades K–6); Numbers Operations and Algebra (grades 1–5, 7); Geometry Measurement and Algebra (grade 5); Algebra (grades 6, 8); Number Operations and Ratios (grade 6); Number Operations Algebra and Geometry (grade 7); Measurement Geometry and Algebra (grade 7); Geometry and Measurement (grade 8); Data Analysis Number Operations and Algebra (grade 8)
FastBridge: CBMmath Concepts and Applications (grades K–12)
mClass: Math (grades K–3)
SpringMath: Multiple measures of problem solving (grades K–12)*

Note. This table provides a sample of the measures available for monitoring progress in mathematics. It is not a complete list, and there are other measures that, due to space constraints, could not be included.
*SpringMath is an integrated screening, intervention, and progress monitoring system that includes progress monitoring measures covering a range of skills.
probes can be created by the user, either through probe creators available online (e.g., interventioncentral.org) or by creating a bank of all number combinations to 10 (or 20) and randomly sampling to create a set of probes. See the accompanying Academic Skills Problems Fifth Edition Workbook for additional resources to create computation probes.
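The bank-and-sample approach described above can be scripted directly. The following sketch is a hypothetical example of building a bank of addition and subtraction combinations to 10 and sampling parallel probes from it; the probe length and number of probes are arbitrary choices for illustration.

```python
import random


def fact_bank(max_value=10):
    """All addition combinations with sums up to max_value and the matching
    subtraction combinations with non-negative differences."""
    additions = [(a, "+", b, a + b) for a in range(max_value + 1)
                 for b in range(max_value + 1) if a + b <= max_value]
    subtractions = [(a, "-", b, a - b) for a in range(max_value + 1)
                    for b in range(a + 1)]
    return additions + subtractions


def make_probes(n_probes=20, items_per_probe=40, max_value=10, seed=None):
    """Randomly sample from the fact bank to build a set of parallel probes."""
    rng = random.Random(seed)
    bank = fact_bank(max_value)
    return [rng.sample(bank, items_per_probe) for _ in range(n_probes)]


probes = make_probes(seed=42)
a, op, b, answer = probes[0][0]
print(f"{a} {op} {b} = ____")   # first item on the first probe; answer kept for scoring
```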
Progress Monitoring Measures for Mathematics Concepts and Word-Problem Solving

Historically, progress monitoring measures in the domain of “concepts and problem solving” tended to focus on skills such as quantity comparison, measurement, money, word problems, interpreting charts, and basic geometry (L. S. Fuchs, Hamlett, & D. Fuchs, 1998), or measures that focused only on word problems (Jitendra et al., 2005). More recently developed measures include items involving pre-algebraic concepts and rational numbers integrated with word-problem solving, reflecting the contemporary instructional emphasis on developing skills foundational for algebra success. Pre-algebra and algebra items are included in several of the easyCBM measures, for instance. There may be situations in which practitioners wish to measure progress in word-problem solving specifically. For that purpose, Jitendra and colleagues created progress monitoring measures for word-problem solving appropriate for middle elementary students (Jitendra et al., 2014; Leh et al., 2007). Their studies indicated that when administered on a biweekly basis across 12–16 weeks, the measures were sensitive to growth, and although growth rates were slow, slope of progress was predictive of year-end achievement on a test of overall mathematics achievement. In summary, there are numerous types of measures available for monitoring progress in mathematics, enough that selecting among them can be confusing. The most important factors to consider are the student’s skill difficulties and how they align with the ultimate goals of the intervention. This should help in identifying a progress monitoring measure that will provide information on the student’s responsiveness and needs for instructional adjustment. As noted across this text, fluency with number combinations or with procedural computations is a common area of difficulty for students who struggle in mathematics, which naturally points to these types of measures for monitoring progress. There may also be situations in which it makes sense to monitor periodically with a measure that assesses mathematics skills more broadly, complemented by a specific subskill measure, such as number combinations or computation fluency. As in most situations, this should be an individual decision, with the need for information balanced against feasibility.
Progress Monitoring Measure Selection: Writing

Writing is one area of progress monitoring in which vendor-developed measures generally do not exist (although this is likely to change in the coming years). Instead, there are several types of research-backed procedures and scoring metrics that can be used to score students’ writing samples in a standardized way. Over time, scores from a chosen scoring metric (or set of metrics) can be used to track student progress and responsiveness to writing intervention. Revisiting our discussion in Chapters 2, 4, and 6 regarding writing skills and interventions, students with writing difficulties tend to demonstrate problems with transcription (i.e., low writing fluency and output, spelling), as well as composition (i.e., grammar, organization, and cohesion). Improving these areas is likely to lead to improvements in overall writing skills. Assessing and scoring writing for transcription skills is far more objective and straightforward, which is why several scoring techniques focus on
more readily quantifiable skills such as number of words written, spelling accuracy, and grammar.
CBM Approaches to Writing Measurement

In Chapter 4, we described the traditional methods for writing assessment using the CBM framework, which involves obtaining a sample of writing in a standardized, repeatable format. First, the student is provided with a so-called “story starter” that may be a picture or one to two sentences designed to prompt ideas, for example, “One day I woke up and could not believe what I saw in front of my house. It was . . . ” The student is given 1 minute to think and 3 minutes to write (some implementations for older students use 5, 7, or 10 minutes), and the examiner asks the student to stop writing immediately when the timer sounds. After the time to respond is up, the writing sample is scored using one or more scoring metrics. Jewell and Malecki (2005) categorized writing scoring metrics in three ways. Production-dependent metrics are influenced by the amount of text the student writes. Total words written (TWW) is a tally of the total number of words written within the time limit, regardless of spelling, grammatical, or semantic accuracy. It is purely a metric of writing output. Because sparse, brief writing is a hallmark characteristic of students with writing difficulties, TWW is relevant for monitoring progress in interventions aimed at improving students’ transcription fluency, planning, persistence, and other skills that increase their writing output. Words spelled correctly (WSC) is the number of correct spellings in a student’s writing sample. WSC is again informed by the tendency for students to demonstrate spelling difficulties that often limit their writing output and quality. It is a good option if spelling skills are targeted as part of a writing intervention; if spelling skills are not targeted, the metric is unlikely to be sensitive to intervention response. Correct writing sequences (CWS) is a tally of the number of adjacent pairs of words that are correct in terms of spelling, punctuation, grammar, and syntax. Each pair of words is examined, and each correctly sequenced pair receives a point. This scoring metric is a good option if the intervention targets improving students’ spelling, grammatical, and semantic accuracy in their writing. Production-independent scoring metrics are based on the proportion of accurate writing features the student produces; therefore, they are not influenced by the amount of text the student produces. Percent of correct writing sequences (%CWS) is the percentage of adjacent pairs of words that are accurate in terms of spelling, grammar, syntax, and punctuation, out of the total number of adjacent word pairs. The same approach can be used to score percent of words spelled correctly, or percent of legible words. An accurate production scoring metric is the correct minus incorrect writing sequences (CMIWS) metric, which is scored by subtracting the number of incorrect writing sequences from the number of correct sequences. It is therefore sensitive to both the amount of text the student writes and the proportion of accurately sequenced words. Research has revealed that the metrics vary in terms of their adequacy for progress monitoring. Research reviews have found that production-dependent metrics like TWW, WSC, and CWS demonstrate weak to moderate reliability, indicating that students’ scores vary a great deal across measurement occasions (McMaster & Espin, 2007; McMaster et al., 2012).
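The arithmetic behind these metrics is straightforward once a sample has been hand-scored. The sketch below is a hypothetical aggregation helper: it assumes the examiner has already counted total words, correct spellings, and correct and incorrect writing sequences, and it simply combines those counts into TWW, WSC, CWS, %CWS, and CMIWS; nothing here judges spelling or grammar automatically.

```python
def writing_metrics(total_words, words_spelled_correctly,
                    correct_sequences, incorrect_sequences):
    """Combine hand-scored counts from one CBM writing sample into the
    common scoring metrics."""
    total_sequences = correct_sequences + incorrect_sequences
    return {
        "TWW": total_words,                                # total words written
        "WSC": words_spelled_correctly,                    # words spelled correctly
        "CWS": correct_sequences,                          # correct writing sequences
        "%CWS": (100.0 * correct_sequences / total_sequences) if total_sequences else 0.0,
        "CMIWS": correct_sequences - incorrect_sequences,  # correct minus incorrect sequences
    }


# Hypothetical 3-minute sample: 42 words, 36 spelled correctly,
# 35 correct and 9 incorrect writing sequences
print(writing_metrics(42, 36, 35, 9))
```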
The validity of production-dependent metrics is stronger for younger students compared to older students because writing “quality” in the early grades is often characterized by writing output and technical accuracy. In the middle elementary grades and beyond, writing quality is more defined by cohesion, organization, clarity, and creativity;
thus, production-dependent metrics lose some of the ability to reflect students’ overall skills in written expression. In contrast, students’ scores on production-independent (e.g., %CWS) and accurate production (CMIWS) metrics tend to be more strongly correlated with the scores on standardized tests of writing, especially for students in later elementary grades and beyond (Amato & Watkins, 2011; Espin et al., 2008; Jewell & Malecki, 2007; Mercer, Martinez, et al., 2012; Romig et al., 2017; Truckenmiller et al., 2020). These metrics, and CMIWS in particular, have also demonstrated promise for use with students from many language backgrounds (Keller-Margulis et al., 2016). CBM writing scores are generally sensitive to growth (McMaster et al., 2017). McMaster and colleagues (2011) administered CBM writing tasks on a weekly basis with first-grade students and found that slopes stable enough for decision making were possible after 8–9 data points. Girls have demonstrated stronger CBM writing performance compared to boys at single points in time (consistent with other work in written expression); however, there are fewer gender differences in the rates of growth over time (e.g., Keller-Margulis et al., 2015; Truckenmiller et al., 2014).
Extensions to CBM Writing Approaches, and Alternatives The traditional CBM approach to writing measurement provides a good basis for monitoring progress; however, it has limitations. For one, assessment is based only on a brief sample of writing. Studies have investigated extending the length of time that students are given to write and have found that providing students with more time to write (i.e., 7–10 minutes) improves the reliability of repeated writing samples (i.e., scores across multiple probes will be more similar than if students are provided with only 3 minutes). However, extending the time limits has generally not indicated appreciable differences in the validity of writing samples across studies, including 3-, 5-, 7-, and 10-minute administration times, although there is a slight advantage to providing 10 or 15 minutes with students in the late elementary and middle school grades (e.g., Espin et al., 2000; Furey et al., 2016; McMaster & Campbell, 2008; Romig et al., 2021; Truckenmiller et al., 2020; Weissenburger & Espin, 2005). Additionally, although narrative story starters can be appropriate in early elementary grades, they are often inconsistent with the type of informational and expository writing expected of students in later grades. Extensions include expository prompts, which Espin, De La Paz, et al. (2005) found to result in writing samples that were strongly correlated with criterion measures of writing for middle school students. Similar results were observed by McMaster and Campbell (2008) for expository prompts with students in the fifth and seventh grades. The work of Truckenmiller and colleagues (2020) on the Writing Architect offers additional innovation in progress monitoring for writing. More reflective of the writing process and contemporary expectations for students in school, it shows strong potential as a tool for monitoring writing progress, as it captures skills in transcription and composition quality. The Writing Architect is described in more detail in Chapter 4. ALTERNATIVE SCORING AND RUBRICS
Other metrics and scoring strategies have been developed for monitoring progress in composition quality. Table 7.5 shows a list of 12 measures developed by Zipprich (1995) to score the narrative written essays of children 9–12 years old. She also developed a holistic scoring checklist and a scale for evaluating each student’s performance when scoring these essays (see Figure 7.1). Graham and Harris (1989) described a similar scale to assess
writing skills. In their scale, the presence or absence of eight elements of story grammar was assessed: main character, locale, time, starter event, goal, action, ending, and reaction. For each element, scores from 0 to 4 were assigned. Likewise, a holistic scale using a 1 (lowest quality) to 7 (highest quality) rating was employed. These measures were used to measure improvement in students’ writing performance over time. Troia (2018) developed an essay quality scoring rubric, also used in the Writing Architect (Truckenmiller et al., 2020), that scores a sample of writing on seven dimensions: purpose, logical coherence, concluding section or sentence, cohesion, supporting details from source materials,
TABLE 7.5. Target Behaviors Used to Score Narrative Essay Exams

1. Planning time: Length of time, measured in minutes, a student requires to complete “My Web for Story Writing.” (Measurement: Teacher records minutes on blackboard.)
2. Words produced: Individual words within a story produced by a student. (Measurement: Evaluator count.)
3. Thought unit: A group of words that cannot be further divided without the disappearance of its essential meaning. (Measurement: Evaluator count.)
4. Fragment: An incomplete thought. (Measurement: Evaluator count.)

Sentence types
5. Simple: A sentence expressing a complete thought that contains a subject and predicate. (Measurement: Evaluator count.)
6. Compound: A sentence containing two or more simple sentences but no subordinate clauses. (Measurement: Evaluator count.)
7. Compound/complex: A sentence containing two or more simple sentences and one or more subordinate clauses. (Measurement: Evaluator count.)
8. Holistic score: A quality-of-writing factor developed by this researcher. (Measurement: A 14-item criteria checklist used to render a score that converts to a Likert scale score of 1.0–3.0.)

Mechanics
9. Spelling: A subjective evaluation of overall spelling accuracy. (Measurement: Scale developed by researcher.)
10. Capitals: A subjective evaluation of correct use of capitalization as taught through standard English language text.
11. Punctuation: A subjective evaluation of correct use of punctuation as taught through standard English language text. (Measurement: Scale developed by researcher.)
12. Density factor: A quality factor developed by researcher to measure amount of information contained in each thought unit. (Measurement: A criteria checklist used to render a score for number of ideas included in a thought unit.)

Note. From Zipprich (1995, p. 7). Copyright 1995 by PRO-ED, Inc. Reprinted by permission.
Checklist items (each rated Yes/No by the evaluator and a checker, with a comparison column):
1. Did the student include a title?
2. Did the student have a clear introduction to the story (i.e., a statement of the problem or beginning of the story line)?
3. Did the student identify characters?
4. Did the student state the goal of the story?
5. Did the student add action to the story?
6. Did the student state an outcome?
7. Did the student write more than one paragraph?
8. Did each paragraph deal with only one topic?
9. Are the major points in each paragraph presented in a correct sequence?
10. Do the paragraphs have a beginning sentence that adequately introduces the idea discussed?
11. Is each paragraph started on a new line?
12. Did the student sequence the story appropriately?
13. Did the student include only relevant information?
Total “YES” / Conversion Score*

*Holistic Score Conversion Scale: Total 0–3 = Score 1.0 (Unacceptable); 4–7 = 1.5 (Unacceptable/some improvement); 8–10 = 2.0 (Acceptable with errors/needs improvement); 11–12 = 2.5 (Acceptable with errors/showing improvement); 13–14 = 3.0 (Acceptable/meets criteria).

FIGURE 7.1. Holistic scoring device used for evaluating written language skills. From Zipprich (1995, p. 8). Copyright © 1995 PRO-ED, Inc. Reprinted by permission.
language and vocabulary choice, and grammar/usage/mechanics. Truckenmiller et al. (2020) observed moderate to strong correlations with students’ scores on a standardized measure of written expression and scores on a state accountability assessment for writing, especially in grades 5 and up. Troia’s (2018) rubric is provided in the Academic Skills Problems Fifth Edition Workbook. Although rubric scoring methods suggest the potential to provide a more holistic view of students’ composition quality, and thus enhance evaluation of students’ written responses over scoring metrics such as TWW, %CWS, and CMIWS, little research has evaluated their reliability, validity, and sensitivity to change in a progress monitoring role. Moreover, few studies have compared them to other scoring metrics. Gansle et al. (2006) collected writing data for elementary students with writing CBMs (including a
variety of scoring metrics), and a holistic rating rubric, the six traits of writing model (officially known as the 6 + 1 Trait Writing Model of Instruction and Assessment), that was widely used at the time across several states. Analyses found that CBM scores were more reliable than the six-trait ratings and more strongly related to scores on a standardized measure of writing. Results also revealed that ratings across the dimensions of the six-trait model were not distinct from each other. Overall, the findings led the authors to conclude that the holistic rating method offered no benefits to measuring writing over CBM approaches, including no observable benefits for its consideration as a supplement to CBM scores. Other rubrics described earlier may be different, but Gansle and colleagues’ findings suggest the need for caution, and certainly more research, on the use of scoring rubrics and holistic ratings in monitoring writing progress. COMPUTERIZED SCORING
Additional work has investigated the use of computer scoring of students’ writing responses. There are several potential advantages to this, most notably for efficiency, because scoring students’ written responses with metrics such as CMIWS can be time-consuming and error-prone. Additionally, computer software exists that can rapidly evaluate text characteristics, such as complexity and word use. Mercer et al. (2019) used the Coh-Metrix software (cohmetrix.com), a free text analysis program that can evaluate various aspects of text characteristics and complexity, including lexical (word) diversity and semantic (vocabulary) variation. With students in second through fifth grades, Mercer et al. found that Coh-Metrix computer scoring improved somewhat on CBM scoring metrics in terms of validity; however, the differences were not large. These results and others (e.g., Keller-Margulis et al., 2021) have indicated the potential of computer scoring to automate the scoring process and to identify writing quality features that would be difficult and time-consuming to score by hand.

ALTERNATIVE PROMPT TYPES FOR EARLY GRADES
Additional work has extended CBM measurement of writing downward to kindergarten and early elementary grades, and has examined procedures such as picture–word prompts, word dictation, word copying, and sentence copying (Coker & Ritchey, 2014; Keller-Margulis et al., 2019; Lembke et al., 2003; McMaster et al., 2009; Ritchey, 2006). A meta-analysis by Romig et al. (2021) revealed that all types of early writing prompts were associated with moderate validity and no appreciable differences among them (their review also included the types of prompts used with older learners). In summary, research has made significant advancements and innovations in progress monitoring for writing skills, with new types of prompting procedures and scoring methods. As with all of the academic domains, decisions on what to use for monitoring progress depend on the overall goals of the intervention. Nevertheless, production-independent and accurate production scoring metrics, such as CMIWS, appear to have the most support thus far, especially for students in middle elementary grades and beyond. Research is needed to determine if scoring rubrics or holistic ratings improve on or effectively complement scores provided by metrics like CMIWS. Increasing the time students are allotted to write to 10 or 15 minutes may improve reliability and validity for students in upper elementary grades and beyond. Expository and passage prompts are intriguing
advancements, and computer-based administration and scoring will be areas of innovation for progress monitoring in writing in the coming years.
Progress Monitoring Measure Selection: Behavior

Across this text, we have indicated the importance of behaviors such as engagement, effort, self-regulation, and following rules and expectations for supporting academic achievement. For students whose behavior is part of their academic difficulties, components to support behavior should be integrated within the intervention, and progress monitoring can indicate whether these components are associated with improved target behaviors. Options are available for monitoring progress in behavior and are summarized by NCII (intensiveintervention.org; see the Behavior Progress Monitoring Chart). Conley et al. (2019) discussed methods and strategies for behavior progress monitoring. Additionally, any of the direct observation methods, such as the BOSS (or a subset of those behaviors) described in Chapter 3, can be used on a repeated basis for monitoring changes in students’ behavior over time. However, any method that relies on frequent direct observation has logistical limitations because it requires the presence of a person to observe. An attractive alternative exists. Direct behavior ratings (DBRs; Chafouleas, 2011; Chafouleas et al., 2009) are flexible and efficient tools that combine the strengths of behavior rating scales and systematic direct observation. They were designed as a standardized framework for monitoring behavior change. Standard DBRs include a set of three keystone behaviors that are associated with most aspects of learning, successful achievement, and school functioning: academically engaged (i.e., on-task), respectful (i.e., compliant, follows directions), and disruptive (externalizing behavior). DBRs can also be adapted to focus on fewer behaviors, or include user-specific behaviors. Immediately following a specified time period, such as an academic activity, intervention session, or class period, the teacher or interventionist completes a DBR by rating the extent to which each behavior occurred during that time period on a scale of 0 to 10. The scale includes a visual reference resembling a number line. Ratings can take less than 30 seconds per student and are repeated on a regular basis during a time period of interest. These data can then be charted to evaluate behavior change over time. Led by Chafouleas, Riley-Tillman, and colleagues, an extensive program of research developed and evaluated DBRs across a series of studies (see Chafouleas [2011] for a review). Research has examined everything from the target behaviors and their definitions to the size of the increments on the scale. DBRs demonstrate strong interrater reliability, and ratings on the behaviors are highly correlated with systematic direct observations of the same behaviors (Chafouleas, 2011; Christ et al., 2011). DBRs are also sensitive to changes in students’ behavior (Chafouleas et al., 2012). Thus, their strong technical properties combined with high efficiency make DBRs an excellent option for monitoring progress in behavior. Additional information and access to DBR tools are available from the developers at dbr.education.uconn.edu.
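Because DBR data are intended to be charted over time, some users may want a simple way to log ratings and summarize them by week. The sketch below is a hypothetical logging helper, not an official DBR tool; the behavior labels follow the three keystone behaviors described above, and the weekly summary is one of many ways the data could be grouped for charting.

```python
from statistics import mean

KEYSTONE_BEHAVIORS = ("academically_engaged", "respectful", "disruptive")


class DBRLog:
    """Log 0-10 direct behavior ratings after each observation period and
    summarize them by week for charting. A hypothetical helper only."""

    def __init__(self):
        self.periods = []                      # one dict of ratings per rated period

    def add(self, ratings):
        if not all(0 <= ratings[b] <= 10 for b in KEYSTONE_BEHAVIORS):
            raise ValueError("DBR ratings must fall on the 0-10 scale")
        self.periods.append({b: ratings[b] for b in KEYSTONE_BEHAVIORS})

    def weekly_means(self, periods_per_week=5):
        """Average each behavior across successive weeks."""
        chunks = [self.periods[i:i + periods_per_week]
                  for i in range(0, len(self.periods), periods_per_week)]
        return [{b: mean(p[b] for p in chunk) for b in KEYSTONE_BEHAVIORS}
                for chunk in chunks]


log = DBRLog()
log.add({"academically_engaged": 4, "respectful": 7, "disruptive": 6})
log.add({"academically_engaged": 6, "respectful": 8, "disruptive": 4})
print(log.weekly_means())   # one averaged entry per week of ratings
```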
What about Progress Monitoring with Computer-Adaptive Tests?

Computer-based assessments have proliferated in U.S. schools given the combination of technological advances, the availability of high computing power in schools, and access to high-speed internet. The idea of administering and scoring CBM progress monitoring
assessments on computers has been around for quite some time (e.g., L. S. Fuchs & Fuchs, 1992). Additionally, computer-based scoring of CBM measures is now available from most CBM vendors; these computer-based forms of the measures are often identical to their paper-based versions and offer time savings in scoring and entering data into the system. However, these parallel computer-based adaptations of paper-based measures are not the focus of this section.

Another form of computer-based assessment is the computer-adaptive test (CAT). As with a traditional test, the ultimate goal of a CAT is to establish a reliable and valid estimate of a student's skills or ability. However, unlike traditional tests, which administer a large number of items to determine a student's achievement or ability, CATs are designed to make that estimate with fewer items in less time. CATs are commonly used in college entrance and certification exams; the GRE, which was (until recently) commonly required for graduate school applications, is an example of a CAT.

CATs differ from a traditional test, in which the examinee answers all items or as many items as possible in a given time frame. A CAT contains a large bank of test items that have all been previously assigned a difficulty ranking and placed on a vertical scale. The CAT draws from this item bank to present test items to the examinee, and the difficulty of each subsequent item depends on the examinee's accuracy in responding to the previous item: after each correct response, the test presents a more difficult item; after each incorrect response, the test presents an easier item. The test is therefore adaptive; it adjusts the difficulty of the content presented based on the performance of the individual taking the test. As a result, two different test-takers may see mostly different test items. A higher-achieving examinee will receive more difficult items (provided they are responding correctly), and a lower-achieving examinee will see easier items based on more frequent incorrect responses. The test ends once the CAT has administered enough items to estimate the test-taker's score (a simplified sketch of this item-selection loop appears below). The underlying idea is that a CAT estimates an individual's ability or achievement with fewer items and in much less time than a traditional paper-based assessment.

CATs perform very well in the role of evaluating an individual's performance, at one point in time, across a broad domain of knowledge or achievement. For K–12 education, CATs are available from various vendors in the areas of reading, early literacy, and mathematics, and CATs can function effectively in a universal screening role. However, even though most were not originally designed for frequent administration on a progress monitoring basis, several vendors market CATs as options for replacing paper-based progress monitoring measures.

Research indicates the need for considerable caution when considering the use of a CAT for progress monitoring. First, in reading and mathematics, CATs have not shown the ability to predict subsequent achievement better than much shorter paper-based CBM measures, and in some cases, CATs were weaker predictors than CBMs (Clemens et al., 2015; Clemens, Hsiao, et al., 2019; Clemens, Lee, et al., 2022; Klingbeil et al., 2017; Shapiro & Gebhardt, 2012; Shapiro et al., 2015).
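To make the adaptive logic concrete, here is a deliberately simplified sketch (our illustration; operational CATs select items and estimate ability using item response theory, not the one-step-up/one-step-down rule and toy stopping criterion shown here).

```python
import random

def run_simple_cat(answer_item, item_difficulties, max_items=20):
    """Toy adaptive test: present a harder item after a correct response and an
    easier item after an incorrect one. Real CATs use IRT-based item selection
    and ability estimation; this only illustrates the adapt-on-accuracy idea."""
    bank = sorted(item_difficulties)       # the item bank on a vertical difficulty scale
    index = len(bank) // 2                 # start near the middle of the scale
    administered = []
    for _ in range(max_items):
        difficulty = bank[index]
        correct = answer_item(difficulty)  # the examinee's response to this item
        administered.append((difficulty, correct))
        if correct:
            index = min(index + 1, len(bank) - 1)  # step up to a harder item
        else:
            index = max(index - 1, 0)              # step down to an easier item
    # Crude "estimate": the difficulty level where responding settled.
    return administered[-1][0], administered

# Hypothetical examinee who usually answers items below difficulty 0.6 correctly.
examinee = lambda difficulty: random.random() < (0.9 if difficulty < 0.6 else 0.2)
estimate, log = run_simple_cat(examinee, [i / 20 for i in range(20)])
print("Approximate difficulty level reached:", estimate)
```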
We (Clemens et al., 2015) found that although a CAT of early reading skills administered in kindergarten was predictive of reading skills at the end of kindergarten and first grade, it did not improve the prediction of reading outcomes over paper-based CBM measures, and in some cases it was a weaker predictor of later reading skills than a 1-minute measure of word identification fluency. Also in kindergarten, we (Clemens et al., 2019) conducted dominance analyses and found that other paper-based CBM measures, such as LSF, NWF, and WRF, were stronger predictors of kindergarten year-end reading skills when compared at single points in time in December,
January, February, and April. In fact, measures such as WRF tended to completely dominate the CAT in predicting year-end reading skills. The only occasion on which the CAT demonstrated a stronger ability than the CBM tools to predict outcomes was when it was administered in October of kindergarten, perhaps because the CAT was able to administer items tapping more basic knowledge (i.e., it has a lower test "floor").

Second, studies have raised questions about the technical adequacy of CATs as progress monitoring options. CATs were designed to provide an ability estimate at one point in time, not to measure growth. Shapiro et al. (2015) observed very low rates of growth on a CAT measuring mathematics, combined with a high standard error of measurement (SEM). The combination of these two factors meant that progress would need to be monitored for at least 3 months before any determination could be made about whether a student's scores did or did not improve. Three months is a long time to be unable to make any instructional decisions. Van Norman, Nelson, and Parker (2017) examined data from a mathematics CAT collected on a progress monitoring basis with students in grades 4 to 6. Given the variability in scores on each measurement occasion, which affects the stability of slope estimates (i.e., the precision and confidence one has that a student's rate of progress is "true" and not an artifact of variability associated with being tested), using the data to make instructional decisions was only possible after 14–18 weeks (i.e., 3–4 months) when data were collected weekly, and after 20–24 weeks (i.e., 5–6 months) when data were collected every other week. Given that the average test administration was 30 minutes per occasion, the weekly schedule would represent at least 7 hours of assessment, and any instructional decisions would have to wait upward of 4 months. This is not feasible or helpful in a progress monitoring context, where the very purpose is to inform timely adjustments to instruction. Similar evidence of high variability in scores that impacts confidence in decision making was observed by Nelson, Van Norman, and Klingbeil (2017).

Other work has examined the validity of slope estimates from CATs, which refers to the extent to which students' slope of improvement over time on the CAT is associated with subsequent outcomes. Clemens et al. (2022) administered a CAT on four occasions in kindergarten to a sample of students with early reading difficulties, along with a set of CBM early reading progress monitoring tools. First, reliability between measurement occasions of the CAT was lower than that of the CBM tools; however, the CAT administrations were further apart. More importantly, growth was slow on the CAT compared to growth on the CBM measures, and slope of progress on the CAT was a weaker predictor of later reading skills than slope of improvement on the CBM tools. In fact, rates of growth on the CAT across kindergarten were highly similar among students who later demonstrated stronger, middle, and weaker reading skills at the end of first grade (in contrast to some of the CBM tools, such as LSF and WRF, for which slope was much more discriminative of subsequent reading status).

A closer look at the characteristics of CATs reveals why they may not be ideal choices for progress monitoring:
1. Since the methodology is better suited to a summative, single-point-in-time assessment than to formative assessment, some publishers do not recommend administering their CATs more than once per month, and in some cases no more than four times per year. Based on the study by Van Norman, Nelson, and Parker (2017), once-a-month administration would require 20–28 weeks (roughly half a school year) for practitioners to be confident that the feedback from the CAT is a valid index of students' responsiveness to an intervention (or lack thereof).
2. Given the adaptive nature of the assessment, not all students may see items in all skill areas contained in the CAT. However, the software might estimate subscale scores even though students were not exposed to content (or saw very few items) in those skill areas. This is often not communicated in the score reports generated by the software and is inconsistent with the model of direct assessment that we advocate in this text.

3. The measures are resource-intensive. Even though students take them independently, they can be more expensive to access than CBM tools, they require more time per student to administer, and they often require sufficient computer hardware and a stable, high-speed internet connection in the school.

4. It is not clear whether CATs provide information that is useful for instructional planning or for identifying when intervention changes should be made.

5. There are additional concerns when considering their use with young students (i.e., kindergarten and first grade). Students must be comfortable using the response interface (e.g., a computer mouse or touchscreen, depending on what the test and hardware support). Tests are designed to be taken independently, thus requiring sustained on-task behavior for at least 15 minutes (and up to 30 in some cases), which is a challenging requirement for kindergarten students, especially students with academic difficulties.

6. Most CATs use multiple-choice responding or a similar form of response selection; therefore, students receptively select a correct response, such as a letter–sound or a correct word, rather than producing the response (e.g., reading the word aloud) as they would in a natural context.

In summary, CATs are an interesting advancement in academic assessment. However, educators must exercise considerable caution in how they are used. CATs are well suited for occasions in which educators want a global estimate of student performance within or across academic domains at a single point in time, such as summative assessment. CATs are also well suited to universal screening, and there is evidence that they can serve as good universal screening tools beginning in the middle elementary grades (January & Ardoin, 2015; Klingbeil, Nelson, et al., 2017). However, CATs are not well suited for progress monitoring roles, as revealed by research on their technical properties, the characteristics that would be necessary for frequent and repeated administration, and the interpretability of their data for making instructional decisions. This may change with subsequent development and testing of CATs.
Should I Use a GOM, an SSMM, or Both?

Today, given the diversity of measures and the skills they assess, it is not as easy to identify what is a "GOM" (general outcome measure) and what is an "SSMM" (specific subskill mastery measure) as it once was. In general, it is best to examine the breadth of the items and skills required to perform well on the measure. If the measure requires a broad range of skills, or performance depends on proficiency with multiple foundational subskills, it is closer to being a GOM. If performing well involves a narrower range of skills, or skills that are more discrete and foundational, it is closer to being an SSMM.
Decisions regarding if and when to include an SSMM in a progress monitoring plan depend on several questions:

1. Will the SSMM provide information, over and above the general outcome measure, that is necessary for understanding responsiveness and adjusting instruction?
2. If existing or published SSMM tools are not available, will users be able to create SSMM probes?
3. Will users be able to administer the SSMM and the general outcome measure across the same period, and will this be feasible?

If the answer to all questions is "yes," then an SSMM could be something to include in a progress monitoring plan. The academic domain and the student's level of functioning can also play a role in the decision. Mathematics is an area in which measures more naturally fall toward the SSMM side of the spectrum; imagine trying to create a measure that captures even half the types of skills students are taught in a given grade. Measures within a mathematics skill area, such as procedural computation, can reflect a range of skills and thus represent a type of GOM for that skill. Early literacy is another area in which SSMMs are more common. However, even though measures like number combination (math facts) fluency, letter–sound fluency, or word reading fluency measure more specific skills, readers should not lose sight of the complexity of skills that make performance on those measures possible.

All of these factors argue for a more nuanced, strategic view of GOMs and SSMMs in progress monitoring (VanDerHeyden & Burns, 2018). One type is not universally better than another. What matters most is the individual student and the intervention, and the two types of measures can most certainly work in tandem.
Summary: Selecting Measures for Progress Monitoring

There is a wide array of measures available for monitoring progress in reading, mathematics, and writing. With the exception of CATs, many types of measures have evidence supporting their use in a progress monitoring role. It may seem overwhelming at first to try to identify the measure best suited for progress monitoring with an individual student. However, the important things to keep in mind are the hypothesis, formed and revised across the assessment, regarding the skill deficits that are most likely the cause of the student's overall achievement difficulties, and the ultimate goals of the intervention selected to target those skills. Keeping the hypothesis and the aim of the intervention in mind will help identify a progress monitoring measure that indicates whether the intervention is having the desired effect on the skills of greatest interest and that provides data that can be readily translated into instructional decisions.

Overall, measures that fall closer to the GOM side of the spectrum offer efficiency for monitoring progress; however, there are situations in which specific subskill mastery measures (SSMMs) can provide valuable insight into a student's responsiveness to intervention and the need to adjust instruction, and they can be used in conjunction with a GOM. In short, the most important thing to remember is that the very goal of progress monitoring is to provide data that readily inform when a current program should continue, or when instruction needs to be adjusted to improve the student's performance. Measure selection should proceed with that firmly in mind. Once the measure is selected, it is time to move on to setting a progress monitoring goal.
STEP 2: SETTING A PROGRESS MONITORING GOAL

A goal in progress monitoring, as we use the term across this chapter, is the score on the progress monitoring measure that the student is expected to achieve at the end of a given period of time. Setting goals is the basis for making instructional decisions with the data because, as discussed in the following section, data-based decision making involves examining whether the student is on track to meet their goal by the end of the time frame.

After a goal has been established, the user sets up the student's progress monitoring graph by plotting their performance on the start date (i.e., their baseline score), plotting the goal score for a date on the far right of the graph (e.g., the last progress monitoring assessment for the school year or other time frame), and drawing a line connecting the student's baseline data point with the goal. This line serves as a reference point, sometimes referred to as an aim line or goal line, because it reflects the rate of growth the student should demonstrate to achieve the goal within the time frame. Student performance below this aim line places the student on a trajectory to miss their goal, thereby prompting instructional decisions. Figure 7.2 illustrates the common features of a progress monitoring graph.

Because instructional decisions are often based on the student's observed rate of growth (i.e., slope) relative to the target growth rate, goal setting is an important aspect of progress monitoring. A guiding principle in goal setting is that goals should be ambitious but attainable. The need for the goal to be ambitious reflects the fact that the student is performing below expectations, needs to demonstrate a greater rate of growth than their peers in order to narrow the gap, and will be receiving supplemental intervention that is more intensive than typical instruction. At the same time, the goal must still be realistic and attainable.

FIGURE 7.2. Example of a progress monitoring line graph.
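Because the aim line is simply a straight line from the baseline score to the goal score, its expected value at any week can be computed by linear interpolation. The following minimal sketch (ours; the baseline, goal, and time frame are hypothetical) illustrates the calculation.

```python
def aim_line_value(baseline_score, goal_score, total_weeks, week):
    """Expected score at a given week if the student grows along the aim line,
    i.e., the straight line connecting the baseline score to the goal score."""
    weekly_growth = (goal_score - baseline_score) / total_weeks
    return baseline_score + weekly_growth * week

# Hypothetical example: baseline of 37 WCPM, year-end goal of 77 WCPM, 28 weeks of monitoring.
for week in (0, 10, 20, 28):
    print(f"Week {week:2d}: aim line at {aim_line_value(37, 77, 28, week):.1f} WCPM")
```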
Determining the Time Frame for Monitoring (the Goal End Date)

Before discussing the different methods for goal setting, it is first important to consider the time frame, or ending date, chosen for monitoring a student's progress. The end date has implications for how a goal is set. The most straightforward approach is to use the end of the school year as the end date; in other words, the last day on which progress monitoring data will be collected for the school year is the date by which the goal should be achieved. A year-end goal can be set even if the intervention is not expected to be implemented for that long, because student progress can be evaluated at any point in time by comparing the student's observed rate of growth to the expected rate of growth.

The advantages of choosing the end of the school year as the goal end date are as follows: (1) it permits the use of end-of-year normative data or benchmarks for setting goals; (2) it provides a more "standard" approach to goal setting that can be used across students, and thus facilitates ease and familiarity with goal-setting procedures across staff; (3) it aids interpretation by more readily illustrating progress toward year-end expectations; and (4) it does not require recalculating a goal if what was initially believed to be a short-term intervention must be implemented for longer.

Although year-end goals have several advantages, there may be situations in which the end of the school year does not make sense as the endpoint for monitoring a student's progress. For example, the period for evaluating a student's progress toward their IEP goals may end at a specific point during the year, and the end of that period might be a more suitable end date for progress monitoring. A short-term progress monitoring goal may also be desired when a student's responsiveness to an intervention is evaluated across a shorter time frame, such as an 8-week (2-month) period, as part of a psychoeducational evaluation. It should be noted, however, that a year-end goal date can still be used very effectively in these situations; at the IEP meeting, or at the specific time when responsiveness is evaluated, the team would simply consider the student's observed progress in relation to their expected rate of growth on that day.

The goal-setting time frame should also be informed by reasonable expectations for change. In other words, the time frame selected should be long enough that (1) the intervention has a chance to work, and (2) meaningful skill growth is possible. Gersten, Compton, et al. (2009) recommended that an intervention be implemented for a minimum of 6 weeks before responsiveness is evaluated, but a longer period may be needed depending on the severity of the student's difficulties and the intensity and type of intervention support that can be provided. This is another reason why year-end goals are advantageous.

In summary, setting a year-end goal is the most straightforward approach and may improve implementation and data interpretation. A student's responsiveness can be evaluated at any point prior to the end of the year. Progress monitoring involves a set of decisions; therefore, any means of making the process simpler and more efficient will aid implementation on a repeated basis across students.
Still, it is possible to use a shorter time frame as an end date for monitoring, but readers should be aware this may pose additional challenges to goal setting and interpreting the data.
Methods for Goal Setting

There are several approaches to setting progress monitoring goals. Deciding which method to use depends in part on the resources available to the user and their comfort level with the goal-setting method. In addition to the descriptions provided here, further detail on goal-setting methods is provided by the NCII (intensiveintervention.org; see the learning module resources under "Implementation Support"). We discuss four goal-setting methods here: benchmark, norm-referenced, rate of improvement, and intra-individual differences. We also discuss vendor-specific goal-setting methods that are provided to subscribers of their respective systems.
Benchmark Goal-Setting Methods

The term benchmarks, in a progress monitoring and response-to-intervention context, refers to target scores that have been established for specific progress monitoring tools (and are often specific to the vendor of that tool). These benchmark targets were established by the publisher, based on research showing that attainment of a given score was associated with a high likelihood of success on a criterion test or of meeting the next benchmark target in a subsequent grade. Well-known examples of benchmark targets may be found in the DIBELS series of measures. In the Eighth Edition of DIBELS, the authors provide benchmark targets for each measure for the beginning, middle, and end of the year. The scores indicate risk categories: negligible risk, minimal risk, some risk, and at-risk levels. Acadience also indicates four levels: above benchmark, at benchmark, below benchmark, and well below benchmark.

Using benchmark targets for setting progress monitoring goals simply involves selecting the benchmark target score associated with that measure and the end of the desired progress monitoring period (i.e., the end of the school year). In contrast to the methods described later, benchmark targets do not require calculation; the user simply plots the year-end benchmark score as the goal for the end of the year. Thus, benchmark scores offer a clear and easily communicated method for goal setting. Additionally, benchmark targets typically have empirical support as being predictive of successful achievement or of attaining subsequent benchmarks in the future.

Although they are simple and straightforward, benchmark targets may not always be the best choice for progress monitoring goal setting, especially when students demonstrate very low achievement. For students with very low baseline scores, achieving the grade-level benchmark target by the end of the school year may require a very high (and unrealistic) rate of growth. Additionally, if a student's progress is being monitored out of grade level (i.e., the progress monitoring probes are from a lower grade level than the student's grade of record, given their functioning level), grade-level benchmark targets do not apply because the student is being monitored in material from a different grade level than that used to empirically establish the target. Some measures also do not have any established benchmark targets. For these reasons, the benchmark method of goal setting may be most appropriate for students who are achieving below the average range but whose performance is not severely discrepant from that of their peers.

One advantage of having multiple target scores based on different risk levels, as provided by tools such as Acadience and DIBELS, is that the user can select a target score that reflects more realistic expectations. For example, for a student performing at very low levels compared to other students in grade-level material (i.e., in the "at-risk" range in the DIBELS targets), a goal might be to attain the target for the next risk level (i.e.,
the "some-risk" level), provided this is a sufficiently ambitious goal. We discuss determining ambitiousness later.
Norm-Referenced Goal Setting

Norm-referenced goal setting uses normative data to select a target score associated with a desired percentile level of achievement. Normative data may come from the vendor of the progress monitoring tool, from local data specific to the school or district, or from another national normative dataset. This method requires consulting a normative growth table, which is available from all major vendors of progress monitoring tools, as well as from other sources. For example, a student receiving additional support in reading may currently have scores that place them at the 5th percentile relative to same-grade peers for that time of year. An objective of intervention is to improve the student's reading skills and thus narrow the gap with their peers. Therefore, the teacher might use a target score associated with the 25th percentile (i.e., the lower bound of the average range) by the end of the school year, so that meeting the goal improves the student's relative standing. For another student currently performing at the 18th percentile, a goal might be to meet the score associated with the 40th percentile by the end of the school year.

Like benchmark goals, norm-referenced goals tend to be simple and straightforward to understand and do not require calculation. However, it may not always be clear what percentile to use as a "goal." Should the 50th percentile represent the expectation? If not, is the 25th percentile a better expectation for a student performing well below grade level? Might something in between, such as the 40th percentile, be more suitable for some students? There are no answers to these questions that will be correct in all situations. However, we can provide some context that may aid these decisions. The 25th percentile is often considered the lower bound of the average range. Some might consider 1 standard deviation below the mean as the lower bound of "average"; –1 SD corresponds to approximately the 16th percentile, so in that sense the 25th percentile would be within, but at the low end of, the average range. For a student currently scoring at the 5th percentile (i.e., more than 1.5 standard deviations below the mean), which is considered well below average, a goal that gets them to at least a low-average level may be a good first step in a long-term plan for remediation. On the other hand, the 25th percentile would not be an ideal goal for a student who is currently scoring at the 18th percentile, because that goal would likely not be ambitious enough. For that student, a goal at the 40th percentile might be better. As with benchmark goals, a good practice when using norm-referenced goals is to check the rate of improvement (ROI) the student would need to demonstrate to reach the goal, to ensure that the goal is ambitious while remaining realistic. We describe how to calculate ROI below.

In some instances, schools or districts that have been collecting universal screening or progress monitoring data for some time will have large local normative datasets established. Some CBM vendors offer users the ability to view local norms for their school or district. This has advantages for goal setting because goals are based on the achievement of large numbers of students from the same educational environment receiving similar programs of instruction. As such, local norms are likely to be representative of the performance levels expected of the student being assessed.
Several sources are available for readers interested in the procedures involved in developing and maintaining local norms (Canter, 1995; Shinn, 1988, 1989a, 1998).
On the other hand, caution is needed when using local norms for goal setting. First, the normative dataset should be sufficiently large; at minimum, it should include data from several hundred students at each grade level (more is usually better). Second, users need to have confidence that the local data were collected with fidelity; problems or deviations from standardization in test administration or scoring, or data collected outside the intended time period (e.g., data entered for the "fall" of the school year that some schools did not collect until December), all raise concerns about the validity of the local norms. Third, for schools whose students achieve at the extreme ends of the achievement continuum, either very low or very high, a local norm may not adequately reflect what an individual student should be expected to achieve. For example, the 50th percentile in a district with historically low achievement may be closer to the 25th percentile of achievement observed nationally, and the 25th percentile observed in the district may be closer to the 10th percentile observed nationally.

An illustration of the degree to which districts of varying SES levels differ in the normative performance of their students is shown in Table 7.6. Using the level of socioeconomic status reported by the districts, the performance across types of districts (Low = 59.5% economically disadvantaged, Middle = 33.8%, High = 6.3%) shows large discrepancies across and within grades. For example, a student reading at the 25th percentile of the district with the highest rate of economic disadvantage in the winter of fourth grade was found to be reading at 59 WCPM.
TABLE 7.6. Oral Reading Scores (Words Correct per Minute) for Grades 1 through 5 at the 25th Percentile across Three School Districts Whose Populations Represented High, Middle, and Low Socioeconomic Status

Grade and district     Fall    Winter    Spring
Grade 1
  High                    4       29        59
  Middle                  4       15        26
  Low                     1        2         6
Grade 2
  High                   31       69        79
  Middle                 31       55        67
  Low                    11       21        31
Grade 3
  High                   68       86       102
  Middle                 62       77        89
  Low                    36       38        44
Grade 4
  High                   80       98       114
  Middle                 83       96       104
  Low                    53       59        75
Grade 5
  High                  103      113       121
  Middle                 90      101       115
  Low                    77       83        88
By comparison, a student reading at the 25th percentile of the high-SES district in the winter of fourth grade was found to be reading at 98 WCPM, a discrepancy of 39 WCPM between 25th-percentile readers at the same grade reading grade-level material. Users must consider the relative achievement of the district in determining whether it makes sense to use local norms and, if so, what level of performance reflects a goal expectation.

Given that not all users will have national or local norms available, it is also possible to refer to normative data reported in the literature. For example, Jan Hasbrouck and Jerry Tindal have been compiling large-scale normative data on oral reading since 1992. Their most recent effort (Hasbrouck & Tindal, 2017) is a technical report available in the ERIC system and on the Behavioral Research and Teaching technical reports page at www.brtprojects.org/publications/technical-reports. The 2017 report includes normative data collected with over 2 million students in grades 1–6 nationwide with DIBELS and easyCBM measures. Their normative growth table, provided in Table 7.7, reports oral reading scores at the 10th, 25th, 50th, 75th, and 90th percentiles in the fall, winter, and spring of each grade level, providing practitioners with an excellent resource for goal setting in oral reading fluency.

A remarkable thing about the Hasbrouck and Tindal (2017) norms is how similar the scores are to the normative data they compiled in 2006 and 1992. Much as medicine has established growth charts for children, these oral reading norms provide growth charts for reading that have remained remarkably stable over the years. For example, by the end of first grade, students at the 50th percentile read roughly 50–60 words correctly per minute from a passage of text, and by the end of second grade, average performance is about 100 words per minute. Normative oral reading scores vary very little across oral reading measures and the years in which they were collected. Comparisons to published normative reports are only possible, however, if the measures selected for progress monitoring are identical (or very close) to the measures used in collecting the norms. This is often the case for oral reading, maze, math facts fluency, writing scoring metrics, and some forms of basic/early reading measures like LSF, because the content across these measures is usually very similar (provided the measures were administered the same way as in the normative sample). Other measures, such as contemporary measures of mathematics computation or concepts, can vary considerably across publishers and thus may not be comparable to norms collected with different measures.

In summary, the norm-referenced method for goal setting shares many similarities with the benchmark method. Both are relatively straightforward to understand and are based on commonly observed standards of achievement. However, there are also times when an ideal goal for progress monitoring requires more precision. For those situations, we can consider the following methods.
Rate of Improvement (ROI) Goal-Setting Method

The ROI method involves calculating a target or expected rate of improvement and using it to determine a progress monitoring goal. First, let's define what we mean by rate of improvement: it refers to the number of points gained per week on a measure. In the case of oral reading, ROI refers to the number of words correct per minute gained per week. On a mathematics measure, it may refer to the number of correct problems or digits per minute gained per week. Students making faster progress will have a higher rate of improvement than students making slower growth.
TABLE 7.7. Aggregated National Oral Reading Normative Data (Hasbrouck & Tindal, 2017)

Grade  %ile   Fall WCPM   Winter WCPM   Spring WCPM   Full-year ROI (32 weeks)   Fall-to-Winter ROI   Winter-to-Spring ROI
  1     90        --           97           116                --                      --                   1.19
  1     75        --           59            91                --                      --                   2.00
  1     50        --           29            60                --                      --                   1.94
  1     25        --           16            34                --                      --                   1.13
  1     10        --            9            18                --                      --                   0.56
  2     90       111          131           148               1.16                    1.25                  1.06
  2     75        84          109           124               1.25                    1.56                  0.94
  2     50        50           84           100               1.56                    2.13                  1.00
  2     25        36           59            72               1.13                    1.44                  0.81
  2     10        23           35            43               0.63                    0.75                  0.50
  3     90       134          161           166               1.00                    1.69                  0.31
  3     75       104          137           139               1.09                    2.06                  0.13
  3     50        83           97           112               0.91                    0.88                  0.94
  3     25        59           79            91               1.00                    1.25                  0.75
  3     10        40           62            63               0.72                    1.38                  0.06
  4     90       153          168           184               0.97                    0.94                  1.00
  4     75       125          143           160               1.09                    1.13                  1.06
  4     50        94          120           133               1.22                    1.63                  0.81
  4     25        75           95           105               0.94                    1.25                  0.63
  4     10        60           71            83               0.72                    0.69                  0.75
  5     90       179          183           195               0.50                    0.25                  0.75
  5     75       153          160           169               0.50                    0.44                  0.56
  5     50       121          133           146               0.78                    0.75                  0.81
  5     25        87          109           119               1.00                    1.38                  0.63
  5     10        64           84           102               1.19                    1.25                  1.13
  6     90       185          195           204               0.59                    0.63                  0.56
  6     75       159          166           173               0.44                    0.44                  0.44
  6     50       132          145           146               0.44                    0.81                  0.06
  6     25       112          116           122               0.31                    0.25                  0.38
  6     10        89           91            91               0.06                    0.13                  0.00
Note. Data originally reported by Hasbrouck and Tindal (2017), collected with over 2.3 million students, nationwide. Data reprinted with permission. WCPM = words correct per minute; ROI = rate of improvement (number of correct words gained per minute, per week). ROI based on 4 months (estimated 16 weeks) between data collection points (Fall = mid-September, Winter = mid-January, Spring = mid-May). Full-year ROI based on mid-September to mid-May (estimated 32 weeks).
The ROI method of goal setting requires access to ROI data demonstrated by students on a normative basis. Most normative data tables provided by CBM vendors report the ROI associated with scores between the beginning, middle, and end of the year, or rates of growth for the full year. If ROI is not provided in the table, estimates can be calculated by finding the difference between two scores and dividing by the number of weeks between them. For example, Hasbrouck and Tindal's (2017) oral reading norms can be used to calculate the average ROI from the beginning to the end of the year, noted under the "Full-year ROI" column in Table 7.7. Consider the second grade, for example. To calculate the
average full-year ROI, subtract the fall score at the 50th percentile (50) from the spring score at the 50th percentile (100), which results in a difference of 50. Then, divide the difference by the number of weeks between the two assessments. The exact number of weeks is not available, but we can estimate that fall data were collected in mid-September and spring data were collected in mid-May, approximately 32 weeks (8 months) apart. Fifty divided by 32 is 1.56. This means that, on average, typically achieving second graders improve in oral reading at a rate of 1.56 words correct per minute gained per week.

The same process is used to determine the average ROI demonstrated by students at lower levels of achievement, such as the 25th or 10th percentiles. Additionally, ROI is often faster in the fall than in the spring, and the same process can be used to calculate the average ROI observed between fall and winter, or between winter and spring (just be sure to divide by the right number of weeks; winter data are often collected in January). If you have more precise information on when testing occurred, this will improve the precision of the ROI estimates. Nevertheless, this method can be used to estimate observed student growth rates between certain times of year or across the full year.

Once you have estimates of average ROI, they can be used to calculate a target ROI for an individual student and, subsequently, a goal. Calculating a target ROI (i.e., a rate of growth you expect the student to demonstrate with the intervention) involves multiplying the normative growth rate by a value between 1.5 and 2. Why multiply it? Because a supplemental intervention is designed to boost students' performance and narrow achievement gaps with typically achieving peers; therefore, you should expect that the student will make faster growth than what students at their achievement level typically demonstrate. Multiplying by 1.5 means that the student's target ROI will be 50% greater than the ROI typically observed.

Let's say, for example, that a student reads about 37 words correctly per minute in the fall of third grade, which falls at approximately the 10th percentile in Table 7.7. The table also indicates that third graders around the 10th percentile demonstrate an average rate of improvement of about 0.72 words per minute per week across the full school year. The student will be receiving an intensive, evidence-based intervention 5 days per week for 30 minutes per day, and I expect that they will double the yearly growth rate typically demonstrated by students at the 10th percentile. Therefore, I multiply 0.72 (the observed ROI) by 2 (i.e., the expectation that the student will double the ROI typically observed), which results in 1.44. This is the target ROI for the student.

Next, I multiply the target ROI by the number of weeks I will monitor progress. Let's say that the intervention will begin in early October, and I would like to set a goal for the end of the year (even though I may evaluate responsiveness at many points within that time period). School in my district ends in late May, which is about 28 weeks (7 months) away. My target ROI of 1.44 × 28 weeks = 40. This is my gain score. It means that, given my accelerated target ROI and the number of weeks, I expect my student to gain 40 words per minute by the end of the school year. Now that I have the gain score, I am ready for the last step: adding the gain score to the student's baseline (initial) score.
The student's ORF score in early October is 37. Adding the gain score of 40 to 37 results in 77. This is my progress monitoring goal. In other words, accounting for where the student is starting and an intervention that I expect will at least double their rate of growth over that demonstrated by students at a similar achievement level, I have targeted a goal of 77 words correct per minute for the end of the year.

The value of the ROI method is its precision and the ability of the user to set empirically based, ambitious goals. It also allows the user to adjust the level of ambitiousness
by changing the value with which the observed ROI is multiplied. Disadvantages are that it can be complex to calculate and therefore prone to error, especially for new users. However, with familiarity it becomes easy to calculate. Users can also create spreadsheet calculators to assist in determining ROI-based goals.
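The same arithmetic can also be scripted rather than done by hand. The following minimal sketch (ours, in Python rather than a spreadsheet; the function name is hypothetical, and the example numbers mirror the worked example above) computes the target ROI, the gain score, and the resulting goal.

```python
def roi_goal(baseline_score, normative_roi, weeks_of_monitoring, ambitiousness=1.5):
    """Compute an ROI-based progress monitoring goal.

    normative_roi: typical weekly growth for students at a similar achievement
        level, taken from a normative table such as Table 7.7.
    ambitiousness: multiplier between roughly 1.5 and 2 reflecting the
        accelerated growth expected with supplemental intervention."""
    target_roi = normative_roi * ambitiousness
    gain_score = target_roi * weeks_of_monitoring
    goal = baseline_score + gain_score
    return target_roi, gain_score, goal

# Worked example from the text: fall of third grade, about the 10th percentile,
# normative full-year ROI of 0.72, intervention expected to double that rate,
# 28 weeks of monitoring remaining, baseline of 37 WCPM.
target_roi, gain_score, goal = roi_goal(37, 0.72, 28, ambitiousness=2.0)
print(f"Target ROI: {target_roi:.2f} per week; gain: {gain_score:.0f}; goal: {goal:.0f} WCPM")
```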
Intra-Individual Framework Goal-Setting Method

The intra-individual framework method of goal setting uses the student's current rate of improvement and targets a more ambitious one (NCII, 2013). This method is appropriate when the student already has ROI data from ongoing progress monitoring, the team plans to intensify intervention over what was previously implemented, and the team wishes to calculate a new goal based on the student's current rate. Here, the student's current ROI across the last 6–8 data points is calculated using the same methods as above: subtracting the first score from the last score and dividing by the number of weeks, or using another method such as calculating slope. That value is then multiplied by a value between 1.5 and 2 to reflect the greater rate of growth expected in the new, more intensive intervention. Then, as with the ROI method, multiply the new expected ROI by the number of remaining weeks in which intervention and progress monitoring will occur, which results in the gain score. Add the gain score to the most recent score from the available progress monitoring data; the result is the student's new progress monitoring goal.

The intra-individual method of goal setting, as the name implies, is highly individualized for the student, much like the nature of specialized intervention supports. However, it requires current progress monitoring data for the student, which may not be available, and it can be challenging to calculate. It may be particularly relevant in situations where a student is moving to a more intensive intervention or form of support, such as special education; where a student who received Tier 2 support is now moving to Tier 3 support; or where an already individualized form of support is being further intensified. Data-based individualization relies on data and decisions that are highly specific to the student; thus, the intra-individual differences approach to goal setting may be particularly relevant for students receiving special education services or other similarly intensive interventions.
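A minimal sketch of the intra-individual calculation follows (ours; it uses the simple first-to-last method described above rather than a fitted slope, and the scores are hypothetical).

```python
def intra_individual_goal(recent_scores, weeks_spanned, remaining_weeks, multiplier=1.5):
    """New goal based on the student's own current rate of improvement.

    recent_scores: the last 6-8 progress monitoring scores, oldest to newest.
    weeks_spanned: number of weeks between the first and last of those scores.
    multiplier: value between roughly 1.5 and 2 reflecting the more intensive
        intervention that is planned."""
    current_roi = (recent_scores[-1] - recent_scores[0]) / weeks_spanned
    new_target_roi = current_roi * multiplier
    gain_score = new_target_roi * remaining_weeks
    return recent_scores[-1] + gain_score

# Hypothetical: seven weekly scores (6 weeks from first to last), 20 weeks remaining.
scores = [28, 30, 29, 33, 34, 36, 37]
print(f"New goal: {intra_individual_goal(scores, 6, 20, multiplier=1.5):.0f}")
```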
Publisher- and Provider-Specific Goal-Setting Methods

Several vendors of progress monitoring measures include embedded goal-setting tools in their data systems; these can dramatically ease the progress monitoring set-up process for teachers and practitioners. For example, DIBELS Eighth Edition includes a tool called "Zones of Growth," which provides users with a way to set goals with the support of the data system. The user selects a targeted rate of growth (average, above average, or ambitious), and the data system automatically calculates a goal and target rate of improvement based on the student's baseline score. Other providers offer similar supports that enhance feasibility, reduce errors, and incorporate empirical data from the system (e.g., benchmark targets, normative data on scores and rates of improvement) as part of the goal-setting process.
Summary: Progress Monitoring Goal-Setting Methods

Regardless of which goal-setting method is chosen, a guiding principle in goal setting is "ambitious but attainable." Ambitious goals are associated with stronger student
outcomes than less ambitious ones (L. S. Fuchs, D. Fuchs, et al., 1985, 1989), but the goal must still be realistic. Consulting normative data tables provides a good basis for setting goals that are ambitious and reflect a greater rate of growth than what students typically demonstrate, while not being dramatically greater than what is observed. This is often an area where practitioners must balance research, student data, and their clinical judgment. Overall, we recommend selecting a goal-setting method with a good rationale for the student and the time frame, and evaluating the goal based on the individual student's needs, history, and the quality and intensity of the intervention that will be implemented. Student factors, such as attention, motivation, and behavior, and personnel and logistical factors, such as the training of the interventionists and how often intervention can be implemented, should also be considered in determining whether the progress monitoring goal is ambitious yet realistic.

The selected goal-setting method should also be one with which the users feel confident and competent. Some involve more steps and calculation than others, and there are few things more problematic than an incorrectly calculated goal. Until research says otherwise, the best recommendation is to use the method best suited to the student's situation and the user's ability to accurately determine an appropriately ambitious goal. Additionally, goals can be adjusted later; Step 5 in the process will discuss situations in which goals should be changed.
STEP 3: DETERMINING HOW OFTEN TO MONITOR PROGRESS

The next step in the progress monitoring process is determining when and how frequently to administer measures. The very purpose of progress monitoring is to provide timely feedback for informing instruction; therefore, measurement should be frequent without unnecessarily sacrificing time for instruction. In general, recommendations on frequency range from as often as two times per week to as infrequently as once per month.

The frequency of measurement has implications for decision making. Because several data points are usually needed to have confidence that the student's slope (i.e., ROI) is an accurate indication of their instructional response and not significantly affected by measurement variability, assessment should be frequent enough to ensure that a sufficient number of data points are available when decisions regarding instructional responsiveness and instructional changes need to be made. A good rule of thumb is to consider weekly measurement. If teachers have supports or the ability to administer measures more frequently, then twice per week can be considered (more often than that is not necessary and would take time away from instruction). Conversely, if fewer supports are available or schedules do not allow for frequent measurement, then once every 2 weeks could be considered. Remember, however, that a less frequent monitoring schedule will require longer periods of time between instructional decisions.
STEP 4: GRAPHING DATA

Graphic depiction of data is an essential feature of progress monitoring. Without graphic displays of the data, decisions about a student's progress are extremely difficult. Most people are not well equipped to look at a list of numbers and easily determine whether scores are increasing or decreasing, how quickly they are changing, how variable they are, and whether scores are on track to meet a goal. For that, we need to see the information
depicted visually. Graphed data help us interpret the data, make decisions, and communicate those decisions to others.

For most purposes, a simple line graph is used for progress monitoring (see Figure 7.2). Time is indicated on the x axis (i.e., the horizontal axis), and performance is scaled on the y axis (i.e., the vertical axis). A good way to keep the x and y axes straight is to think "y to the sky." As discussed earlier, setting and plotting progress toward year-end goals offers a straightforward, repeatable approach even if users do not plan for intervention to last the entire year, because decisions based on the data can be made at any point beforehand. The graph contains the student's baseline performance, the goal, the goal line (the line of progress the student needs to establish to reach their goal), the data collected on each progress monitoring occasion, and, whenever possible, the student's trendline. A trendline can easily be added in Microsoft Excel by clicking on the plotted scores and choosing "Add Trendline" (available from the right-click menu). The student's graph should also indicate when intervention changes take place. Decision making is further improved if the student's slope within each period of progress monitoring is summarized separately from the other periods, which allows users to evaluate the rate of progress under particular instructional conditions. An example is provided in Figure 7.3.

The best approach for graphing data is to use an online graphing tool attached to a progress monitoring system. All major vendors of progress monitoring tools (Acadience, AIMsweb, easyCBM, DIBELS, and FastBridge) offer tools that automatically graph progress monitoring data and provide other features such as trendlines, the ability to mark intervention or goal changes, and ways to print and share progress monitoring charts with a student's parents or other teachers. ChartDog, a free graphing service available at the Intervention Central website (www.interventioncentral.org), provides graphing capability but does not offer data monitoring and storage capacity. Finally, any of
FIGURE 7.3. Progress monitoring graph with phase change (a phase-change line separates the two intervention phases: word sort, and word sort plus incremental rehearsal).
the commonly available spreadsheet programs can be used, such as Microsoft Excel or Google Sheets, and several preformatted templates specifically designed for graphing progress monitoring data are also available at the Intervention Central website (www.interventioncentral.org).
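For readers who build their own graphs in a general-purpose tool, the following minimal sketch (ours, using Python and matplotlib; all scores and dates are hypothetical) produces the basic elements shown in Figure 7.2: the weekly scores, the goal (aim) line from baseline to goal, and a fitted trendline.

```python
import numpy as np
import matplotlib.pyplot as plt

weeks = np.arange(0, 13)                                   # 12 weeks of weekly monitoring
scores = np.array([18, 20, 19, 23, 22, 26, 25, 28, 27, 31, 30, 33, 34])  # hypothetical WCPM
baseline, goal, total_weeks = scores[0], 45, 28            # year-end goal of 45 WCPM at week 28

fig, ax = plt.subplots()
ax.plot(weeks, scores, marker="o", label="Weekly scores")
# Goal (aim) line: a straight line from the baseline score to the year-end goal.
ax.plot([0, total_weeks], [baseline, goal], linestyle="--", label="Goal (aim) line")
# Trendline: ordinary least-squares line fitted through the observed scores.
slope, intercept = np.polyfit(weeks, scores, 1)
ax.plot(weeks, intercept + slope * weeks, linestyle=":", label=f"Trend ({slope:.2f}/week)")
ax.set_xlabel("Week")                                      # time on the x axis
ax.set_ylabel("Correct per minute")                        # performance on the y axis
ax.legend()
plt.show()
```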
STEP 5: MAKING DATA-BASED INSTRUCTIONAL DECISIONS

When used correctly, progress monitoring is associated with improved student growth and outcomes (Stecker et al., 2005; Filderman et al., 2018; Jung et al., 2018) because teachers use the data to actively improve instruction when needed. The critical ingredient, and what makes progress monitoring beneficial, is teachers' use of a structured set of decision rules to evaluate students' data and determine when to make changes (Stecker et al., 2005). In general, there are four types of decisions that may be prompted by a student's progress monitoring data: (1) continue the current intervention when the student's progress is meeting expectations; (2) adjust, change, or otherwise intensify instruction when progress is not sufficient; (3) raise the goal when it has been met and a new one is appropriate; or (4) reduce the intensity of, phase out, or end an intervention when it is no longer needed. To make these decisions, two types of decision rule frameworks have been studied and recommended for progress monitoring decision making (Ardoin, Christ, et al., 2013; NCII, 2012): point rules and slope rules.
Point Rules

The point rule approach bases instructional decisions on the relation of the most recent consecutive data points to the goal line (Mirkin et al., 1982; Stecker & Lembke, 2011). To reiterate, the goal line (also referred to as the aim line) is the line drawn from the initial baseline data point to the goal, which reflects the ideal trajectory for reaching the goal. When using a point rule approach, the student's most recent 3 or 4 consecutive data points are compared to the goal line. If you are monitoring progress once per week, use the 3 most recent data points (i.e., the three most recent weeks); if you are monitoring progress twice per week, use the 4 most recent data points. If all 3 (or 4) data points fall below the student's goal line, an instructional change should be considered. In contrast, if 1 or more of the data points are on or above the goal line, it suggests the current intervention program has been successful so far. The student's graph would be examined each week, and each occasion would refer to the most recently collected 3 to 4 data points. The point rule approach assumes that there will be variability in scores from occasion to occasion, but several consecutive data points below the expected trajectory indicate that the student's progress is below where it needs to be.
Slope Rules

The slope rule approach to decision making involves comparing the "steepness" of the student's trendline (i.e., slope) to that of the goal line. To reiterate, the trendline is a straight line fitted through the student's data points that indicates the student's overall trajectory. A trendline that is less steep than the goal line (see Figure 7.4) suggests an instructional change is needed, because the student's rate of growth is such that they will fall short of their goal. On the other hand, when the student's trendline is as steep as or steeper than the goal line, it is an indication that the current intervention is working as planned.
FIGURE 7.4. Progress monitoring data indicating the student's trend is below the goal line, suggesting the need to adjust instruction.
A trendline that is steeper than the goal line suggests the need to raise the goal, or to consider reducing the intensity of the intervention if the goal has been met (more on this below).

Considering both types of decision rules, point rules are the simpler approach: they involve looking at the 3 or 4 most recent data points and determining whether all are below the goal line. However, several studies have indicated that point rules may be problematic for accurately determining when students are adequately or inadequately responsive to an intervention, and thus may suppress decisions to adjust instruction when such decisions are warranted (Hintze et al., 2018; Parker et al., 2018; Van Norman & Christ, 2016a, 2016b; Van Norman & Parker, 2016). Examining the student's slope tends to result in better decision making; Jenkins and Terjeson (2011) found that slope tended to generate greater responsiveness of teachers to progress monitoring data compared to point rules, and the inclusion of student trendlines on graphs has been found to aid decision making (Van Norman et al., 2013). Certainly, both can be considered when evaluating a student's graph (Van Norman & Christ, 2016b).

The best recommendation we can offer for progress monitoring decision making is to consider both point and slope rules; it does not have to be a mutually exclusive decision. As illustrated in Figure 7.4, educators could consider both the student's slope of improvement relative to their goal line and whether their most recent data points are above or below the goal line. Using both frameworks may be especially helpful when there is ambiguity about "responsiveness." For example, a student's slope may be somewhat less steep than their goal line, but the practitioner may be uncertain whether the discrepancy is large enough to warrant an instructional change. In this case, they may check the most recent 3 or 4 data points and apply the point rule: if all data points are below the goal line, this provides additional evidence supporting the need for an instructional change. On the other hand, 1 or more data points on or above the goal line would argue that the current intervention might continue for a while longer and the data could be reevaluated in the near future. The number of data points collected should also be part of the decision process, which we discuss next.
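A minimal sketch of the two decision rules follows (ours; the scores, goal, and data collection schedule are hypothetical, and the trend is estimated with a simple least-squares fit).

```python
import numpy as np

def point_rule(scores, aim_values, n_points=3):
    """True if the most recent n consecutive scores all fall below the aim line."""
    recent, expected = scores[-n_points:], aim_values[-n_points:]
    return all(s < e for s, e in zip(recent, expected))

def slope_rule(weeks, scores, goal_slope):
    """True if the student's fitted trendline is flatter than the goal line."""
    slope, _ = np.polyfit(weeks, scores, 1)
    return slope < goal_slope

# Hypothetical data: 10 weekly scores, baseline of 20, year-end goal of 60 at week 28.
weeks = np.arange(1, 11)
scores = np.array([20, 22, 21, 24, 23, 25, 24, 26, 25, 27])
baseline, goal, total_weeks = 20, 60, 28
goal_slope = (goal - baseline) / total_weeks
aim_values = baseline + goal_slope * weeks   # value of the aim line at each monitored week

needs_change = point_rule(list(scores), list(aim_values)) and slope_rule(weeks, scores, goal_slope)
print("Consider an instructional change:", needs_change)
```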
How Many Data Points Are Needed before Making a Decision?

One aspect that has been the subject of debate is the number of progress monitoring data points that should be gathered before making a decision. This issue pertains directly to the variation in students' scores over time. Variability in scores is normal in any situation, but some progress monitoring measures are associated with more variability than others. This is especially true of oral reading measures, because it is difficult to develop a set of meaningful passages that are equivalent in difficulty. Even if the probes in the set are as equated as possible, variability can be caused by individual student factors, such as a student's greater familiarity with the topic of a passage (which can inflate scores) or the student's mood or distractibility on a given day (which may depress scores).

Variability in scores influences how "stable" a student's trend estimate is; in other words, how confident a user can be that the slope reflected on the student's graph is a true indication of their trend and not unduly influenced by score variability. One factor that increases confidence that the student's graphed trend is a good indication of their true trajectory is obtaining enough data points. An example is provided in Figure 7.5. Notice in the first three panels how widely the student's trendline moves with each new data point, going from increasing to decreasing with the addition of a single data point. This is an instance of a very unstable trendline, and it is due to having only a few data points; there is not enough information to reveal the overall trajectory, so the trendline is heavily influenced by a single data point. As more data points are collected, however, the trendline begins to stabilize. It will still be moved up or down by subsequent data points, but not as drastically as when there were only a few, because the trend estimate is based on more information. The situation is similar to watching coverage of an approaching hurricane. Soon after the storm is detected, meteorologists may develop a general idea of the path it will take. In the coming hours and days, more data points are obtained on the storm's movement, which allows meteorologists to get a better sense of the storm's path and future location. More data allow for more confidence in estimates of where it will make landfall.

The majority of research on the recommended number of data points for making decisions has been conducted with oral reading measures in the middle elementary grades. Recommendations have ranged from collecting 6 to 10 data points before making instructional decisions (Christ & Silberglitt, 2007; Shinn et al., 1989) to, depending on factors such as the frequency of data collection and the stability of the probe set, 13 data points or as many as 20 (Christ et al., 2013). More data points are needed when progress monitoring is less frequent, probe sets are less well controlled for difficulty, or conditions associated with the progress monitoring assessment sessions are less stable (Ardoin & Christ, 2009). On the other hand, considering progress monitoring data in terms of how often such information prompts teachers to make instructional changes, Jenkins and Terjeson (2011) observed that assessment as infrequent as once every 8 weeks may be sufficient for generating the data needed for making instructional decisions.
FIGURE 7.5. Illustration of trendline (dashed line) instability with few data points.

When perspectives range so considerably, it can be difficult for practitioners to determine what to do. Our recommendation regarding the number of data points needed before making instructional decisions is to consider the student and the context. Multiple variables and pieces of information should be used. We understand that readers were probably hoping for a recommendation to "collect X number of data points, then make a decision." Unfortunately, decisions are more complex than that, and there are other variables to consider beyond the quality of the probe set and the number of assessment points per week. We offer the following set of recommendations:

1. Consider the duration of intervention, not just the number of data points. Just as important as the number of data points collected is whether enough intervention sessions have occurred to expect appreciable change on the progress monitoring measure. Practitioners must consider the extent of the student's difficulties, the skills involved in the progress monitoring measure, and how quickly the intervention is expected to affect those target skills. Consider also how much content the intervention has covered so far. Take reading, for instance—improving students' oral reading, especially for students with significant reading difficulties, is not an easy task. Extensive word- and text-level skill improvements are needed for those gains to transfer to generalized improvement in reading connected text with better accuracy and fluency. Similar questions are relevant in other areas, such as mathematics and written expression. In other situations, effects may be evident sooner. In general, scholars have suggested a minimum of 6 weeks of intervention before evaluating students' responsiveness (Gersten, Compton, et al., 2009), which would mean 6 data points if data are collected weekly and 12 if collected twice per week. Some students will need more time, and that time frame should be extended if there are questions about the intensity of the intervention or its implementation.

2. Consider other available data, in addition to progress monitoring data, for instructional decisions. Studies that estimate the number of data points needed before making instructional decisions often imply that the progress monitoring data are the only source of information. This should not be the case. Other data can be considered, such as recent tests, students' work on materials used in the intervention, error analysis of recent progress monitoring probes, any behavior data being collected at the same time (e.g., behavior report cards, daily behavior reports, behavior observations), student absences, and breaks in the intervention or lapses in intervention fidelity. Although progress monitoring data may be the primary source of information, other information can (and should) be considered, especially when there is ambiguity about whether enough data points have been collected or uncertainty about whether the data indicate a change is needed. It is important to consider objective data sources (i.e., quantifiable data collected on student academic performance or behavior, not someone's subjective opinion), such as the examples provided above. Sometimes, data from these other sources contribute significantly to clarifying decisions about when to continue an intervention or make a change.

3. Ignore extreme outlying data points. Students will occasionally have scores that are dramatically lower or higher than previous data points. Extreme data points are called outliers. An outlier will pull the student's trendline up or down. Users new to progress monitoring may believe an instructional change is needed given an extremely low data point; however, it is very likely that the next score will be closer to the previous scores. The occurrence of an outlier should prompt a decision to collect a little more data, not an instructional change.
However, several extremely low or high data points should prompt a closer look at measure administration and scoring fidelity, student factors, or other variables (a simple outlier check is sketched after this list).

4. When in doubt, collect additional data points. If users are unsure whether the student's current trend indicates that progress is sufficient or that a change is needed because the data are highly variable (i.e., a lot of "bounce" exists in the data points across occasions), collect an additional 1–2 data points. More data points usually result in more confidence that the student's slope (trendline) is a good estimate of their trajectory.

5. Consider the quality of the data source(s) and administration fidelity. Practitioners can have more confidence in the data when high-quality progress monitoring measures are used, measures are consistently administered and scored according to standardized procedures, interventions are implemented regularly (i.e., few student absences or missed sessions), and interventions are implemented with a high level of fidelity.
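The outlier check mentioned in recommendation 3 can be as simple as asking whether a new score sits unusually far from the student's recent scores. The sketch below uses a median absolute deviation rule; the 3-MAD threshold and the scores are illustrative assumptions, not an established cutoff from the progress monitoring literature.

```python
import numpy as np

def looks_like_outlier(new_score, recent_scores, n_mads=3.0):
    """Flag a score that falls unusually far from the median of the student's
    recent scores (median absolute deviation rule; 3 MADs is an arbitrary,
    illustrative threshold)."""
    recent = np.asarray(recent_scores, dtype=float)
    median = np.median(recent)
    mad = np.median(np.abs(recent - median))
    if mad == 0:                      # all recent scores identical
        return new_score != median
    return abs(new_score - median) > n_mads * mad

recent = [44, 46, 43, 47, 45, 48]
print(looks_like_outlier(21, recent))   # True  -> collect more data first
print(looks_like_outlier(42, recent))   # False -> ordinary variability
```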
When Progress Monitoring Data Indicate the Need for an Instructional Adjustment

Improving the academic skills of struggling learners is never easy. Even with the most powerful types of interventions, practitioners will frequently find themselves needing to make an instructional adjustment or some sort of change to improve students' progress. This decision may be based on the slope rules, point rules, or both. The progress monitoring data collected to that point have indicated that the student is not progressing at the rate expected to reach their goal, and a change is warranted.
Information That Informs Changes and Adjustments

Once it is determined that an instructional adjustment is needed, follow-up assessment can help inform what the changes should be. This is referred to as diagnostic assessment in the DBI framework, and in many cases additional assessment may be warranted. However, additional assessment may not always be necessary. The key to this stage is determining why the intervention was insufficient and what needs to change moving forward. In some cases, it will be very clear why the student has not made expected growth. In other cases, additional data may help determine which skill areas the intervention has not sufficiently addressed, as well as which sources of information will be most useful in guiding adjustments. The following are activities to consider at this stage. Some of them involve collecting additional data for diagnostic purposes, but others involve looking more generally at the intervention, the instructional environment, and student behavior.

1. Review data on the student's behavior, engagement, and motivation. Previous studies have found that attention difficulties and challenging behavior are the most common characteristics of students who are inadequately responsive to academic interventions (Al Otaiba & Fuchs, 2002; Nelson et al., 2003). Therefore, an important first step is determining whether the student's behavior is playing a role in their lack of progress. If so, adding strategies (or improving existing ones) to support students' engagement, increase their motivation, and reduce challenging behavior would be warranted.

2. Consider intervention fidelity. Before changing instruction, consider whether the intervention was implemented as designed, with quality, and with the anticipated frequency. Reviewing intervention materials or observing implementation can indicate whether a lack of fidelity may be part of the reason for inadequate student response. Staff retraining or other supports to improve fidelity may be needed.
Sometimes consistency is an issue; the intervention may not be occurring at the planned frequency due to student absences or refusal, or school staff normally responsible for implementing the intervention may be repeatedly asked to perform other tasks during the allotted time.

3. Consider whether the intervention environment is conducive to learning. Often, space limitations in schools require interventions to be implemented in non-classroom settings such as portions of the hallway, the library, the cafeteria, or locations in which other interventions are occurring. Distractions in these environments should be considered. If an intervention is conducted in small groups or with peer tutors, consider whether social dynamics are interfering with instruction or student concentration.

4. Examine students' patterns of responses and errors on recent progress monitoring probes. Students' recent progress monitoring probes can be a source of diagnostic information in determining how to adjust instruction. For example, practitioners may examine students' recent oral reading probes to see what types of words they consistently miss. Mathematics probes can be reviewed to examine the kinds of problems that prove difficult and the types of errors students typically make (e.g., determining whether computation errors are due to inaccurate addition or subtraction, or to not attending to the operation sign). Readers interested in more detailed examples and exercises for analyzing common sources of errors in mathematics should see the texts by Howell and Nolet (1999), Salvia and Hughes (1990), and Rosenfield and Gravois (1995). Writing is another area in which analysis of students' recent work is informative. Reviewing recent writing probes can indicate the extent to which writing quantity or quality is impacted by transcription issues or by other aspects of composition students have not yet effectively acquired.

5. Use inventories and informal assessments. Brief and informal assessments can be used to provide information on students' strengths and weaknesses in specific skill areas, which may be helpful for determining instructional adjustments. In reading, several phonics and word reading inventories are available that can help indicate the letter sounds, letter combinations, word types, and spelling patterns in which students are more or less accurate, thereby providing information on what subsequent instruction should target. Examples include the Core Phonics Survey (Consortium on Reading Excellence, 2008) and the Informal Decoding Inventory (McKenna et al., 2017). In mathematics, Ketterlin-Geller and Yovanoff (2009) described techniques for conducting diagnostic assessments to support instructional decisions. The NCII provides other resources for diagnostic assessment in reading, mathematics, and behavior (https://intensiveintervention.org/intensive-intervention/diagnostic-data/example-diagnostic-tools). Several mathematics worksheet generators are available online in which the user can specify the types of problems, and even the digits included in the problems, based on the diagnostic information they hope to obtain. In spelling, the student might be asked to spell a set of words with a specific spelling pattern to determine where errors are occurring. In writing, students might be asked to respond to a specific type of writing prompt or to demonstrate part of the writing process. They might also be asked to "think aloud" as they plan, draft, or revise their writing, which can provide insight into their thought processes and writing decisions.
A think-aloud process might also be informative for observing a student solve mathematics word problems or procedural computations. Users can also create informal diagnostic assessments themselves, as long as the information is being used for instructional planning purposes and not placement decisions. Flashcards or listing letters and letter combinations in random order can be used to determine what letter names or letter
sounds are unknown for a student. The same can be done for determining what number combinations are unknown for a student and should be targeted.

6. Ask the interventionist and the student. Often, the individual responsible for implementing the intervention recognizes portions of the program or activities that the student does not understand, that create confusion, or that are otherwise unhelpful. The interventionist may also have insight into why progress is insufficient. Additionally, consider talking with the student about what they like about the intervention, what they do not like, what aspects are confusing, or whether certain aspects can be changed to support their motivation.

To reiterate, gathering additional diagnostic information is often helpful but may not be necessary for determining how intervention should be adjusted, given the numerous sources of information that may already be available. This does not mean that practitioners should "go with their gut" in making decisions; there should always be data to back up and confirm decisions. Maintaining this standard helps ensure that decisions are data-driven, rather than based on subjective opinions or inference. As we have advocated across the text, strive for parsimony and use additional testing only if the data are necessary and will directly inform intervention changes.
What Might Instructional Adjustments Look Like?

The reader may notice that across this chapter we have tended to refer to instructional adjustments rather than changes. In general, it is best to consider the least invasive adjustments to the intervention that will meet the student's needs. Consider a knee injury: knee replacement surgery should not be prescribed when a simpler, outpatient procedure would be just as beneficial; similarly, outpatient surgery would not be appropriate when a less costly and less invasive therapy would be just as effective. Interventions use a lot of resources in terms of staff and time. Making changes that unnecessarily require additional time and human resources, or intensifying intervention in other expensive but superfluous ways, burdens a school system and subtracts resources from other important uses. The extent and intensity of the instructional adjustments should match the intensity of the student's need.

Completely changing an intervention is a very intensive and resource-demanding change because it may involve acquisition of a new program and materials, retraining staff (thus introducing the possibility of poor fidelity of implementation until staff are fluent with the intervention), and reorienting students to new procedures. Often, a complete change to a new intervention is not necessary when changes to how the intervention is implemented, or the addition of a supplemental component or practice strategy, may be just as effective (if not more so) and less expensive. To be clear, intensifications and program changes will be necessary in some situations and should be made, but only when the data indicate that (1) they are needed and (2) less intensive adjustments were insufficient. Think in terms of changes being appropriately intensive.

Although instructional adjustments should be individualized and data-driven, there are options to consider that may be relevant for a given situation and that may help practitioners consider instructional changes that are appropriately intensive. What follows is a nonexhaustive list of possible areas of adjustment:
• Make instruction more systematic and explicit. The most effective forms of instruction for students with academic difficulties are those that teach skills explicitly
and directly, with extensive modeling and scaffolds for correct responding, and follow a strategically determined plan. Sometimes an intervention may need to be bolstered in one or more of these areas.
• Add more practice opportunities and increase student opportunities to respond. Explicit instruction is important, but it must provide students with extensive opportunities to practice targeted skills with immediate feedback from the teacher. There are times when some interventions, or interventionists, provide too much teacher-led instruction and do not provide students with enough opportunities to practice. Dedicating more time for practice each session, adding systematic practice strategies (e.g., cover–copy–compare, PALS, incremental rehearsal), or including games that involve the target skills should be common considerations for adjustment.

• Improve feedback. This goes hand-in-hand with explicit instruction and opportunities to respond, but it is remarkable how often interventionists fail to provide sufficient feedback for student responses. Feedback is essential for learning, and it should include corrective feedback for incorrect responses and affirmative feedback when answers are correct. Feedback should be more descriptive, frequent, and immediate, and error correction ought to be more involved when skills are new. Working with interventionists to improve the quality and consistency of their feedback may be an important intervention adjustment.

• Improve the instructional environment. If needed, work to reduce distractions or change locations. Improving the environment may also mean reducing any peer conflicts that disrupt instruction by restructuring groups, or implementing behavior support strategies such as group contingencies or point systems (see Chapter 5).

• Add or improve individual behavior management. If challenging behavior or lack of academic engagement is a reason for inadequate progress, consider adding a behavior management system or improving an existing one. Options for behavior management strategies are discussed in Chapter 5.

• Increase systematic review. Students with academic difficulties require frequent review of previously taught skills to maintain what they have learned. Review should be frequent, systematic, and cumulative, such that more recently taught skills are continuously added to review content and previously taught skills are systematically rotated through. Procedures for reteaching (and knowing when to reteach) should be in place. Lack of maintenance of previously taught skills may also be an indication that new content is being introduced before previous content is sufficiently acquired. For students who are not retaining previously targeted skills and knowledge, increasing the frequency of skill review, changing how skills are reviewed, or slowing down the introduction of new content should be considered.

• Program for generalization. If part of a student's difficulty is an inability to demonstrate a skill outside of the material used for intervention, work to build in more supported opportunities to transfer targeted skills to new content or situations. This may mean adjusting the material provided for practice so it is more similar to how the skill will be demonstrated in day-to-day situations. Supports, cues, and prompts can be used to help students apply newly learned skills in new content or situations (see Chapter 5), and these supports can be faded when they are no longer necessary.
• Increase dosage. An intervention may need more time per session or days per week. This option uses more resources than others and thus should be considered carefully. Dosage might be increased in situations where the intervention appears to be effective, but the student just needs more of it.

• Reduce group size (but consider this carefully). Intervention research in reading and mathematics indicates that, overall, reducing group size or implementing intervention one-on-one has inconsistent effects on student outcomes (Clarke, Doabler, Kosty, et al., 2017; Clarke, Doabler, Tutura, et al., 2020; Vaughn et al., 2010; Wanzek et al., 2013). On the other hand, Wanzek and colleagues (2018) observed a slight benefit for one-on-one instruction versus small groups for intensive early reading interventions in kindergarten through grade 3. A meta-analysis by Jitendra et al. (2020) on the effects of mathematics interventions found somewhat larger effects for interventions implemented with groups of two to three students compared to groups of four or more. Although reducing group size does not appear to result in particularly strong effects, there are situations in which it may be beneficial on an individual student basis. These situations might include the need to reduce a student's academically related anxiety that is magnified when peers are present, to increase practice opportunities and provide more focused feedback for a student, or to eliminate peer conflicts. Reducing group sizes or implementing one-on-one instruction is resource-intensive and therefore should only be considered when student circumstances are unique.

We emphasize that the options listed above are just that—options. There are also many others. Intervention decisions will vary considerably from student to student. The important thing to remember is that smaller, parsimonious adjustments are easier to make than larger ones, and larger changes may not be any more effective. Further intensification can occur if subsequent progress monitoring data indicate that additional changes are needed.
Noting Intervention Changes on the Student's Graph

Any changes to a student's intervention, regardless of how small or large those changes may be, should be noted on their progress monitoring graph. As shown in Figure 7.3, these changes simply involve adding a vertical line to the graph on the date on which the change took place, with text added on the chart or in a table below. Changes added to the graph should include anything that may have an effect on student progress and do not need to be limited to intervention changes. For example, the date on which a student starts or changes a medication could be added to the graph, or the date when a new schoolwide intervention system was implemented could be noted. Ideally, intervention changes added to the graph should be accompanied by a new trendline for the student for that period of time, as shown in Figure 7.3. This allows the team to evaluate whether the change resulted in improved student progress or changed the relationship of the student's trendline to their goal line.

After making an intervention adjustment, progress monitoring proceeds, and decisions are considered on a regular basis as before. New decisions are considered when a sufficient number of intervention sessions have occurred and a sufficient number of data points have been collected to be confident that the student's trendline can be trusted. Additional adjustments or intensification of instruction are considered when progress fails to meet expectations, using the same process described in the steps above.
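For teams that build their own graphs, a minimal sketch of how a phase-change line and separate per-phase trendlines might be drawn is shown below. It assumes Python with numpy and matplotlib; the scores, change date, and annotation text are hypothetical, and this is only one way such a graph could be produced.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical weekly scores; the instructional adjustment begins at week 8
scores = np.array([35, 37, 36, 38, 37, 39, 38, 40, 43, 45, 47, 48, 50, 52])
weeks = np.arange(len(scores))
change_week = 8

fig, ax = plt.subplots()
ax.plot(weeks, scores, "o-", label="Weekly score")

# Fit and draw a separate trendline for each phase so slopes can be compared
for label, mask in (("Phase 1", weeks < change_week), ("Phase 2", weeks >= change_week)):
    coef = np.polyfit(weeks[mask], scores[mask], 1)
    ax.plot(weeks[mask], np.polyval(coef, weeks[mask]), "--",
            label=f"{label} trend ({coef[0]:+.1f}/week)")

# Vertical phase-change line with a brief note, as described for Figure 7.3
ax.axvline(change_week - 0.5, color="gray")
ax.annotate("Added practice component", xy=(change_week - 0.5, scores.max()),
            rotation=90, ha="right", va="top")

ax.set(xlabel="Week", ylabel="Words read correctly per minute")
ax.legend()
plt.show()
```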
A different set of decisions is considered when student progress exceeds expectations, which we discuss next.
Decisions When Student Progress Exceeds Expected Growth

There are some situations in which an effective intervention results not only in improvement, but also in performance that exceeds the student's targeted trajectory or their progress monitoring goal. When students are on a path to meet their goal, this indicates that a program is effective and should continue. When students are on a path to exceed their goal (i.e., they are on track to score above the goal by the end of the progress monitoring period), or have already met it, a different set of decisions is required. These decisions include continuing instruction, raising a goal, or decreasing the intensity of an intervention.

Some resources on progress monitoring, including previous editions of this book, have recommended raising a student's goal when their trendline exceeds the goal line. I (Clemens) believe this is premature. These recommendations have not considered whether the student has actually achieved their goal at that point in time, which to me seems like the more important aspect when considering whether to raise a goal. Just because a student is on track to exceed their goal does not guarantee they will. If an ambitious goal was set at the outset, and meeting that goal represents meaningful, important improvement in achievement, then a better approach is to determine whether the student has actually accomplished their goal. See graph 1 in Figure 7.6, for instance. The student's goal is 75. After the first 9 data points, the student's trendline is well above the goal line, but the maximum score they have achieved is 52. Is this really the time to raise the goal? Now consider graph 2 in Figure 7.6. In this case, after 13 data points, the student has exceeded their goal of 75 on at least 2 data points (77 and 79). The student has already demonstrated goal-level achievement, which suggests it is a situation in which a goal could be raised.

Raising a goal assumes there is a reason to raise it. For example, if the goal was set at the 25th percentile (the lower end of the average range), there may be a rationale to raise the goal to the 40th percentile so it is more solidly within the average range and thus eliminates an achievement gap with typically achieving peers (in which case, intervention might be faded). On the other hand, if the goal was set at the 50th percentile, there is less justification for continuing to use resources for intervention.

There are two situations in which I would recommend raising (or changing) a goal before the student has actually met it. The first is when the team determines that they made an error in calculating or setting the goal in the first place. In this case, a new goal should be calculated that is appropriately ambitious for the student, and this information should be added to the student's graph with a phase change line, as described earlier. The second situation is when the team calculated the initial goal correctly but later determines that the original goal was not ambitious enough. For example, a goal set at the 20th percentile for a student starting at the 15th percentile may have seemed right at the time (requiring an ROI of 1.5 times typical growth), but after a few weeks of intervention and seeing the student's progress, the team decides that the goal set initially was not ambitious enough (requiring an ROI of 2 times typical growth).
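As a simple worked example of the goal arithmetic implied here, a goal based on a multiple of the typical rate of improvement (ROI) might be computed as sketched below; every number is hypothetical and would in practice come from the measure's norms and the team's judgment.

```python
# Illustrative goal calculation; all values are hypothetical.
baseline_score   = 42    # median of the student's baseline data points
typical_roi      = 1.0   # typical weekly gain on this measure for same-grade peers
ambitious_factor = 1.5   # multiplier that makes the goal ambitious (1.5x typical growth)
weeks_remaining  = 20    # weeks left in the progress monitoring period

goal = baseline_score + ambitious_factor * typical_roi * weeks_remaining
print(goal)  # 72.0 -> the goal line runs from 42 at week 0 to 72 at week 20

# If the team later judges this insufficiently ambitious, recomputing with a
# 2.0 multiplier gives 42 + 2.0 * 1.0 * 20 = 82, and the revision is noted on
# the graph with a phase change line, as described earlier.
```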
In summary, if an ambitious goal was set for the student, and that goal represented meaningfully important change and achievement, I recommend waiting until the student actually meets that goal on at least two occasions (remember, performance will fluctuate across probes) before raising a goal. Raising a goal simply based on a trendline that
presently exceeds a goal line seems premature, given that many students may demonstrate more rapid growth early in an intervention compared to later.

FIGURE 7.6. Progress monitoring data indicating (1) the student's trendline is above the goal line but the goal has not yet been achieved, and (2) the student's trendline is above the goal line and the goal has been achieved.
Progress Exceeding Expectations: When and Why to Reduce the Intensity of Intervention

The most satisfying decisions of all are those that involve reducing the intensity of the intervention, or eliminating the intervention when it is no longer needed. However, the decision depends on several factors, the most important being that the goal the student has accomplished represents general outcomes (not specific subskills), the student has meaningfully narrowed (or closed) the achievement gap with their typically achieving peers, or other information indicates the intervention is no longer needed. Simply moving on to a new skill in need of remediation would not be a reason to decrease intervention intensity. Rather, improvement and meeting a goal in the overall general outcome domain
are the key considerations for determining whether it makes sense to reduce intervention intensity. If so, the team might discuss whether intervention is still needed at the same intensity, or if intervention is still needed at all. This should be carefully considered with other available data and input from multiple members of the team. Reducing intervention intensity might mean reducing the time and frequency of the intervention, or adjusting to focus more on review, maintenance, and generalization of skills learned rather than targeting new content. Reducing the intensity of intervention can also be accompanied by a reduction in the intensity of progress monitoring. However, practitioners should be attentive to student data in the weeks following a reduction in intervention intensity to ensure that skills are being maintained. For this reason, the team might wish to maintain the same frequency of progress monitoring for a period of time until they are confident that the student’s skills are stable.
SUMMARY AND CONCLUSIONS: ONGOING PROGRESS MONITORING AND DATA-BASED DECISION MAKING

Monitoring progress during intervention represents perhaps the most important assessment activity for students with academic difficulties. Even when an initial assessment was less than ideal, progress monitoring data are there to determine whether the selected intervention is moving the student forward. Indeed, much time and effort can be saved—and students will benefit more—when an initial assessment is good enough to identify an appropriate, evidence-based intervention and get the student learning, rather than spending time and resources on exhaustive assessment activities. Using the recursive process in which an evidence-based intervention is implemented, progress is monitored, data are evaluated frequently, and the program is adjusted as needed, practitioners can focus on the delivery of data-driven instruction that is responsive to students' needs over time. This, by far, is the best application of assessment to support student achievement.
CHAPTER 8
Academic Assessment within Response‑to‑Intervention and Multi‑Tiered Systems of Support Frameworks
The 2004 reauthorization of the Individuals with Disabilities Education Act (IDEA) included groundbreaking provisions for revising the procedures that local education agencies (LEAs) can use to identify students with specific learning disabilities (SLD). Under the provisions, students are considered potentially eligible to be identified as having an SLD if they do not adequately respond to empirically supported interventions, which became known as response to intervention (RTI). Specifically, IDEA states, "In determining whether a child has a specific learning disability, a local educational agency may use a process that determines if the child responds to scientific, research-based intervention as a part of the evaluation procedures" (Public Law 108-446, Part B, Sec 614 [b][6][b]). When using this method for determining SLD, one needs to show substantial differences between the referred student and similar-age/grade peers in both their level of performance and the rate at which they improved when provided with instructional interventions known to be highly effective if implemented with fidelity.

The purpose of RTI is to improve the way in which students with SLD are identified (Fletcher & Vaughn, 2009; D. Fuchs & Deshler, 2007). Educators recognized that if RTI were used to identify SLD, its processes and key components would form a framework for schoolwide implementation of effective services and data-based decisions for all students. Therefore, RTI later became associated with a tiered model of prevention and support in which RTI decisions could be made. Over time, RTI as a determination process for special education eligibility became conflated with the school- or districtwide framework of instruction and assessment that made RTI decisions possible. Subsequently, to better distinguish the eligibility decision-making process from the school and district systems that support it, the term multi-tiered systems of support (MTSS) was adopted. Additionally, the MTSS term helped better integrate tiered support systems for academic skills and behavior.

The purpose of MTSS is to provide a system of prevention and intervention to improve student outcomes in academic skills and behavior. In particular, the processes
of an MTSS model are consistent with efforts at prevention and early intervention of academic skills and behavior problems. Having this system in place makes it possible and more feasible to use RTI as a way to identify students with SLD because it provides a better context to determine when academic difficulties are the result of a learning disability or inadequate or ineffective instruction. Schools that choose not to use the RTI process for the identification of SLD could still implement an MTSS model as a means for delivering effective instructional services to all children in both general and special education (L. S. Fuchs, 2003; National Association of State Directors of Special Education, 2006; Vaughn et al., 2007). Models of MTSS include a common set of characteristics:
• Multiple tiers of increasingly intensive support. Overall, the "multi-tiered" aspect of the term refers to a framework of increasingly intensive intervention support options for academic skills and behavior. Levels of support within the system are referred to as tiers. The makeup of the supports within the tiers is structured based on the overall needs of students in the school as well as those of individual students, and the tiers intensify in the type of supports they provide. Tier 1 generally refers to school- and classwide supports, including evidence-based core instruction and systems that promote positive behavior. Tiers 2 and 3 refer to increasingly more intensive intervention supports provided to students who require more than Tier 1, which may include supplemental small-group interventions or individualized support (Marston et al., 2003; Brown-Chidsey & Steege, 2005; Brown-Chidsey et al., 2009; Vaughn et al., 2003).

• Universal screening. Universal screening of all students is a key feature of MTSS (L. S. Fuchs, 2003; Gresham, 2002) and is conducted to identify students in need of additional support beyond Tier 1.

• Data-driven decision making. Team structures consisting of teachers, administrators, and support staff meet regularly to manage and analyze data collected through the process. The data are used to guide decisions on designing school instruction and supports. Data analysis includes universal screening data, in which school teams determine which students are successful with Tier 1 support and which students are in need of additional support beyond Tier 1. For students receiving additional academic or behavior interventions in Tier 2 or 3, teams may periodically review ongoing progress monitoring data to guide decisions for individual students as well as to evaluate the overall effectiveness of the interventions provided (Marston et al., 2003; Shapiro, Hilt-Panahon, et al., 2010).

In addition, MTSS includes important provisions for ongoing professional development related to instructional delivery, managing data through the team process, identifying evidence-based programs, and facilitating school leadership, as well as the key inclusion of parents as partners in the educational process.

Figure 8.1 illustrates an example of a three-tiered MTSS model; however, we note that some implementations may look somewhat different based on the resources available and the needs of the school or district. As shown in the model, all students are provided with effective core instruction (Tier 1) that represents the base on which all supplemental instruction is built. A major assumption of RTI-based decisions for special education eligibility is that academic difficulties are the result of a student's lack of response to empirically supported instruction, and not a function of insufficient or ineffective teaching (L. S. Fuchs
& D. Fuchs, 1998; Vellutino et al., 1996). If a large percentage of students is not making sufficient progress within the core instructional program, then improvements to core instruction are needed. Although the anticipated percentage of students expected to make sufficient progress from core instruction has been debated in the literature (Shapiro & Clemens, 2009), many advocates have set goals of between 75 and 85% of students in all grades (National Association of State Directors of Special Education, 2006). The key aspect is that if most students in the classroom are academically or behaviorally successful, children who are not making the expected level of progress may require supplemental instruction or support to address their difficulties.

Students whose academic skills or behavior are below expectations require support that is supplemental to core instruction. Students showing moderate levels of difficulties, or risk factors for more intensive problems later, likely require instruction that is supplemental to and somewhat more intensive than core support, which is often provided in the form of small-group interventions and referred to as Tier 2 or strategic support. Students with more significant difficulties, whose level of performance is far below expectations, require more intensive and individualized forms of supplemental intervention, sometimes referred to as Tier 3. In some schools, Tier 3 may represent special education, and in other schools, Tier 3 supports are an option separate from special education. Interventions provided at Tiers 2 and 3 are supplemental to the core instruction program, and decisions on what those interventions look like are driven by data from universal screening, state- or schoolwide testing, and individual progress monitoring. Providing more intensive interventions should not require a student to demonstrate an additional period of failure to respond to Tier 1 or 2 supports. Instead, a "direct route" to more intensive intervention should be provided for students who demonstrate significant difficulties and for whom less intensive supports will clearly be insufficient.

FIGURE 8.1. General MTSS framework, showing Tier 1 (foundation: core instruction for all students), Tier 2 (strategic interventions for some students), and Tier 3 (intensive interventions for a few students).
Tier 1 consists of general education reflecting evidence-based programs, or otherwise sound and effective instructional techniques with strong empirical support indicating their effectiveness for academic skills and behavior. Universal screening data, consisting of brief assessments administered to all students, are designed to identify students with or at risk for difficulties in reading, mathematics, writing, or behavior, who may need more intensive forms of support.

Students in need of supplemental Tier 2 instruction in reading, mathematics, or writing often receive this instruction in small groups (i.e., between five and nine students) during a predetermined time (e.g., between 20 and 40 minutes per session, in most cases) in which the skill deficits and instructional needs of the students are targeted. In terms of behavior support, Tier 2 may involve supplemental strategies and supports provided to a subgroup of students in a school, such as the Check-In/Check-Out program (Simonsen et al., 2011; Swoszowski, 2014). Such supplemental instruction and support are provided for a period of weeks during which progress monitoring data are collected to determine the impact of the supplemental instruction on the students' targeted academic problems.

Some students for whom Tier 2 supports are not sufficient may receive additionally intensified and individualized Tier 3 interventions. In this tier, intervention groups are smaller (e.g., four or fewer students) or implemented on a one-on-one basis, supplemental instruction and assessment are more individualized and intensive, and the frequency of progress monitoring increases.

Under the RTI approach to disability identification, lack of response to core instruction and intensive intervention is considered evidence of eligibility for special education. However, it is important to stress that students should not be required to "fail" across all three tiers to be considered eligible for special education. Students who demonstrate significant skill deficits or behavior difficulties on the universal screening measures (or other data sources) can be moved directly to Tier 2 or Tier 3 support depending on their level of need. There is no justifiable reason in these cases to let a student struggle through core instruction when it is clearly evident that they are in need of additional forms of support. Additionally, referral for a special education eligibility evaluation can occur at any time and should not be delayed to collect data on non-response to Tier 2 or Tier 3 interventions.

Numerous publications are available for readers interested in much more detailed descriptions of the MTSS framework and process in action (see Burns & Gibbons, 2008; Brown-Chidsey & Steege, 2005; Freeman et al., 2017; Haager et al., 2007; Jimerson et al., 2007; Shapiro, Hilt-Panahon, et al., 2010). Readers are reminded that the MTSS framework of tiered support was referred to as RTI in older publications.

Within MTSS models there are two key elements of assessment: universal screening and progress monitoring. Although there are many assessment methods that can be used for universal screening and progress monitoring, measures developed under the CBM framework have experienced wide acceptance and adoption across many types of MTSS models (e.g., Bollman et al., 2007; L. S. Fuchs, 2003; Marston et al., 2007; Peterson et al., 2007; Witt & VanDerHeyden, 2007), especially in reading (January & Klingbeil, 2020; Kilgus et al., 2014).
The methods for conducting both universal screening and progress monitoring based on CBM have been described in detail in previous chapters. The purpose of this chapter is to offer details and case examples on the use of universal screening and progress monitoring processes when applied within an MTSS framework. Overall, more research has been conducted on the assessment and decision-making processes for universal screening. Relatively less work has focused on decisions made at the Tier 2 and 3 levels of support. Therefore, greater attention to screening and risk identification in the sections that follow is a reflection of more research being available in this area.
ASSESSMENT FOR UNIVERSAL SCREENING

An essential aspect of an MTSS model is a process for identifying students whose current academic or behavioral functioning is either well below expectations or at risk of falling significantly below expectations in the future. In educational contexts, universal screening refers to a procedure in which all students at each grade are evaluated at the beginning of the school year (at minimum), and possibly on one or two occasions after that (e.g., middle of the year, end of year). Screening consists of brief measures that are efficient to administer to large groups of students, are highly predictive of outcomes in the domain they measure, and demonstrate acceptable accuracy in identifying students who need additional support (i.e., classification accuracy).

The underlying conceptual basis of universal screening comes from models of public health. Screening all individuals at a certain time allows preventative treatments to be provided to individuals demonstrating risk factors for a disorder or disease, and more intensive evaluation and treatments for individuals demonstrating significant problems. For example, several screens are conducted by obstetricians and pediatricians at birth and across early childhood to identify health concerns or developmental disabilities. Forms of universal screening have long existed in schools, such as routine screenings for difficulties with hearing and vision. The advent of tiered models beginning in the early to mid-2000s resulted in the significant expansion of universal screening to academic skills and behavior.

A substantial literature base has identified skills that, when present in early grades, are highly indicative of subsequent overall success in reading, mathematics, writing, and behavior and, when absent, are indicative of subsequent difficulties in attaining proficiency. These skills, discussed comprehensively in Chapter 2, have formed the basis for several screening measures. For example, in reading, alphabetic knowledge (especially letter-naming fluency and letter–sound fluency) and phonemic awareness are key indicators that can be assessed in kindergarten, while word reading fluency and oral reading serve as strong indicators when measured in first grade and up. In mathematics, counting skills and number combinations fluency measured in early grades, and computation skills measured after that, are predictive of subsequent mathematics achievement. Handwriting, spelling, and writing fluency are indicative of writing skills development. Difficulties maintaining attention, establishing positive relationships with peers, and disruptive behavior are risk factors for long-term behavioral difficulties. The screening process allows schools to quickly identify students who are struggling or may be likely to struggle, and to intervene at the earliest point possible in their school careers (Francis et al., 1996). Although improvement in academic skills can be achieved at any age (including adulthood), the earlier one intervenes, the higher the possibility of improving outcomes across areas (Clements & Sarama, 2011; Lovett et al., 2017; McMaster et al., 2018).

Universal screening has multiple purposes and objectives. Its primary purpose is to identify students for whom core instruction alone is not sufficient. A universal screening measure might indicate the level of student performance in relation to other students and predetermined cut-scores.
The cut-scores are empirically derived based on their ability to predict future success. Those students who score at or above the cut-score would be considered to have a high probability of success in that learning domain. Those who fall below the cut-score have a higher probability of not succeeding in the future and therefore would be considered candidates for supplemental support. Some universal screening measures contain multiple cut-scores that can be used to discriminate students in terms
of most risk, some risk, and low risk of failure. See VanDerHeyden and Burns (2010) for a detailed discussion of the development of cut-scores within MTSS models.

In addition to its primary purpose of identifying students in need of supplementary support, universal screening can also accomplish some secondary objectives. The assessment of all students within a grade offers the opportunity to identify gradewide deficits in instructional outcomes. Data obtained on universal screening measures are usually compared against normative data obtained across a local or national sample of same-grade students. When the performance of an entire grade level is found to be substantially below expectations, the results point to the need to improve elements of instruction that are being provided to all students.

Another secondary use of universal screening is that it offers the opportunity to obtain local normative data. These data can be used to examine the relative standing of individual students in relation to their same-grade peers in the same school, school district, or state. They also can be used for goal-setting purposes, such as for progress monitoring as discussed in Chapter 7. In addition to comparisons to national databases, local normative data can reveal the relative standing of students in one school compared to other schools in the district, or the district as a whole. These data can help further evaluate the need for grade- or school-level improvements to instruction or intervention programs. In all, universal screening data offer schools an opportunity to see how individual students are doing in relation to national norms as well as local norms. For example, students who score low in the local normative context may be viewed within the school as in need of supplemental (Tier 2 or Tier 3) services. However, when comparing these same students against national normative data, it may become evident that although their performance in the local context is behind their peers, they are actually achieving at levels commensurate with same-age/grade students around the country. Similarly, in a district where a large percentage of students falls below national averages in achievement, a student who scores similarly to their peers in the local context may really be struggling when compared to national peers. In these situations, it is important to consider empirically derived cut-scores on screening measures, in which indications of risk are based not on norms but on students' actual scores in relation to the cut-point.

Yet another use for universal screening is to provide a baseline for setting grade-level goals for achievement. One of the key assumptions of MTSS models is strong core instruction (McMaster & Wagner, 2007). Schools at the early stages of an MTSS implementation process often find that the average performance of students within grades falls below the expected level of achievement. Data collected through the process of universal screening offer schools empirically derived baseline levels from which gradewide goals for performance can be set. These goals can also provide a basis for setting progress monitoring goals for individual students receiving Tier 2 or Tier 3 instruction. In addition to a score on a measure or set of measures, goals can involve reducing the proportion of students who score within "high risk" or "some risk" categories on a screening measure.
Using such goals, schools can work to improve the performance of all students through improvements in core instruction and behavior support, and to establish reasonable but challenging goals for gradewide evaluation. In summary, universal screening plays a very important role in an MTSS model. It provides the basis for identifying students in need of additional support. The screening measures can yield data that may be used to judge the overall success of gradewide instructional programs, as well as provide ways to establish individual-, grade-, and schoolwide goals for performance.
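As a minimal sketch of how cut-scores turn screening scores into the risk categories described above, consider the following; the cut-score values and student scores are placeholders, since real benchmarks come from the specific measure's empirically derived tables.

```python
def risk_category(score, some_risk_cut, low_risk_cut):
    """Map a screening score onto three risk categories using two cut-scores.
    The cut-score values used below are placeholders, not real benchmarks."""
    if score < some_risk_cut:
        return "most risk"
    if score < low_risk_cut:
        return "some risk"
    return "low risk"

fall_scores = {"Student A": 18, "Student B": 41, "Student C": 73}
for name, score in fall_scores.items():
    print(name, risk_category(score, some_risk_cut=30, low_risk_cut=60))
# Student A most risk / Student B some risk / Student C low risk
```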
Selecting Universal Screening Measures

Several aspects should be considered if a measure is to be used for purposes of universal screening. First, screening should be conducted only when it is needed. The resource expense of adding a universal screening measure is justified only if it provides data for making decisions that are not already available in the data schools have compiled. For example, students in the United States in fourth grade and beyond typically have extensive histories of achievement, particularly the results of state accountability tests, which provide detailed information on their achievement. In these cases, a universal screener may add very little unique information about risk beyond what the prior test results indicate. We discuss this in more detail in the "Screen Smarter" section below.

Second, the measure must be brief and easily administered. Instructional time in school is a precious commodity, and the use of any part of the instructional day for assessment must be carefully considered. A measure selected for screening needs to be efficient and take as little time as possible to administer (no more than a few minutes per student, or ideally, classwide), and it should be closely linked to making instructional decisions. It also must be affordable for schools, in terms of both the financial and human resources required to administer and score it.

Third, measures used for universal screening need to have a strong, well-established evidence base for their technical adequacy. Although a measure may appear intuitively relevant to instruction or the skills in question, it may perform poorly in accurately identifying students in need of support. The technical adequacy of a screening measure therefore requires consideration. Like any educational assessment, the measure should demonstrate adequate reliability (indicated by its internal consistency and/or test–retest reliability, and, if alternate forms will be used, their reliability). Measures should also demonstrate strong criterion-related validity, which refers to correlations between the screening measure and criterion measures of the skill domain the screening tool is designed to measure. However, there is another aspect of technical adequacy that is unique to screening contexts, and arguably the most important, which is the measure's classification accuracy (also referred to as diagnostic accuracy). Classification accuracy is the extent to which a screening tool accurately classifies students according to risk status. Put differently, this refers to the percentage of students truly not at risk and not in need of support who are identified as "not at risk" by the screen. Likewise, it also refers to the percentage of students truly in need of support who are identified as such by the screen. Later in this section, we describe the screening tools charts maintained by the National Center for Intensive Intervention, which are a helpful resource for selecting screening tools with independently verified evidence for their classification accuracy. Readers interested in more information and commentary on classification accuracy and how it is analyzed in educational settings should consult Clemens et al. (2016), Kettler et al. (2014), Kilgus and Ecklund (2016), VanDerHeyden (2013), VanDerHeyden et al. (2018), and Van Norman, Klingbeil, and Nelson (2017).

A fourth consideration for universal screening measures is that their results must be readily translated into actionable information by school staff.
A screening tool is worthless if users cannot make sense of its results or use them to discriminate students clearly not in need of additional support from students who are. The tasks of assessing all students in a school, scoring measures, entering data, and generating reports, all in a short span of time, are critical aspects that determine the feasibility and actual use of the data.
Technological support, such as computer administration and automated score reports, can greatly assist the process of reporting and analyzing screening data, and offers a framework for consistent interpretation. Good universal screening measures provide reporting capability that allows users to evaluate critical outcomes of the measure and obtain data that reflect both the level of performance and change(s) over time. A critical component of the data reporting process is the degree to which the measures are used by the educational staff to drive instructional decisions. Consequently, key characteristics that need to be considered when selecting screening measures include whether materials are available for training; the type of scoring capabilities that are available (paper-and-pencil vs. available technological enhancements); whether there is an integrated data management system that facilitates scoring, data entry, and data management; the kinds of reports that can be generated from the data management system and whether they will be interpretable by school staff; and the technical expertise needed to manage the system. Additional questions about the expense and logistics of measures are also important to consider, such as the amount of time and costs (both one-time and recurring) for administration, as well as the options for administration that are available (individual-, group-, or computer-administered).

Some screening tools now require users to administer a set of measures, rather than just one, to generate risk indicators in score reports. Administering multiple measures adds substantially to screening time and resources, and this requirement may not be immediately apparent in vendors' marketing materials. School administrators will often be approached by vendors of screening tools and other measures and must be able to critically evaluate the validity, feasibility, costs, and utility of a measure despite the very favorable impression conveyed by a vendor. School psychologists are in a good position to help with these decisions.
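To illustrate the classification accuracy concept described above, the sketch below computes sensitivity and specificity from a hypothetical 2 x 2 table of screening decisions crossed with actual year-end outcomes; the counts are invented for illustration and do not reflect any particular measure.

```python
def classification_accuracy(true_pos, false_pos, false_neg, true_neg):
    """Summarize a screener's accuracy from a 2x2 table of screening decisions
    crossed with actual year-end outcomes (all counts are invented)."""
    sensitivity = true_pos / (true_pos + false_neg)   # truly at-risk students who were flagged
    specificity = true_neg / (true_neg + false_pos)   # truly not-at-risk students who were not flagged
    return sensitivity, specificity

# Example: 100 students, 20 of whom truly needed support; the screener flagged 25.
sens, spec = classification_accuracy(true_pos=17, false_pos=8, false_neg=3, true_neg=72)
print(f"Sensitivity = {sens:.2f}, Specificity = {spec:.2f}")   # 0.85 and 0.90
```

Reviews such as those on the NCII tools charts summarize these kinds of indices so that teams do not have to compute them from scratch.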
Resources for Identifying and Evaluating Screening Tools The best resource for identifying universal screening tools is the website maintained by the National Center for Intensive Intervention (NCII; intensiveintervention.org). We have referred to it elsewhere in this book, such as for the selection of progress monitoring measures, intervention programs, and other training materials. For universal screening, under the Tools Charts menu, the NCII maintains evaluations of academic and behavior screening tools. The Screening Tools Chart provides summary reports in which a panel of experts reviewed evidence submitted by the vendors or creators of the measures. Clicking the tabs at the top of the chart will reveal ratings in areas such as classification accuracy (i.e., a screener’s accuracy in identifying students who truly are or are not at risk), technical standards (i.e., reliability, validity, sample representativeness, and bias analysis), and usability features such as administration format, time required for administration and scoring, and other factors that indicate how feasible the measure is to administer and interpret.
Universal Screening in Action Several steps are involved in implementing a universal screening data collection process. Schools select the measures to be administered, decide on the process for collecting the data, score and enter the data into a database, generate reports to interpret the data, and use the data for purposes of instructional decision making. In the area of reading, a large number of schools using an MTSS model have selected CBM measures for universal screening. More commonly used products in kindergarten
and early first grade include early literacy measures from publishers such as Acadience, AIMSweb, DIBELS, easyCBM, and FastBridge. CBM oral reading measures associated with these same publishers are commonly administered. Computer-administered and computer-adaptive tests have also increased in popularity as screening tools. Screening options for mathematics have grown considerably in the past 10 years, as evidenced by the NCII Tools Charts. Several evidence-based options for behavior screening have also emerged over the last decade (e.g., Cook et al., 2011; Dowdy et al., 2015; Dever et al., 2015; Kilgus et al., 2015; K. D. Kilpatrick et al., 2018; Lane et al., 2019). Other options, such as teacher rating scales for academic screening, are not used frequently but, as reviewed below, show considerable promise as cost-effective options for universal screening. After a screening measure is selected, it is administered at the beginning of the school year (i.e., fall, for schools on a traditional schedule), preferably after a few weeks of school have passed and students have reacclimated following the summer break. Screening the first week back to school is typically not recommended. Additional screening can be implemented midyear (typically in January) and end of year (typically during the last few weeks of school). However, schools should consider the costs of additional screening administrations relative to the expected benefits and anticipated use of the data. The idea of screening at the beginning, middle, and end of the school year has become a commonly accepted schedule when considering universal screening, but it does not have to occur on three occasions if data from the middle or end of the year will not be of use. There are potential benefits to midyear and end-of-year screening, but the value and applicability of each should be considered in each school’s specific context. First, midyear (winter) screening provides an opportunity to identify students who may have been missed by the fall screen or who moved to the school after the assessment. Second, it can provide data on overall achievement trends for all students from the beginning to the middle of the year, thereby permitting analyses of the effects of changes implemented to core instruction or overall grade- or schoolwide intervention systems. Third, midyear screening can help determine whether any changes in core instruction or intervention supports have reduced the number of “at-risk” students identified by the screen. Fourth, midyear screening can provide data for planning or reorganizing intervention supports for the remainder of the school year. The need for an end-of-year (spring) screening should be carefully weighed by a school, especially if screening will take place again in the fall following a summer break. One benefit of year-end screening is that it provides another data point, thus allowing for analyses of fall-to-spring and full-year achievement trends. It also permits analyses of whether the instruction and intervention efforts reduced the percentage of students identified as at risk, which aids planning for the following year. For schools that offer summer intervention support options, a year-end screen can help identify students who might benefit from them. It should be reiterated that the benefits of an end-of-year screening may not justify its costs for some schools.
This is especially true in grades 3 and up, where the administration of state-mandated achievement tests (often in the later part of the year) provides extensive data on student achievement that can be used the following fall. Later in this section, we discuss situations in which research has found that, for students in fourth grade and beyond, universal screening adds very little to decision making and student identification over and above the consideration of the prior year’s state achievement test data.
Data Collection Methods There are several choices for how screening data might be collected. In some schools, general education teachers are trained to administer the measures and assigned to assess students in their own classes. Measures that are group-administered (such as the maze in reading, most mathematics measures beyond kindergarten, and written expression), teacher-completed rating scales, and computer-adaptive tests have obvious logistical advantages. Most early literacy and early mathematics measures, as well as oral reading measures, require individual administration. Thus, arrangements to support teachers who are asked to administer the measures must be considered. This could mean hiring substitutes for those days or providing some other mechanism so that the instructional process is minimally disrupted. An advantage of having teachers assess their own students is that teachers will have a better opportunity to experience an individual student’s performance under the assessment conditions, and the informal, qualitative data obtained through these observations may facilitate the clear recognition of a student’s instructional need. At the same time, having teachers assess their own students has the potential to introduce deviations from standardization in the administration of the measures, which would be problematic in the interpretation of the data. Another approach often used in schools is to bring in trained educators who are assigned to conduct the assessment, which might include retired or substitute teachers. A school’s proximity to a university with a school psychology or special education graduate training program would also be an opportunity for support in screening; many faculty in these programs would value the opportunity for their students to obtain experience administering universal screening measures and could provide this service at low (or no) cost to the school. Students rotate to the assigned assessment location, the measures are administered, and the students then return to class. This approach has several advantages over having teachers assess their own students. First, the approach involves the least disruption to instruction since teachers continue teaching while students are being assessed one at a time. Second, because fewer individuals are collecting data, there is less chance for wide variation in standardized administration processes. Training a small group of experts in the administration and scoring of the measures allows the process to be managed more easily than having large numbers of teachers conducting the assessments. Of course, the disadvantage of this approach is the minimal information teachers will have regarding actual student performance on the measures. Teachers later learn their students’ scores, but lose access to the informal, qualitative data that they often use in focusing their instructional programs.
Scoring and Entering the Data into a Database Once the measures are administered and scored, the data must be entered into a database for analysis and reporting. All major providers of screening tools have technological supports available that automate the data entry process and offer additional advantages in the analysis of screening data (see the next section) and maintaining the security of the data; however, some may involve additional costs. Also, some providers offer options in which measures can be administered to students while being scored on a laptop or tablet, and scores are automatically uploaded. Computer-adaptive tests and other measures that students take individually on a computer offer advantages in that student data are immediately uploaded. Screening is a situation to which computer-adaptive tests are well suited. For those relying on manual entry and local data organization, the data can
be input using any spreadsheet program. Depending on the size of the school and the scope of the data management needed for universal screening, the use of technological supports for managing the process and safeguarding the data may be essential.
Generating Reports for Analysis and Decision Making The crux of universal screening comes in the interpretability of the reports that are generated. Data collection is pointless if the results cannot be translated into accurate decisions. Data systems may generate many different kinds of reports, allowing schools to examine their data from various perspectives. A key to the interpretation of the measures is developing an understanding of the conceptual basis underlying how each measure reports the data. As noted earlier, most measures designed for screening have established cut-scores that offer empirically supported predictions of future outcomes, and these provide the most straightforward way of interpreting screening data. Depending on the measure and the vendor, multiple cut-scores may be utilized that indicate different categories of risk for academic difficulties. These categories may be referred to in different ways by vendors. For example, DIBELS Eighth Edition reports four categories: negligible risk, minimal risk, some risk, at-risk. EasyCBM reports three categories: low risk, some risk, high risk. These categories, and the scores used to place students in categories, are based on analyses that determined students scoring at or above a certain score, or within a score range, are likely to be successful in meeting a subsequent benchmark. Some may also base the categories on studies that examined the proportion of students that demonstrated proficiency on a criterion measure of achievement. For example, in DIBELS Eighth Edition, the negligible risk category is based on prior analyses that found that nearly all students scoring at or above a certain score on the screening measure scored above the 40th percentile on a criterion test. For the minimal risk category, 80% of students scoring within a certain score range on the screening measure scored above the 40th percentile on the criterion measure. The some risk category is based on analyses that revealed that 80% of students scoring in this range on the screen scored below the 40th percentile on the criterion test. Finally, for the at-risk category, 80% of students scoring in this range on the screen scored below the 20th percentile on the criterion test. Information regarding how each vendor established their cut-scores, including the process and descriptions of the data analysis samples, is usually available on their website. Information about how risk categories were established is also available in reports on the NCII Screening Tools Chart. Vendors can also be contacted for information on how risk categories were devised. It is increasingly common for vendors to base screening decisions on multiple measures, which are referred to in some cases as composites. The rationale is that risk status is better estimated using multiple measures rather than just one. Some vendors may not provide risk indicators or benchmark status unless a certain number of measures are administered, which can add considerably to the testing burden for schools. When considering the adoption of a system for screening, users must examine the vendor’s materials carefully to determine how many measures must be administered to generate risk category reports. It must also be remembered that risk categories identified by a screening system are estimates of risk status, not guarantees. Although they are based on analyses of likelihood, users must use caution in how much confidence is placed in the report. 
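As an illustration of the logic behind these category definitions, the minimal sketch below computes, for several candidate cut-scores, the proportion of students scoring at or above the cut on a fall screen who later met a criterion (e.g., scored above the 40th percentile on a criterion test). All scores, outcomes, and cut values are hypothetical and are not drawn from DIBELS, easyCBM, or any other vendor's analyses.

```python
# Illustrative sketch only: evaluating how a risk cut-score relates to later outcomes.
# For each candidate cut-score, compute the proportion of students scoring at or above
# it on a fall screen who later met a criterion (e.g., scored above the 40th percentile
# on a criterion test). All values are hypothetical, not from any vendor's analyses.

def success_rate_at_or_above(screen_scores, met_criterion, cut_score):
    selected = [met for score, met in zip(screen_scores, met_criterion) if score >= cut_score]
    return sum(selected) / len(selected) if selected else float("nan")

screen_scores = [10, 18, 22, 30, 35, 41, 47, 55, 62, 70]          # fall screening scores
met_criterion = [False, False, False, False, True,
                 True, True, True, True, True]                    # above 40th %ile later

for cut in (20, 30, 40, 50):
    rate = success_rate_at_or_above(screen_scores, met_criterion, cut)
    print(f"cut-score {cut}: {rate:.0%} of students at or above it met the criterion")
```

A vendor would run this kind of analysis on large samples and then set the score ranges (e.g., negligible, minimal, some, or at risk) so that each range corresponds to an empirically observed likelihood of later success.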
The most difficult predictions occur in the middle of the achievement distribution. Predictions are
much easier at the extreme ends; very high achievers are extremely likely to meet subsequent achievement targets without additional support, whereas very low achievers are highly unlikely to meet those goals and are therefore in need of intervention. However, outcomes for the students who score in close proximity to the cut-score, either just above or just below it, are much more difficult to predict. Additional sources of information may be needed to determine the need for more support for students who score in the middle ranges on a screening measure. Data generated from the measures show the distribution of student performance and the recommended level of intervention. For example, Figure 8.2A displays the outcomes of a fall assessment of DIBELS Oral Reading Fluency (D-ORF) for the second grade at Airy School. These data used the DIBELS Sixth Edition measures and risk categories. Using these cut-scores, a total of 70% of students were found to be at or above the benchmark level of reading 44 WCPM, and 9% of students were reading 25 or fewer WCPM on these same passages. In addition to the data derived by the D-ORF measure, teams added other available data sources, such as weekly or unit tests on the measures embedded within the instructional program, end-of-year outcomes from the previous grade, and additional data-based information that might be available from other sources. Together, these data support decisions made within an MTSS model as to which students were currently achieving at or above expected benchmark levels, who may need strategic levels of intervention (Tier 2), or who may need more intensive intervention (Tier 3). When the universal screening data are repeated in the winter of the year, one can examine the impact of the MTSS implementation process. For example, Figure 8.2B shows the same second grade at Airy School in the winter when five additional students had moved into the grade. Between fall and winter, an additional 6% of students moved into the low-risk category (the winter benchmark was now 68 WCPM on the D-ORF measure). At the same time, only one student moved from the some risk category into the at-risk category.
FIGURE 8.2A. Distribution of D-ORF scores for Airy School, grade 2, in fall: 70% (n = 41) low risk, 21% (n = 12) some risk, 9% (n = 5) at risk.
FIGURE 8.2B. Distribution of D-ORF scores for Airy School, grade 2, in winter: 76% (n = 48) low risk, 14% (n = 9) some risk, 10% (n = 6) at risk.
Comparing these two histograms provides an indication to the education staff that the MTSS process was positively impacting the overall performance of students in this particular grade. Figure 8.3 shows a similar type of display generated by AIMSweb, which provides additional information below the graphic, illustrating the actual movement between fall and winter of students across the tiers at the Beatty School. Using these data, teams within the MTSS model can begin to discern which students had responded to the interventions provided at Tier 2 and perhaps no longer required that level of intervention, and which students continued to show less progress than desired. Of course, far more data than the results of D-ORF alone would be used to decide which students should be assigned to tiered interventions. Additional types of reports can be used to fully understand the universal screening data. As seen in Figure 8.4, the results of the D-ORF of Ms. Richey’s second-grade class at the Airy School are displayed. The data show which students’ performance remained the same and which changed from one risk category to another between the fall and winter assessments. For example, looking at those students who were at benchmark (low risk) in the fall, 8 out of 10 students remained in that category in the winter. Likewise, of the 4 students in Ms. Richey’s class who received strategic levels of support (Tier 2) during this time, 3 of them had now achieved a level of performance at or above the winter benchmark and would be considered at low risk for subsequent academic problems. Also, none of the students receiving intensive support following the fall benchmark reached the midyear score of 68 needed to be categorized with the strategic group. However, of the 3 students, 2 of them exhibited substantial gains over their fall scores, scoring just slightly below 68. The outcomes here suggest that when considering “responsiveness to intervention,” only 1 of these students is responding so slowly that consideration of eligibility for special education services might be examined. Decision-making processes are similar regardless of which measure is selected for universal screening. Each of the data sources provides similar reports for individual students, or reports that combine all students in a class, grade, or school. Reports can also be
generated that categorize the students into groups according to their predicted likelihood of future success and facilitate the team decision-making process. For example, STAR™ (Renaissance Learning; www.renlearn.com) is a computer-adaptive test taken individually by students on computers. Each assessment takes approximately 15–20 minutes. The report in Figure 8.5 shows the distribution of students in grade 1 at West Elementary School. STAR Early Literacy uses the 40th percentile of its national database of users as the key benchmark point. The vendors selected the 40th percentile to represent the differentiation between those “on track” (low risk) and those “on watch” (some risk), based on the fact that most states use approximately the 40th percentile of the distribution to differentiate proficient from nonproficient students on the statewide assessments. STAR identifies those below the 10th percentile as high risk and urgently needing intervention (Tier 3), and those between the 10th and 25th percentiles as in need of intervention (Tier 2). In addition, STAR identifies a category of students between the 25th and 40th percentiles as on watch, meaning that, although currently below the benchmark, they may need only focused attention during core instruction (Tier 1) to improve their performance to benchmark levels.
FIGURE 8.3. AIMSweb example of second grade at Beatty Elementary School, from fall to winter, on R-CBM. Note: Unscored also includes any students who may have been transferred.
FIGURE 8.4. Example of midyear results for Ms. Richey’s second-grade class at Airy School, listing each student’s beginning and middle-of-year ORF scores relative to the midyear goal of 68 WCPM, grouped by beginning-of-year instructional recommendation (Benchmark, Strategic, or Intensive), with summary counts of students in each group who reached the midyear goal.
One of the other major requirements within MTSS models is for data decision-making teams to determine which students would benefit from supplemental interventions (Tier 2 or 3). Universal screening data collected at each benchmark assessment period provide a partial window into student performance that can assist teams in making such a determination. When the teams meet, data displays from the management system they are using provide a visual indicator of the recommendations based on the outcomes of the measure. Figure 8.6 offers an example of the individual data for Mrs. Ruiz’s second-grade classroom at Beatty Elementary. As evident from the figure, 16 of the 19 students in Mrs. Ruiz’s class were recommended, based on their universal screening scores, for assignment to the benchmark level (Tier 1), whereas 3 students were assessed as in need of Tier 2 or Tier 3 interventions. Teams of teachers and staff from the school would examine these data, in addition to other available data sources (such as scores from unit or weekly assessments from core instruction, and rates of improvement from progress monitoring data [see below] for students who had been in Tier 2 or Tier 3 instructional groups), to determine if any students needed additional intervention. Teams would match students, based on their skill needs, with designated research-based interventions. Universal screening represents only a single data source. Combining data sources offers a richer opportunity to determine the nature and level of a student’s needs. In particular, progress monitoring conducted for students in Tiers 2 and 3 provides data for fully understanding the degree to which students are indeed responding to the tiered instruction. Additional sources of information, such as behavior data, teachers’ input, and classroom assessments, can contribute to these decisions.
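Reports like those in Figures 8.3 and 8.4 essentially summarize how students move among risk categories from one benchmark period to the next. A minimal sketch of that kind of fall-to-winter transition summary is shown below; the category labels and student data are hypothetical, and the code is not tied to any vendor's reporting system.

```python
# Illustrative sketch only: summarizing fall-to-winter movement across risk categories,
# similar in spirit to the movement reports shown in Figures 8.3 and 8.4.
# The student data below are hypothetical.
from collections import Counter

fall =   ["low", "low", "low", "some", "some", "some", "at risk", "at risk"]
winter = ["low", "low", "some", "low", "some", "low", "some", "at risk"]

transitions = Counter(zip(fall, winter))
for (fall_cat, winter_cat), n in sorted(transitions.items()):
    print(f"fall {fall_cat!r} -> winter {winter_cat!r}: {n} student(s)")
```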
FIGURE 8.5. Screening outcomes displayed by STAR Early Literacy for grade 1 at West Elementary School. Reprinted by permission.
FIGURE 8.6. Outcomes for Ms. Ruiz’s second-grade class in winter at Beatty Elementary. For each student, the report lists the DORF and NWF scores and corresponding percentiles, the associated risk category (low risk, some risk, or at risk), and the instructional recommendation (Benchmark - At Grade Level; Strategic - Additional Intervention; or Intensive - Needs Substantial Intervention).
Data decision-making teams in MTSS models have been found to generally agree with the decisions rendered through universal screening alone. Shapiro’s research on 2 years of decision making in a high-quality implementation of an MTSS model in three schools found that teams in the winter of grade 2 generally agreed with the recommendations of the universal screening measures between 90% and 95.2% of the time (Shapiro et al., 2009). In the small number of cases where teams disagreed with the recommendations of the universal screening, the researchers examined the actual status of the students when tested at the next benchmark period. For example, at the fall benchmark meeting, a total of 8 students (across 2 years) were assessed by DIBELS to be at the benchmark level but were placed in the strategic (Tier 2) category by the teams. In all of these cases, the teams used additional data sources (e.g., weekly and unit tests from the publisher of the reading series being used by the school) to decide that these students needed additional, supplemental instruction in reading. When these students were assessed midyear, all were found to score at the benchmark level. Thus, in these cases, the teams using additional data disagreed with the decision rendered by universal screening and provided additional support to the students, all of whom scored at the benchmark level at midyear. Of course, it is not possible to know whether these students would have remained at benchmark without the additional support. In general, it was determined that when teams disagreed with the initial recommendation from the universal screening, in 75–80% of cases the student scored at the subsequent benchmark assessment at the level recommended by the universal screening measure, rather than that recommended by the team (Hilt-Panahon et al., 2010).
Additional Considerations in Universal Screening Screen Smarter: When Universal Screening May Not Be Needed Universal screening is most useful when (1) there are few objective, quantifiable data available for students, and (2) academic skills are at early stages of development, when it is challenging to identify students who are at risk for subsequent academic difficulties. On the other hand, there are situations in which collecting universal screening data may be of little benefit over the use of data that have already been collected by the school (Gersten et al., 2009). In third grade and beyond, most students in public schools in the United States take state accountability tests in reading, mathematics, and in some grades, writing. Denton et al. (2011) found that with students in grades 6–8, the best predictor of performance on a state accountability reading assessment was students’ scores on the same test the previous year. Adding other screening measures, such as tests of oral reading and verbal knowledge, to the prior state test results improved overall accuracy by only 1%. However, Denton and colleagues determined that considering oral reading scores for students who scored low on the prior year’s state test added to diagnostic utility and decision making by helping to identify a subgroup of students for whom poor decoding and reading fluency were the reasons behind their poor reading comprehension (see the next section on two-stage screening), thereby aiding instructional decision making and allocation of intervention support. These findings were echoed by Vaughn and Fletcher (2012) for secondary grade students. More recently, Paly et al. (2021) investigated the relative benefits of using universal screening measures over state test results for students in grades 4 through 8 in a large suburban school district. Cost-effectiveness analyses, which determine the ratio of the costs of an approach (i.e., fiscal, personnel, and instructional time resources) to its value (i.e.,
ability to accurately identify students), found that the use of CBM universal screeners was the least cost-effective: it was the most expensive and the least accurate. Both prior-year state test data and a combination of universal screening data with students’ prior-year state test results demonstrated acceptable accuracy in identifying at-risk students. However, the use of prior-year state test data alone was the most cost-effective; it was accurate while being the least expensive. Information was not available to evaluate whether the CBM data improved decisions beyond simply identifying risk status, such as what supplemental support should look like for students identified as being at risk. In mathematics, VanDerHeyden et al. (2017) found that using scores from the prior year’s state test was the most efficient, cost-effective option for identifying the risk status of fourth and fifth graders compared to administering a CBM mathematics measure for universal screening. Also in mathematics, Van Norman, Nelson, and Klingbeil (2017) examined whether a computer-adaptive test used in a gated screening procedure with prior-year state test results added value to decision making over the state test results. They found that the state test demonstrated stronger classification accuracy for predicting mathematics outcomes than the computer-adaptive test; the addition of the universal screening resulted in only minor improvements in accuracy over the use of the state test alone. Nelson, Van Norman, and VanDerHeyden (2017) described a process in which state test data were used to create local cut-scores, which were then utilized to predict risk status for future cohorts of students. Overall, the results of this work raise considerable questions regarding the value, relative to the costs, of administering universal screening for academic skills after third grade (i.e., when significant extant data exist). The longer students are in school and the more test data they accumulate, the less likely additional universal screening measures are to tell educators something they do not already know about students’ risk status. Indeed, the studies reviewed above indicate that students’ scores on the prior-year state test are often sufficiently accurate, and additional screening measures are unlikely to justify their expense in fiscal resources and instructional time. There are exceptions, however. States may change their accountability test or the cut-scores for proficiency, in which case schools might benefit from having universal screening data until cut-scores on the new test are established (Klingbeil et al., 2018). Screening measures may have value for indicating risk in skill areas that are less informed by extant data, such as pre-algebra (Ketterlin-Geller et al., 2019). Students may also move into the district without prior data available, in which case a school may wish to have screening measures on hand to administer to them individually. Another area in need of research is determining whether additional screening measures help educators better align students to interventions after risk is identified. In short, contemporary evidence indicates that the priority grades for universal screening are kindergarten through third grade, when data are needed for identifying students in need of supplemental support and for informing the effectiveness of early reading, mathematics, and writing instruction.
However, when weighing risk identification in the middle elementary grades and beyond, schools should carefully consider whether administering universal screening measures holds any value over simply using student test data that already exist.
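A minimal sketch of the gated ("screen smarter") logic described above is shown below: prior-year state test results serve as the first gate, and a brief oral reading screen is administered only to students who scored below proficiency, to help separate word-reading/fluency problems from other sources of comprehension difficulty. The proficiency value, the WCPM cut, and the decision labels are hypothetical illustrations, not values from any state test or published study.

```python
# Illustrative sketch only: a gated ("screen smarter") decision flow. The proficiency
# cut on the prior-year state test and the WCPM cut on the follow-up oral reading
# screen are hypothetical values, not taken from any state test or published study.

STATE_TEST_PROFICIENT = 400   # hypothetical proficiency cut-score, prior-year state test
ORF_CONCERN = 90              # hypothetical WCPM cut on a brief oral reading screen

def screening_decision(prior_state_score, orf_wcpm=None):
    """Decide next steps using prior-year state test results as the first gate."""
    if prior_state_score >= STATE_TEST_PROFICIENT:
        return "not at risk; no additional screening needed"
    if orf_wcpm is None:
        return "administer brief oral reading screen (second gate)"
    if orf_wcpm < ORF_CONCERN:
        return "at risk; word reading/fluency intervention indicated"
    return "at risk; comprehension-focused follow-up indicated"

print(screening_decision(430))                # proficient last year: stop here
print(screening_decision(360))                # below proficiency: second-stage screen
print(screening_decision(360, orf_wcpm=75))   # decoding/fluency concern
print(screening_decision(360, orf_wcpm=120))  # fluent but still below proficiency
```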
Behavior Screening Universal screening in behavior is important because it can identify students needing support who would have been missed if a school was only relying on teacher referrals
(Eklund et al., 2009). Behavioral difficulties are less stable than academic difficulties, and thus updated annual behavior screening is recommended rather than relying only on behavior data from a previous year (Dever et al., 2015). Externalizing behavior is influenced by one’s immediate environment and thus behavior can change when environments change. Internalizing forms of behavior and emotional difficulty (i.e., anxiety, depression) are particularly likely to change over time (Dever et al., 2015), and their covert nature requires screening measures to help educators identify anxiety and depression that may otherwise go unnoticed among students. Additionally, adolescence is a time during which internalizing behaviors, if unaddressed, can become serious issues for a student’s long-term achievement and well-being. Behavior screening multiple times per year may not be necessary, however (Dowdy et al., 2015). At the time of writing, the NCII Screening Tools Chart indicates one behavior screening measure, the Social, Academic, and Emotional Behavior Risk Screener (SAEBRS; Kilgus et al., 2014), that shows strong evidence for its classification accuracy and other technical properties. However, in addition to the SAEBRS, there are several other evidence-based options for externalizing and internalizing behavior screening, including the Student Risk Screening Scale for Internalizing and Externalizing Behaviors (Lane et al., 2012, 2016), Behavior Screening Checklist (Muyskens et al., 2007), Student Internalizing Behavior Screener (Cook et al., 2011), and Youth Internalizing Problems Screener (Renshaw & Cook, 2018). For a review of the reliability and validity of various behavior screening tools, see Allen et al. (2019).
Summary: Universal Screening in MTSS Universal screening holds a very important place in an MTSS model. It is the mechanism for identifying students in need of supplemental support, as well as students for whom core instruction will likely be satisfactory. Regardless of whether screening measures are administered or schools use extant test data, some form of high-quality, objective data should be used to reliably identify students in need of intervention. Screening data help schools allocate resources and identify where coaching and professional development are needed. Universal screening can also be used to evaluate the effects of core instruction, and the impact of intervention supports in reducing the proportion of students identified as at risk. Universal screening serves as a key underlying component of MTSS, but it is just one of the assessment methods that are central to the model.
PROGRESS MONITORING WITHIN MTSS In MTSS models, progress monitoring plays an important role in providing ongoing data for (1) students who are receiving supplemental interventions through Tier 2 or Tier 3 supports (i.e., the primary and most common use of progress monitoring in MTSS), or (2) a subset of students in Tier 1 for whom school staff would like additional data, which might include the students for whom a decision on intervention or placement has not yet been made. This may include students for whom screening data were not definitive, or students who have recently moved out of intervention support and for whom educators would like ongoing data to ensure that their skills are maintained. Data collected through progress monitoring are used to provide frequent feedback to teachers on the effectiveness of instruction and to inform instructional decisions. These data indicate when effective interventions should continue, or when adjustments
to interventions are needed. Progress monitoring data can inform decisions to move students in and out of tiers; however, the data should be considered with other sources of information to determine tier placement. The purpose and implementation steps of progress monitoring are fully described in Chapter 7. What follows is a discussion on the integration of progress monitoring decision making within MTSS. Table 8.1 provides a list of the key items that need to be considered when using progress monitoring within an MTSS model.
Deciding Who Receives Progress Monitoring Progress monitoring data are collected at frequent intervals when students are assigned to receive supplemental instructional intervention beyond Tier 1. The purpose of these data is to provide frequent feedback about the impact of the tiered interventions. The data offer teachers and evaluation staff an indication, between universal screening assessments, of how students are responding to the interventions that have been implemented. The data also offer indications to teachers when adjustments to current interventions are needed, when intervention intensification should be considered, and when intervention can be faded. This process is described in detail in Chapter 7. Although it is expected that students assigned to Tier 2 or Tier 3 interventions receive progress monitoring, there are times when teams recommend that students within Tier 1 should be monitored in this way. For example, consider a student who had received supplemental intervention at Tier 2, showed significant improvements, and was therefore
TABLE 8.1. Important Considerations in Progress Monitoring within an RTI Model
Who? Students assigned to Tier 2 or Tier 3 interventions.
Frequency? Tier 2 = approximately once every 2 weeks. Tier 3 = approximately once every week.
Measures?
  Reading
    Grades K and 1: Early reading measures (letter–sound fluency, word reading fluency).
    Grades 2+: Oral reading, comprehension questions (when relevant).
  Math
    Grades K and 1: Early Numerical Competencies and Number Combinations Fluency.
    Grades 2+: Number Combinations Fluency, Computation, or Concepts–Applications.
Level of monitoring? Grade level typically used. Instructional level (below grade level) used for students whose identified instructional level is at least two levels below chronological grade.
Data obtained? Target rate of improvement. Attained rate of improvement.
moved out of Tier 2 intervention and received only Tier 1 instruction. Teachers often express some trepidation about students’ ability to maintain their successful performance without the additional support of Tier 2 instruction, despite the data suggesting that their performance is consistent with that of students who are not in need of supplemental instruction. In such cases, a team might recommend that the student who is moved from Tier 2 to only Tier 1 instruction receive progress monitoring for a period of time to verify that they maintain their successful performance. Another perspective that might lead a team to conduct progress monitoring with students in Tier 1 is based on a recommendation made by Compton et al. (2006), who found that they could improve the predictability of outcomes for first graders by including a brief period of progress monitoring for all students following the universal screening process. This may be particularly important for students for whom the universal screening data were not completely clear, such as students who scored either closely below or above a screening cut-point. Although it is not typical to conduct progress monitoring for students assigned to Tier 1, it is always a choice of the team if doing so will improve the data decision-making process.
Team‑Based Decisions One of the hallmarks of MTSS is its focus on using data to make decisions about instruction and resource allocation. Data-based decision making is enhanced when teachers and other school staff discuss the data together, and school psychologists and other evaluation staff can play key roles on these teams. In addition to evaluating universal screening data, school data-decision teams can meet periodically to review the progress monitoring data of students currently receiving tiered interventions. Further improving decision making are frameworks that help school staff use the right data, evaluate the right information from the data, and make decisions efficiently. The Academic Skills Problems Fifth Edition Workbook that accompanies this text includes several examples of forms designed to help school- and grade-level data teams with various types of decisions, many of which we created on the MTSS implementation projects that Shapiro led in Pennsylvania. One of the forms is the Grade-Level Meeting Progress Monitoring Data-Based Decision Form, which helps grade-level data teams (typically, the teachers of a grade level plus any staff who support intervention or assessment with those students, and administrators as needed) review students’ progress monitoring data and make instructional decisions. The form includes places to indicate each student and their current tier. Discussing each student’s progress monitoring graph, the team determines whether the data indicate the student is above, near, or below their targeted rate of improvement, and checks the appropriate box. This informs discussions on whether to continue the intervention, raise the goal, or make an adjustment to the intervention, and the team considers both the progress monitoring data and other information in making this decision. For example, the data may indicate a student is below target, but a team member might note that the student has had frequent absences from school recently; therefore, an instructional change might be premature. A place for comments on the decision is also provided. In our MTSS implementation, grade-level teams met monthly and used this form in making decisions. In addition to providing a structure for discussions and decision making, the forms also provide a record of the decisions and the reasons they were made. Several other publications provide discussions on data-based teaming in MTSS models, and the valuable role that school psychologists play on these teams (e.g., Eagle et al., 2015; Kittelman et al., 2021; McIntosh & Goodman, 2016; Shapiro, Hilt-Panahon, & Gischlar, 2010).
Considering Progress Monitoring Data in Special Education Eligibility Evaluations A secondary purpose of progress monitoring within an MTSS model is to contribute data to evaluations for eligibility for special education. Here, the progress monitoring data would be a key source of information because they provide data on students’ response (or lack thereof) to interventions they received. There are several considerations that practitioners should be mindful of when considering progress monitoring data as part of an eligibility evaluation:
1. Progress monitoring data should never be the sole source of information for special education eligibility decisions. Although there are different models for disability identification, one of the strongest (in terms of its theoretical basis, its logic, and the research evidence) is Fletcher and colleagues’ model of learning disability identification (Fletcher et al., 2019). In this model, progress monitoring data collected as part of supplemental interventions are considered as indices of responsiveness to intervention, and used in conjunction with assessment results from standardized, norm-referenced measures of relevant academic skills and consideration of exclusionary factors. See Fletcher et al. (2019) and Miciak and Fletcher (2020) for further details on this model of learning disability identification. The RTI Action Network provides additional resources for using RTI data to inform learning disability identification: www.rtinetwork.org/toolkit.
2. A formal evaluation for special education eligibility, especially when requested by a parent or guardian, should never be unnecessarily delayed to collect progress monitoring data, or to “force” students through tiered intervention to acquire evidence of nonresponse. When there is a parent or guardian request for an evaluation, or when there is substantial evidence of significant academic or behavioral difficulties, delaying an evaluation unnecessarily can place a school in a very problematic legal position.
Progress Monitoring in Action in an MTSS Context Here, we provide some examples of progress monitoring conducted within an MTSS model and the types of decisions that might be made based on the data. We again refer readers to Chapter 7, where we describe the entire process for implementing progress monitoring, including measure selection, goal setting, determining the frequency of monitoring, and making instructional decisions based on the data.
Example 1: Dan For students being monitored with material from their assigned grade level, one usually sets a goal consistent with achieving year-end benchmark levels of performance. However, when monitoring below grade level, one needs a more systematic method for setting goals and knowing when to move a student to the next-highest level of monitoring materials. One approach to goal setting when students are monitored in below-grade-level materials involves setting a target for students to reach the 50th percentile within the below-grade-level material by the next benchmark assessment period. For example, Figure 8.7 shows the results of the survey-level assessment for Dan, a third-grade student. Dan was shown to be substantially below the 25th percentile in reading for third-grade students, reading 35 WCPM when the 25th percentile for the grade was 49 WCPM.
FIGURE 8.7. Results of survey-level assessment of Dan in the fall.
When he was assessed in second- and first-grade material, it was found that his instructional level in reading was at the second grade, which in this case is where the team decided to start his progress monitoring while he received instruction in the MTSS model operating at their school. In selecting a goal for Dan, the team examined the expected performance for students reading second-grade material at the 50th percentile at the winter benchmark period. As shown in Figure 8.8, the team identified a score of 75 WCPM as the appropriate goal for Dan. The rate of improvement expected for Dan to meet this goal was approximately 1.7 WCPM per week: (75 WCPM – 45 WCPM)/18 weeks. At the middle of the school year, Dan’s progress monitoring data showed that he had reached this goal, reading at 77 WCPM across the last three progress monitoring data points prior to the winter benchmark assessment. Dan’s performance was then reassessed in third-grade material at the middle of the year, when he was found to be reading at 70 WCPM, just above the 25th percentile for midyear third graders. As seen in Figure 8.9, the goal for Dan from midyear to the end of the year was reset to 90 WCPM, a rate of 1.0 WCPM per week, which would allow him to maintain his performance level consistent with reading at the third-grade level of instruction. Had he not attained reading performance at the 25th percentile of the third-grade level, the team would have continued monitoring him in second-grade material, reset his goal to remain between the 50th and 75th percentiles in the instructional-level material, and would have reassessed his performance in third-grade-level material once again as soon as his progress monitoring data reflected ongoing success in meeting the new reset goal.
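The target rate-of-improvement arithmetic in Dan's example can be expressed compactly. The minimal sketch below simply reproduces the calculation (goal score minus current score, divided by the weeks until the next benchmark) using the values reported above; the function name is illustrative, not from any published tool.

```python
# Sketch of the target rate-of-improvement (ROI) calculation used with Dan:
# (goal score - current score) / weeks until the next benchmark period.

def target_roi(current_wcpm, goal_wcpm, weeks):
    return (goal_wcpm - current_wcpm) / weeks

# Values from the example above: fall score of 45 WCPM in second-grade material,
# winter goal of 75 WCPM, 18 weeks until the winter benchmark.
print(round(target_roi(current_wcpm=45, goal_wcpm=75, weeks=18), 2))  # about 1.7 WCPM/week
```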
Example 2: Jerry Suppose another second-grade student, Jerry, is assigned to Tier 2 interventions based on data collected at the winter benchmark, midway through the 36-week school year.
FIGURE 8.8. Example of below-grade-level goal setting for Dan in the fall.
FIGURE 8.9. Example of goal setting at grade level for Dan from winter to spring.
At this point, Jerry’s performance on the oral reading measure was found to be 55 WCPM, below the expected benchmark of 68 WCPM (based on the DIBELS Sixth Edition tools). Given Jerry’s midyear performance, a criterion-referenced goal is established for Jerry to reach the benchmark level of 90 WCPM by the end of the year. The targeted rate of improvement would be calculated by subtracting Jerry’s current score from the end-of-year goal and dividing by the number of weeks remaining in the school year, that is, (90 WCPM – 55 WCPM)/18 weeks = 1.94 WCPM/week. Thus, for Jerry to narrow the gap between himself and his classmates, he needs to improve his reading, via the Tier 2 instructional program, by approximately two words per minute per week. For comparison, an examination of the DIBELS database shows that typically achieving second graders at the benchmark score of 68 WCPM at the middle of the year finish the school year at 90 WCPM, which translates to a typical second-grade rate of 1.22 WCPM/week between the middle and end of the year. Thus, Jerry’s targeted rate of improvement is greater than what is typically observed, which is consistent with the goal that supplemental intervention be more intensive than core instruction, and the need for struggling students to make faster progress than what is typically observed in order to close achievement gaps. Finally, one must determine Jerry’s attained rate of improvement. The calculation of the attained rate can be made in several ways. The simplest is a 2-point slope calculation: take the final data point in the series, subtract the starting data point, and divide by the number of weeks between them. Figure 8.10 shows the results of Jerry’s progress monitoring from January through the end of the school year, at 2-week intervals consistent with intervention at Tier 2 of an MTSS model. As can be seen, Jerry’s last score in the series, on May 14, was 81 WCPM. Using a 2-point calculation, Jerry’s attained ROI would be 1.44 WCPM/week (i.e., (81 – 55)/18 weeks). Looking at this value alone, one would conclude
FIGURE 8.10. Jerry’s progress monitoring.
that although Jerry certainly made substantial progress over the 18 weeks of intervention, he fell below his targeted rate of improvement of 1.94. Because a single data point can vary due to many factors, a 2-point slope calculation can lead to wide variations in outcomes. As such, several alternatives to 2-point slope calculations have been suggested. One approach is to take the mean of the last three data points in the series and compare it against the mean of the first three data points. In Jerry’s case, the mean of the last three points is 88 and the mean of the first three is 58. A calculation of slope based on these values would place the attained ROI at 1.67 WCPM/week ((88 – 58)/18), a value substantially higher than the 2-point calculated value. Among methods for calculating slope, one of the most precise is to use the ordinary least squares (OLS) regression of all the data points in the series. This method provides a statistical average of the data over time and offers a score that is more representative of the entire trend across time. In Jerry’s case, this value is 2.16 WCPM per week, a value reflecting the truer outcome of Jerry’s performance and showing that Jerry’s overall trend was higher than both the targeted ROI of 1.94 and the typical ROI of 1.22. The OLS calculation is computed and displayed on graphs generated by most CBM vendors and is also readily available through spreadsheet programs such as Microsoft Excel. The interpretation in Jerry’s case is that he was not only improving at a rate higher than would be expected of typically performing grade-level peers, but also at a rate higher than what was expected based on his own targeted performance. Clearly, Jerry is an example of a student who is very responsive to Tier 2 instruction. Teachers examining these data would likely conclude that the tiered intervention was successful in changing Jerry’s trajectory of learning and that he would be moved out of tiered instruction at the start of the next school year.
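The three slope approaches described above (2-point, first/last three-point means, and OLS) can be computed directly from a progress monitoring series. The sketch below illustrates all three on a hypothetical biweekly data set; the scores are not Jerry's actual data, so the numeric results will differ from the values reported in the text.

```python
# Sketch of three ways to estimate an attained ROI from a progress monitoring series:
# a 2-point calculation, first/last three-point means, and an OLS slope over all points.
# The data below are hypothetical and are not Jerry's actual scores, so the numeric
# results will differ from the values reported in the text.
import numpy as np

weeks = np.array([0, 2, 4, 6, 8, 10, 12, 14, 16, 18], dtype=float)   # biweekly monitoring
wcpm = np.array([55, 54, 60, 63, 62, 70, 74, 73, 79, 81], dtype=float)

two_point = (wcpm[-1] - wcpm[0]) / (weeks[-1] - weeks[0])
first_last_means = (wcpm[-3:].mean() - wcpm[:3].mean()) / (weeks[-1] - weeks[0])
ols_slope = np.polyfit(weeks, wcpm, deg=1)[0]      # slope of the best-fit line

print(f"2-point ROI:            {two_point:.2f} WCPM/week")
print(f"first/last-3 means ROI: {first_last_means:.2f} WCPM/week")
print(f"OLS ROI:                {ols_slope:.2f} WCPM/week")
```

As the text notes, the OLS slope uses all of the data and is therefore less sensitive to an unusually high or low first or last score than the 2-point approach.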
Example 3: Emily The following case illustrates how an MTSS model can facilitate better decisions about special education eligibility. Emily is a first grader at Kennedy Elementary School, a school that had been implementing an MTSS model for several years. She was found to fall within the at-risk range in reading based on a winter universal screening assessment, in which she scored low on measures of letter–sound fluency and word reading fluency. Based on a review of previous data and input from her kindergarten teacher, Emily had acquired some letter–sound correspondences for single letters and some letter combinations, but not all. She has struggled to acquire word reading skills; she has some success decoding VC and CVC words but is often slow, and has only learned a small number of short high-frequency words (and, the, I, and is). She is experiencing difficulty learning to read words beyond a CVC pattern. Her word reading difficulties are becoming increasingly apparent in first grade, where she is making a lot of reading errors and her fluency is very low. As a result, the school’s MTSS data-decision team decides to place Emily in one of the school’s Tier 2 evidence-based reading interventions that is focused on remediating word-level reading skills and providing practice to build accuracy and efficiency in reading words and text. The intervention takes place in groups of four students for 20 minutes per day, 4 days per week. Emily’s reading progress is monitored on a weekly basis using first-grade word reading fluency and CBM oral reading probes. After 6 weeks of intervention, the MTSS data-decision team examines Emily’s progress, as well as that of other students receiving tiered intervention, as part of their periodic
data review routine. Although her word reading had improved slightly (as indicated by a modest upward trend), her progress on both measures was below expectations in terms of both point and slope decision rules, which meant that her three most recent data points were below the goal line, and her trendline was well below the goal line. At this point, the team decided to intensify the intervention for Emily based on an error analysis of recent progress monitoring probes, feedback from the teacher responsible for implementing the intervention, and a brief word reading diagnostic measure. This information was used to determine how to intensify and individualize the intervention, which in some schools might be viewed as a Tier 3 intervention. At Emily’s school, there were two other first graders with similar skill needs who would benefit from an intensified word reading intervention, so a smaller, more targeted intervention group that included Emily was formed. The group continued to meet for 20 minutes per day, but another day was added so that intervention occurred 5 days per week. Progress monitoring continued using the same goal, but the team increased the frequency of data collection to two times per week to enhance the quality of subsequent decisions. At this point, while Emily is receiving the intensified intervention, the MTSS data-decision team might consider starting a formal referral process and evaluation for special education eligibility. Multiple data sources have shown that Emily has demonstrated an inadequate response to core instruction and a well-implemented Tier 2 intervention. The referral process might begin, but it is important that she is receiving intensified intervention directly targeting her skill needs while the referral is being processed and while she is waiting for a formal evaluation by the school psychologist. The school psychologist is part of the school’s MTSS data-decision team and is already familiar with Emily’s history and her data from universal screening and progress monitoring. If Emily is moved forward to an evaluation, the school psychologist has a head start in the process. Building on Emily’s progress monitoring data, which document her response to intervention, the school psychologist might assess her with a set of standardized tests of academic skills relevant to Emily’s area of difficulty. Because her reading problems occur at the word level, these measures would likely include phonological awareness, alphabetic knowledge, and word and pseudo-word reading efficiency. The school psychologist would also consider exclusionary factors as part of the evaluation1 (Fletcher et al., 2019).
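The point and slope decision rules applied in Emily's data review can be expressed as simple checks against the goal (aim) line: are the most recent points below the expected scores, and is the trendline's slope below the goal line's slope? The sketch below illustrates both rules with hypothetical data and a hypothetical goal; it is not drawn from Emily's actual scores or any published decision-rule software.

```python
# Sketch of the "point" and "slope" decision rules: are the three most recent data
# points below the goal (aim) line, and is the trendline's slope below the goal line's
# slope? Data, goal, and timeline are hypothetical, not Emily's actual scores.
import numpy as np

def goal_line(week, start_score, goal_score, total_weeks):
    """Expected score at a given week assuming linear growth toward the goal."""
    return start_score + (goal_score - start_score) * week / total_weeks

weeks = np.array([1, 2, 3, 4, 5, 6], dtype=float)
scores = np.array([12, 13, 12, 15, 14, 16], dtype=float)   # e.g., words read correctly
start, goal, total_weeks = 12, 30, 18

expected = goal_line(weeks, start, goal, total_weeks)

point_rule_below = bool(np.all(scores[-3:] < expected[-3:]))   # last 3 points below aim line
trend_slope = np.polyfit(weeks, scores, deg=1)[0]
goal_slope = (goal - start) / total_weeks
slope_rule_below = trend_slope < goal_slope

print(f"Last 3 points below goal line: {point_rule_below}")
print(f"Trend slope {trend_slope:.2f} vs. goal slope {goal_slope:.2f}: below = {slope_rule_below}")
```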
Additionally, evidence indicates that intervention for significant reading difficulties beginning in first grade is associated with stronger outcomes than when it begins in the second or third grades (Lovett et al., 2017); thus, the MTSS system also provided a way to potentially identify Emily for intensive, individualized support in first grade, much sooner than students are commonly referred for a suspected learning disability (i.e., third grade). Overall, the MTSS model established a system for effective instruction, early identification of risk, progress monitoring, intervention intensification, and data-driven decision making to improve outcomes.

1 For more information on considering exclusionary factors and learning disability identification within an MTSS/RTI framework, see the RTI Action Network Toolkit at www.rtinetwork.org/toolkit and Fletcher et al. (2019).
COMPREHENSIVE MTSS PROGRAMS Programs have been developed that integrate elements of MTSS in a comprehensive package including core instruction, supplemental intervention, and assessment. One example is SpringMath (sourcewelltech.org). Developed by Amanda VanDerHeyden, SpringMath is a comprehensive K–12 mathematics intervention system that links student assessment data to tailored mathematics instruction and practice activities. The system first uses universal screening to identify students who have, or are at risk for, math difficulties. Then, using a decision tree framework, the system utilizes the results of students’ assessments to recommend either (1) classwide interventions to address skill deficits demonstrated by most students in the class, or (2) individual interventions for struggling students. When classwide intervention strategies are indicated (i.e., when 50% or more of the class are identified as at risk), SpringMath generates lesson packets for skills identified by the assessment. Students with math difficulties are administered more specific assessments of critical math subskills (the specific measures are recommended by the decision tree system based on students’ performance on the universal screen). Based on students’ performance on the specific assessments, SpringMath generates intervention packets, in either one-to-one or small-group formats, targeting the skills identified through the assessment process. Lesson packets include instruction and example problems for direct instruction, activities and think-alouds for building conceptual understanding, and practice activities that often include CCC materials, guided practice, and math games. Student progress during the intervention is also monitored, and SpringMath indicates when students have mastered a target skill and are ready for the next skill in the sequence. The foundations of SpringMath rest on research in universal screening, supplemental intervention, subskill mastery measurement relevant to designing individualized mathematics instruction, linking assessment results to intervention recommendations, and progress monitoring. The development process for the mathematics subskill measures is described by VanDerHeyden and colleagues (2019), and evidence supporting the technical adequacy of the classwide screening process is reported in VanDerHeyden et al. (2019) and Solomon et al. (2022). Studies to date on the classwide and small-group intervention components indicate improvements in students’ mathematics skills (Codding et al., 2016; VanDerHeyden & Codding, 2015; VanDerHeyden et al., 2012), and studies on the overall efficacy of SpringMath as a package for improving mathematics outcomes are underway. Other programs, such as Enhanced Core Reading Instruction (ECRI; Fien et al., 2020), discussed in Chapter 5, are designed to improve Tier 1 reading instruction and provide an integrated framework for Tier 2 support. Programs like SpringMath and ECRI offer ways for schools to manage the multiple components of MTSS more feasibly. More programs like them will likely emerge in the coming years.
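The classwide-versus-individual branching described above can be illustrated with a much simpler rule than SpringMath’s full decision tree: if 50% or more of a class falls in the at-risk range on the screening measure, a classwide intervention is indicated; otherwise, at-risk students are routed to more specific assessment and individual intervention. The sketch below shows only that threshold logic; it is not SpringMath’s actual algorithm, and the cut score and student data are hypothetical.

```python
# Illustrative sketch of the classwide-vs-individual decision described above.
# This is not SpringMath's actual decision tree; the cut score and data are hypothetical.

def screening_decision(scores, at_risk_cut):
    """Return a recommendation based on the proportion of the class at risk."""
    at_risk = [name for name, score in scores.items() if score < at_risk_cut]
    if len(at_risk) >= 0.5 * len(scores):
        return "classwide intervention", at_risk
    return "individual intervention for at-risk students", at_risk

# Hypothetical screening scores (e.g., digits correct) for a small class
scores = {"A": 8, "B": 21, "C": 10, "D": 7, "E": 25, "F": 9}
decision, at_risk = screening_decision(scores, at_risk_cut=12)
print(decision, at_risk)  # classwide intervention ['A', 'C', 'D', 'F']
```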
SUMMARY AND CONCLUSIONS We have presented an “idealized” vision in this chapter of what MTSS can be. We fully acknowledge the challenge that schools face in implementing systems change, and many schools struggle to implement MTSS with fidelity and quality (Balu et al., 2015; D. Fuchs
& L. S. Fuchs, 2017; Ruffini et al., 2016). There is a need to simplify and streamline MTSS components. However, we have witnessed schools implement MTSS with remarkable quality, given the right combination of principal buy-in, school district support, and state-level technical assistance. Readers are encouraged to look at the work that some state departments of education have done in supporting MTSS in their state, such as Pennsylvania, Michigan, Colorado, Kansas, North Carolina, and others. Both universal screening and progress monitoring play crucial roles in an MTSS framework and are central to the decisions that would be made using an RTI approach. These assessment methods offer the foundations of the data-based decision making that is central to MTSS. The two forms of assessment work together to provide a very powerful set of tools that manage, direct, and facilitate the decisions that help schools provide effective instruction for all students, as well as guide intervention decisions for individual students. Technological enhancements and commercially available tools make both forms of assessment more feasible and manageable. Screening may include collecting measures implemented specifically for that purpose, or considering existing test data. Although the data collection process is a critical piece underlying effective MTSS models, the data are only as good as their interpretation and use in informing instruction. Too often, schools become enthralled with the collection of data and forget that the essential reason for collecting all that information is to facilitate better instructional decision making. Reliance on “clinical impression” or educator judgment alone, while a component of educational decision making, is insufficient to fully manage the detailed and systematic processes of MTSS. MTSS and RTI decision-making models are still developing. Although we have excellent tools and measures for our current implementation processes, exciting advancements and new methodologies are on the horizon. Evidence-based programs in reading, mathematics, and writing are accumulating. The importance of scientifically based core instruction and implementation fidelity is being increasingly realized, as are evidence-based methods to train, coach, and support teachers in improving implementation. Greater integration of Tier 2 intervention with Tier 1 core instruction will be a characteristic of new intervention programs, such as ECRI (Fien et al., 2020) and others (Stevens et al., 2020). Improvements to screening measures and processes to quickly and efficiently identify students at risk, along with measures that provide deeper diagnostic information, are likely to emerge. Advancements in the judicious, strategic use of specific subskill mastery measurement will enhance decisions for aligning and adapting interventions with students’ needs (e.g., VanDerHeyden et al., 2019). The linking of universal screening and progress monitoring measures to specific instructional programs is likely to increase as publishers of materials recognize the value of such measurement tools within MTSS contexts and for RTI-based decisions. Shapiro often ended his trainings on MTSS and RTI with a quote on the importance of the assessment data for decision-making purposes: “Without data, it’s only an opinion.” This statement elegantly captures the importance of the assessment and intervention processes presented in this text.
CHAPTER 9
Case Illustrations
In this chapter, examples illustrating the use of integrated assessment and intervention procedures described across this text are presented. Obviously, it would be impossible to offer examples of all of the intervention procedures described in Chapters 5 and 6. For a wider variety of examples across academic areas, interested readers are directed to articles by Lemons et al. (2014), Powell and L. S. Fuchs (2018), Powell and Stecker (2014), and Lembke et al. (2018); and texts by Zumeta Edmonds et al. (2019), Howell and Nolet (1999), Mayer (2002), Hosp et al. (2014), Rathvon (2008), Rosenfield (1987), Shinn et al. (2002), and Shinn and Walker (2010).
CASE EXAMPLES FOR ACADEMIC ASSESSMENT The first two cases illustrate the use of CBA in assessing the instructional environment and instructional placement. The first case is George, a fourth-grade boy referred by his teacher because of academic problems in the areas of reading and writing, as well as a high degree of distractibility. The case illustrates the assessment of all four areas of basic skills development—reading, mathematics, written language, and spelling, particularly for someone whose first language was not English. At the time of evaluation, George’s school was not using an MTSS model to provide instructional support. The second case, Brittany, illustrates the evaluation of a first-grade girl referred because of poor performance in reading and math. The case illustrates the direct assessment of early literacy skills that becomes necessary when a student does not reach first-grade instructional reading levels. Cases 3 and 4, Earl and Jessi, are examples of academic intervention within CBA. Progress monitoring is used to evaluate student responsiveness. Cases 5 and 6 illustrate the four-step model of direct academic assessment and intervention described in this text. In the first case, Ace, an assessment leads to the implementation of a folding-in (incremental rehearsal) intervention. The second case, Leo,
demonstrates how standardized measures of relevant academic skills can be integrated with CBA and CBM methods to directly inform intervention development. Each case also provides examples of how SSMM and GOM can work together for monitoring progress. Case 7, Sabina, provides an example of how the assessment process looks within the context of an MTSS model. Each assessment case is presented in the form of a report. Completed teacher interview, observation, and assessment forms for most of the cases are also included, along with narrative material describing the forms. Each case is based on actual assessment activities conducted by graduate students in school psychology. Clarissa Henry, Kate Tresco, Hillary Strong, Rebecca Lemieux, Amy Kregel, and Melissa Ruiz were graduate students in Ed Shapiro’s Academic Assessment course in the School Psychology program at Lehigh University. An additional case is provided by Melina Cavazos and Zhiqing Zhou, who were graduate students in my (Clemens) Academic Assessment course in the School Psychology program at Texas A&M University.
Case 1: George

Name: George
Birth date: July 3, 1998
Chronological age: 10 years, 3 months
Evaluator: Clarissa Henry1
Grade: 4
Teachers: Mrs. Pickney, Mrs. Bartram (ESL)
School: Harriet
Report date: May 9, 2009
Background Information George, a fourth-grade student, comes from both Chinese and Cambodian backgrounds. Although he has lived in the United States his entire life, his first language remains Mandarin, the predominant language spoken in his home. Over the past 5 years, he has attended six different elementary schools. Due to academic difficulties, he was retained in second grade, during which time he was also referred to the instructional support team (IST). On the referral request, the teacher reported that although George’s English vocabulary is limited (i.e., he does not know the meaning of many words he encounters), he is often able to decode the words fluently. George continues to struggle with learning the English language and has a limited knowledge of many English words and sayings. Reading comprehension and written expression are particular areas of difficulty for him. In addition, George’s teachers report that he is distracted easily and has difficulty concentrating. Currently, he works with the instructional support teacher twice per week, as well as with an English as a Second Language (ESL) teacher (Mrs. Bartram) 5 days per week. Both of these teachers make a specific effort to focus on George’s writing, reading, and responses to literature.
Assessment Methods
• Structured teacher interview and Academic Performance Rating Scale (APRS)
• Direct classroom observation—Behavioral Observation of Students in Schools (BOSS)
• Student interview
• Review of student’s permanent products
• Direct assessment of reading, mathematics, and spelling

1 Many thanks to Clarissa Henry, whose case report conducted as a doctoral student in the School Psychology program at Lehigh University was the basis for this example.

Assessment Results: Reading
TEACHER INTERVIEW AND APRS
Prior to the interview with Mrs. Pickney, George’s primary teacher, the APRS was completed. Outcomes on this measure showed that, compared to other fourth-grade boys, George had significantly lower scores in academic productivity, academic success, and impulse control. According to his teacher, compared to his classmates, George was rated as completing 80–89% of his math assignments but only 50–69% of language arts assignments. Although his completion rate in math is acceptable, the accuracy of his work in both math and language arts is poor, rated as between 0 and 64%. His written work was also reported as poor in quality and often completed in a hasty fashion. Reading instruction is provided primarily through George’s work with the ESL and IST teachers. When interviewed, Mrs. Pickney reported that in the general education class, reading is taught primarily from novels for which grade levels have been established. George is currently placed in the level “N” books, which represent material at the third- and fourth-grade levels. Although Mrs. Pickney divides her class into three reading groups, George’s performance is so far below that of his peers that he receives directed reading instruction primarily from the ESL teacher every day and is not part of any of the classroom reading groups. The reading period within the class is divided among individual seatwork, small-group reading, and whole-group instruction. However, George’s instruction occurs mostly in one-to-one and small-group activities during ESL periods that are provided within the general education classroom while reading instruction is being delivered to the entire class. Some contingencies (receiving a “star”) for homework completion and other academic tasks are used regularly by Mrs. Pickney; those children earning 15 stars by the end of the week are permitted to select a prize from a grab bag. George participates in these activities along with his classmates. According to Mrs. Bartram, his ESL teacher, George’s oral reading skills are somewhat better than those of his peers. Comprehension, however, is much worse than that of others in his group. On a brief behavior rating scale, Mrs. Pickney indicated that George is above satisfactory for oral reading ability, attending to others when reading aloud, and knowing the place in the book. However, he does an unsatisfactory job of volunteering and often gives responses that are inaccurate and unrelated to the topic. DIRECT OBSERVATION
Data were collected during observations in the regular classroom and in the remedial reading program using the BOSS (see Table 9.1). Observations were conducted for a total of 20 minutes in the general education classroom, while George was engaged in small-group guided reading and independent seatwork at his desk or a computer. He was also observed for 18 minutes during his ESL reading group, when he was working individually on a report based on the group’s reading of a book on Native Americans. Comparison data were obtained by using a randomly selected peer once every minute. During the whole-group guided reading lesson in Mrs. Pickney’s class, children were expected to pay attention to the teacher, listen to others read, follow along in their books,
TABLE 9.1. Direct Observation Data from the BOSS Collected during Reading for George

                                      Percentage of intervals
Behavior                          George (total        Peers (total        Teacher (total
                                  intervals = 64)      intervals = 16)     intervals = 16)
Classroom
  Active Engagement                    23.0                 56.0
  Passive Engagement                   37.5                 25.0
  Off-Task Motor                       28.0                  6.3
  Off-Task Verbal                       0                    6.3
  Off-Task Passive                      6.3                  0
  Teacher-Directed Instruction                                                  18.8
ESL reading group
  Active Engagement                    47.3                  6.4
  Passive Engagement                   26.3                  7.1
  Off-Task Motor                        5.2                  7.1
  Off-Task Verbal                       0                   14.2
  Off-Task Passive                      0                   14.2
  Teacher-Directed Instruction                                                  50.0
and read aloud during their turn. Outcomes of the observation revealed that George’s peers were far more actively engaged than he was. For much of the time, George was off task, primarily during the computer time, during which he played with the mouse, rubbed his face, and put his head down rather than work on the assigned activity. In contrast to George’s behavior during the whole-group activity, he showed substantially higher active engagement than his peers when he was working within the ESL small-group setting. Throughout the activity, Mrs. Bartram paid high levels of attention to George, as reflected in the percentage of teacher-directed instruction, and consistently checked with him to see that he was completing the assigned independent work. George showed minimal levels of off-task behavior—levels that were far lower than those of his peers. Overall, the observational data in reading support the teacher reports during the interview that George does much better in small-group and one-to-one instructional settings compared to whole-class instruction. STUDENT INTERVIEW
George was interviewed after the observations were conducted and was asked questions about his assignment. Results of the interview revealed that George understood the assignment and recognized the expectations of the teacher. However, George believed that he could not do the assignment when it was distributed. Although he really found the work interesting, he reported that he did not understand the importance of the assignment. When asked to rate how much time he got to complete his assignments on a scale from 1 (not enough) to 3 (too much), George reported that the amount of time given was usually too much. He did feel that his teacher tended to call on him about as much
as other students in class. George also reported that he understood what to do if he was confused and how to seek help if he could not answer parts of the assignment. In general, the interview with George suggested that he understood his difficulties in completing reading tasks. George also seemed to have a good understanding of the classroom rules and knew the means by which to access help when it was required. DIRECT ASSESSMENT
Administration of oral reading probes from books at the fourth- (starting point), third-, second-, and first-grade levels showed that George was at an instructional level for fluency within the fourth-grade book, where he was currently placed. In terms of instructional levels based on accuracy, George made relatively few word reading errors across all passages, resulting in a high reading accuracy rate. Although George attained acceptable oral fluency within third-grade material, his comprehension levels were below instructional levels with second-, third-, and fourth-grade material. While reading, George would stop and ask for the meaning of words that he was able to successfully read. Outcomes of the checklist of qualitative features of reading (see Figure 9.1) showed that George read at a steady pace. He attempted to decode unfamiliar words, using effective strategies for blending and synthesizing syllables. Throughout the reading, George read with minimal expression. He often lost his place while reading and began to use his pencil as a pointer. When answering questions, George had substantial difficulty with inferential questions. For those questions requiring recall of literal information, George did not use any strategies to retrieve or locate the information in the passage, although he had access to the passage when questions were asked. REVIEW OF WORK SAMPLES
George’s performance on worksheets linked to the state standards was consistent with teacher reports and the direct assessment. In the area of reading comprehension, George was unable to use ideas presented in stories to make predictions. He also struggled with vocabulary and had difficulty understanding the concepts of action verbs and contractions. Examination of these worksheets did show that George could distinguish between different types of literature.
Assessment Results: Mathematics TEACHER INTERVIEW
In the teacher interview for mathematics, Mrs. Pickney indicated that she focuses instruction on the state curriculum standards but uses the Scott Foresman/Addison-Wesley curriculum as well. All students in the class, including George, are taught at the same fourth-grade level and taught as a large group. Approximately 1 hour per day is allotted for math. Daily lessons include direct instruction, individual seatwork, and occasionally small-group work. No specific contingencies are used for completion of work, but students do receive stars for completing homework. Student progress is assessed about once per week using the district standards assessment. According to George’s teacher, the class does not focus largely on basic computational skills, but instead concentrates instruction on areas of mathematical problem solving, such as
Qualitative Features of Good Reading

Student: George    Date: 4/15/09    Grade: 4    Instructional level assessed: 3
Teacher: Mrs. Pickney    School: Harriet

Each feature is rated Yes, Mixed, or No.

1. Is highly fluent (speed and accuracy).
2. Uses effective strategies to decode words.
   • Effective word attack
   • Context
3. Adjusts pacing (i.e., slows down and speeds up according to level of text difficulty).
   • Of word(s)
   • Syntax (word order)
   • Semantics (word meaning)
4. Attends to prosodic features.
   • Inflection (pause, voice goes up and down)
   • Reads with expression
   • Punctuation (commas, exclamation points, etc.)
   • Predicts level of expression according to syntax
5. Possesses prediction orientation.
   • Seems to look ahead when reading
   • Reads at a sentence or paragraph level
6. Self-monitors what they are reading.
   • Self-corrects if makes meaning distortion errors
7. Makes only meaning preservation errors.
   • More errors that preserve meaning (e.g., house for home)
   • Fewer meaning distortion errors (e.g., mouse for house)
8. Automaticity on reread words.
   • Words that appear throughout text are read automatically (e.g., become “sight words”)

FIGURE 9.1. Qualitative Features of Good Reading checklist for George.
Academic Skills Problems
426
fractions, positive and negative numbers, decimals, probability, and geometry. Mrs. Pickney did indicate that although she is unsure about George’s current computational skills, his performance in math instruction is quite variable. Mrs. Pickney rated George’s behavior in math as mixed. Although he has satisfactory performance in areas such as volunteering answers, giving correct answers, attending to others, and handing in homework on time, he is below satisfactory in many other areas related to attention, such as knowing his place, staying on task, and completing assignments on time. DIRECT OBSERVATION
Data were collected through direct observation in one 43-minute period of math instruction in the classroom (see Table 9.2). During the observation, the teacher reviewed the previous night’s homework, using a game-like activity of matching answer cards to their own problems, and provided direct instruction on fractions. Peer-comparison data were again collected by random selection of peers on every fifth observation interval. The direct observation data suggested that George demonstrated levels of active and passive engagement similar to those of his peers. However, he had much higher levels of off-task motor and passive behavior. His off-task motor behavior consisted of playing with his face, hands, clothes, and pencils. During the lesson, he continually placed his head on the table and gazed around the room. Mrs. Pickney had to remind him on several occasions to sit up straight and pay attention. When he prematurely put his work away, the teacher instructed a nearby peer to help George with the activity. In addition, the direct observation data revealed that George was somewhat more off task during math than reading. Mrs. Pickney noted that George’s behavior during this observation was typical of the type of behavior she has observed during most lessons. STUDENT INTERVIEW
Following completion of the mathematics lesson in which he was observed, George indicated that he was unsure whether he understood what was expected of him. In particular, George indicated that he had difficulty with division, and while he understood the
TABLE 9.2. Direct Observation Data from the BOSS Collected during Math for George

                                      Percentage of intervals
Behavior                          George (total        Peers (total        Teacher (total
                                  intervals = 64)      intervals = 16)     intervals = 16)
  Active Engagement                    10.0                  8.8
  Passive Engagement                   47.0                 55.9
  Off-Task Motor                       21.6                  5.9
  Off-Task Verbal                       2.9                 14.7
  Off-Task Passive                     23.7                  8.8
  Teacher-Directed Instruction                                                  70.6
specific assignment given and liked math overall, he sometimes felt rushed in getting his work done. George did appreciate help and was willing to ask for it from a peer if he struggled with problems. DIRECT ASSESSMENT
Computation skills were assessed in addition, subtraction, multiplication, and division. Given that his teacher could not speculate on the computation skills that George did or did not know, the examiner began the assessment with probes likely to be at an instructional level for students in fourth grade. In addition to computation, probes of concepts–applications of mathematics at the third- and fourth-grade levels were administered. Results of the assessment are provided in Figure 9.2. They indicated that George had either mastered, or performed within the instructional range on, a number of fourth- and fifth-grade objectives involving multiplication and division. However, subtraction and division involving regrouping were difficult for George. Watching George complete subtraction problems suggested that although he understood the process, he lacked fluency. On concepts and applications probes, he was unable to successfully complete any of the word problems, even when they were matched with graphic representations. He also did not correctly answer problems involving fractions as whole numbers, wrote cents using a “$” sign, and had problems with sequencing and with questions related to graphs that required computation. REVIEW OF WORK SAMPLES
Examination of George’s recently completed mathematics worksheets from the curriculum revealed that George could write and represent decimals. He also appeared to understand the basic concepts of parallel and perpendicular lines, as well as shapes. More errors were observed in areas involving fractions, two- and three-dimensional shapes, and concepts of symmetry. On fraction and decimal worksheets, George did not know how to convert fractions into decimals.
Assessment Results: Writing TEACHER INTERVIEW
Mrs. Pickney periodically assigned writing tasks, which included sentence writing and creative writing. According to his teacher, George has difficulty with both written and oral forms of expression and tends to use fragmented sentences. His teacher also reported that he has difficulties with writing mechanics, including punctuation, grammar, spelling, handwriting, and capitalization. Mrs. Pickney noted that George often works very slowly compared to other children in his class.
When asked, George indicated that he enjoys writing. Although he understands assignments and feels he can do them, George indicated that he really did not know what his teacher expected of him in writing.
Data Summary Form for Academic Assessment

Child’s name: George    Teacher: Mrs. Pickney, Mrs. Bartram    Grade: 4
School: Harriet    School district: Bethel    Date: 4/13/09

MATH—SKILLS
Curriculum series used: Standards curriculum and SF/AW
Specific problems in math: Fractions, word problems, geometry
Mastery skill of target student: Unknown
Mastery skill of average student: Unknown
Instructional skill of target student: Unknown
Instructional skill of average student: Unknown
Problems in math applications: Fractions, word problems, embedded word problems

Results of math probes:

Probe type                                                   Digits         Digits           % problems    Learning level
                                                             correct/min    incorrect/min    correct       (M, I, F)
Addition: 2 addends of 4-digit numbers with regrouping           19.5            0.9              80         Mastery
Subtraction: 2 digits from 2 digits, no regrouping               26.0            0               100         Mastery
Subtraction: 2 digits from 2 digits, with regrouping              9.5            0               100         Instructional
Subtraction: 3 digits from 3 digits, with regrouping              8.5            0.5              86         Frustrational
Multiplication: Facts 0–9                                        53.0            2.4              96         Mastery
Multiplication: 2 digits x 1 digit, with regrouping              23.0            0.3              96         Mastery
Multiplication: 2 digits x 2 digits, with regrouping             14.25           1.2              67         Instructional
Division: 2 digits/1 digit, no regrouping                        11.0            5.8              68         Instructional
Division: 2 digits/1 digit, with regrouping                       2.5           24                 0         Frustrational

Concepts–Applications           Number of correct responses    Percentage of problems correct
Applications—3rd grade                      15                              66
Applications—4th grade                      12                              61
None completed for this area

STUDENT-REPORTED BEHAVIOR
Understands expectations of teacher: Not sure
Understands assignments: Yes
Feels they can do the assignments: Yes
Likes the subject: No
Feels they are given enough time to complete assignments: No
Feels like they are called upon to participate in discussions: No

FIGURE 9.2. Math probe and interview data for George.
DIRECT ASSESSMENT
A total of three story starters were administered to George, and his responses were scored in terms of the total number of words written. His stories all fell below instructional levels for a fourth grader. For the first story starter, which was about basketball player Yao Ming, George wrote 33 words in 3 minutes, which fell within instructional levels for a second grader. However, instead of a story, George provided descriptive information only and did not show much creativity (see Figure 9.3). Although his capitalization and spelling were mostly correct, the words he used were fairly simple, his punctuation was inconsistent, and his grammar was immature. To examine whether George was able to express his ideas without writing, he was asked to tell the evaluator a story in response to a story prompt. George’s response, again, included only factual information and not a story. When asked to write a second story, George wrote 36 words in 3 minutes, again placing him at the instructional level for second graders.
FIGURE 9.3. Writing probe for George.
REVIEW OF WORK SAMPLES
A review of work samples supported Mrs. Pickney’s observations that George’s writing was quite immature and lacked depth. He had a difficult time formulating ideas into paragraphs and instead made a list of ideas. He also had problems using correct tenses as well as articles. Words that were not spelled correctly were written phonetically, consistent with George’s skills in decoding. George’s printing was legible and showed good spacing. There was no evidence that George was able to use cursive writing.
Assessment Results: Spelling TEACHER INTERVIEW
Mrs. Pickney uses a list of the 500 most commonly used words for the purpose of teaching spelling. Although time is not specifically set aside for spelling instruction, students are given spelling packets and work independently to complete them within the week. A pretest is given on Wednesday and a final spelling test on Friday. Students are awarded stickers for accurate spelling assignments. Mrs. Pickney indicated that George does not complete his spelling packets and is behind by approximately three packets. STUDENT INTERVIEW
Although George stated that he is a good speller, he reported that he does not always understand the assignments. Often, he does not know how to do the spelling packets, which accounts for the fact that many are not completed.
DIRECT OBSERVATION
No observations of student behavior were conducted for this skill area. DIRECT ASSESSMENT
The evaluator assessed George in spelling by asking him to spell words presented from fourth-, third-, and second-grade levels. Table 9.3 reports the outcomes of the assessment process. George appeared to be instructional at a second-grade level based on correct letter sequences per minute. However, at the third- and fourth-grade levels, he spelled only 29% and 24% of the words correctly, respectively. George’s common errors included not differentiating c and s, k and c, b and f, and t and d. Medial vowels were often omitted, and double consonants were not used correctly. REVIEW OF WORK SAMPLES
George’s recent weekly spelling tests were examined. Although George tended to get many of these words correct, the words were rather simple, involving mostly three- and four-letter words. Given George’s strong decoding skills, it is not surprising that he was successful in many of these weekly spelling tests. The spelling words assessed for this evaluation were more complex and revealed substantial deficiencies in George’s spelling performance.
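Table 9.3 (below) reports George’s spelling performance as correct letter sequences per minute (LSC/min). As a rough illustration of how that metric can be computed—an approximation only, since hand scoring aligns insertions and omissions more flexibly—the sketch below compares each response to the target spelling position by position, with word boundaries counted as sequence endpoints. The word list and the 2-minute probe length are hypothetical.

```python
# Approximate scoring of correct letter sequences (LSC) for a spelling probe.
# Hand scoring aligns omissions/insertions more flexibly; words and timing are hypothetical.

def correct_letter_sequences(target, response):
    t = " " + target.lower() + " "   # boundary markers count as sequence endpoints
    r = " " + response.lower() + " "
    length = max(len(t), len(r))
    t, r = t.ljust(length), r.ljust(length)
    return sum(1 for i in range(length - 1) if t[i] == r[i] and t[i + 1] == r[i + 1])

# Hypothetical dictated words and a student's responses from a 2-minute probe
items = [("train", "tran"), ("little", "littel"), ("house", "house")]
total_lsc = sum(correct_letter_sequences(t, r) for t, r in items)
words_correct = sum(t.lower() == r.lower() for t, r in items)
minutes = 2
print(f"LSC/min = {total_lsc / minutes:.1f}; "
      f"% words correct = {100 * words_correct / len(items):.0f}")
```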
TABLE 9.3. Spelling Probe Data for George

Grade level of probe    LSC/min    % words correct    Level (M, I, F)
2                          43            47            Instructional
3                          33            29            Frustrational
4                          31            24            Frustrational

Note. LSC, letter sequence correct.

Conclusions George, a student identified for support as an emergent bilingual student, was referred because of difficulties primarily in reading and writing. Assessments suggested that George is currently at a fourth-grade instructional level in oral reading; however, his comprehension lags much further behind. He appears to be appropriately placed in curricular materials, although he remains primarily a word-by-word reader, which certainly impedes his comprehension development. A review of work samples shows that George’s comprehension problems occur mainly with inferential questions, and understanding the meaning of key vocabulary and action verbs. In math, George has mastered most computational objectives at a fourth-grade level but still has some difficulty with regrouping skills across all operations. In applications, he shows some problems in the areas of geometry, word problems, and fractions. His difficulties with word problems can be at least partially explained by his difficulties with reading comprehension.
Writing remains a significant problem for George. His work lacks creativity, length, and depth. His writing quality is underdeveloped, and mechanics such as punctuation are inconsistent. In spelling, he was noted to be instructional at the second-grade level. Although he is successful in his spelling instruction in class, the assessment indicated that he struggles with words more consistent with fourth-grade expectations. Behaviorally, George was noted to be somewhat less attentive in reading- and language-related activities when taught in large groups. In contrast, he demonstrated higher levels of task engagement during small-group and one-to-one instruction. These findings were fairly consistent across different subject areas.
Recommendations Specific recommendations related to reading, writing, and spelling are made given that these areas appear to be the most problematic for George. In addition, a general concern is the need for teachers to have better insight into George’s current level of language development. 1. Language development. George’s lack of familiarity with vocabulary terms and phrases that commonly occur in academic texts is likely a primary reason for his difficulties with reading comprehension, word-problem solving, and writing. It is also possible that George’s developing understanding of English is a reason he tends to be more off task in large-group instruction. Continued focus on developing George’s receptive and expressive language, especially pertaining to academic vocabulary and usage, is recommended. To guide continued language support, it is recommended that an updated determination of George’s level of cognitive academic language proficiency (CALP) be obtained. CALP reflects the level of language proficiency needed by emergent bilingual students to do well on academic tasks in English. It is possible that George may have developed basic interpersonal communication skills (BICS) in English, but that his CALP is not sufficiently developed to allow him to benefit from language instruction. It is also recommended that an assessment examine George’s proficiency in Mandarin. 2. Reading. Reading comprehension is the primary area of reading difficulty for George, influenced mainly by George’s developing English-language skills, particularly vocabulary knowledge. Several strategies are recommended to improve George’s reading comprehension that can occur before reading, during reading, and after reading.
• Prereading strategies: Before reading, the most important thing to do is to increase George’s knowledge of the words and information relevant to what he is about to read.
| Preteach relevant vocabulary and background knowledge: Before reading a passage or starting a new unit, teach the meaning of vocabulary terms or phrases that will be important for understanding the text. Teach vocabulary using a brief definition that describes what is important about the word, and have George practice using the word in sentences or evaluating whether it is used correctly. Word webs and the incremental rehearsal (i.e., folding-in) technique can be used to increase his vocabulary knowledge. Additionally, introduce George to background knowledge, facts, and concepts that will be important for understanding an upcoming passage. Increasing his vocabulary and background knowledge is particularly important for understanding texts in science and social studies.
• During-reading strategies: Strategies used during reading should be those that reinforce and clarify new vocabulary terms, help George make inferences to identify new word meanings and make connections within the text, and increase his self-monitoring of his reading comprehension.
| Oral reading: In intervention settings (such as small-group or one-to-one instruction), George should read orally. Oral reading will allow the teacher to identify reading errors, missed punctuation, or phrases that are not fluent (indicating difficulties with comprehension). These problems can go undetected if he reads silently. Additionally, further oral reading practice can help George better self-monitor his comprehension compared to reading silently, and can also help improve his prosody and expression.
| Reinforce target vocabulary and teach unknown word meanings: While George is reading, point out when targeted vocabulary terms occur in the text and discuss how they are used. When other unknown words or phrases are encountered, explain what they mean and use them in a sentence. This is also important when previously learned words are encountered that have a different meaning for a specific context.
| Inferencing: Ask questions that help George make connections within the text, connect events to his knowledge or experiences, and interpret what the author is saying. Help George use surrounding text to infer the meaning of unknown words. Pay attention to pronouns to make sure that George is connecting pronouns to the correct person or thing the pronoun refers to. Understanding connectives and conjunctions like however, although, and nevertheless is also important for inferring meaning in the text.
| Teach comprehension self-monitoring and fix-up strategies: Help George self-monitor his comprehension and recognize when his understanding breaks down. Paragraph breaks provide good opportunities to stop and check for understanding by having him summarize what he read and/or identify the main idea of the paragraph. Also, strategies like “Click or Clunk” can be used to help George detect sentences that do not make sense. When George identifies a lack of understanding in the passage, help George “fix up” his comprehension by rereading, identifying unknown vocabulary or phrases, or recognizing when his background knowledge is insufficient.
| Rereading challenging portions of text: When support is provided for George’s understanding in a section of text, have George reread the challenging portion. This will help George practice applying knowledge of vocabulary terms or improved understanding in context.
• After-reading strategies: After reading, support George’s comprehension and formation of new knowledge.
| Summarization and main idea identification: After reading, teach George to generate summary or main idea statements that, in his own words, explain what the passage was about and what he learned. Encourage his use of new vocabulary terms when summarizing. For longer texts, George can write a summary paragraph that will help him practice his reading comprehension as well as provide writing practice.
| Review target vocabulary: After reading, briefly review any vocabulary terms or phrases that were targeted before reading or occurred in the passage.
Improving George’s language comprehension skills is one of the most important ways to improve his reading comprehension.
3. Writing. Efforts to improve George’s writing should concentrate on both mechanics and conceptual development in the writing process.
• Frequent opportunities to practice writing skills are needed.
• The SRSD approach to writing can be used, which involves teaching prewriting strategies for generating and organizing thoughts and ideas, strategies during writing for using his notes to write in an organized way, and strategies after writing for revising and editing.
| Instruction should teach George about the parts of a story, including text structure for narrative and expository texts.
| Prior to writing, George should complete a graphic organizer or story web to generate thoughts and organize his ideas. Because interviews with George suggest that he can be highly motivated by some topics (e.g., his excitement when writing about the basketball player Yao Ming), an inventory of his interests should be developed.
| During writing, George should be prompted to use his notes to write in an organized way.
| Direct instruction in the editing and revision process is needed. Mrs. Pickney may want to teach George specifically about important aspects of revision, such as locating and diagnosing a problem, determining the type of changes needed in writing, and using an editing or proofreading checklist to monitor his work. His development of language skills in his ESL support should be connected to improving his grammar and syntax in his writing.
| Use of a self-monitoring checklist to evaluate his own writing may allow George to better recognize the problems with his work.
4. Spelling. Recommendations in spelling involve trying to increase George’s vocabulary and capitalizing on his already established skills in decoding.
• The CCC technique can be used to support George’s spelling practice.
• Teaching common morphemes (i.e., common prefixes and suffixes) in terms of how they are spelled and what they mean when they occur in words may help George with both spelling and vocabulary development. This will be important for the academic vocabulary terms he will encounter in text now and in the future that often include prefixes and suffixes.
• Specific instruction should be provided on long- and short-vowel sounds, differences between hard consonants such as c and k, and the use of rules such as silent-e.
Comments on George’s Case This case illustrates several issues related to academic assessment. It demonstrates the first two steps of the four-step model of academic assessment: assessing the academic environment and instructional placement. It also illustrates how an academic assessment can be conducted and important information obtained by using only curriculum-based measures, interviews, and direct observations. George was referred by the teacher due primarily to academic failure in reading and written language. Interview data with the teacher suggested that the psychologist would
find a child who was mostly on task but had poor academic skills in reading and writing. Results of the direct observations also revealed some level of inattention, predominantly in language-related activities involving whole-group instruction. In George’s case, the data did not suggest that his inattention was the cause of his academic skills problems; rather, his poor academic skills in certain areas and developing English-language skills were the reasons for his inattention. It is likely that he had great difficulty when being instructed through the typical whole-group methods by which most reading and language arts instruction was delivered. Regarding academic skills, results of the administration of skill probes showed that George was being instructed at the level at which he should have been reading. However, his performance in oral reading fluency was viewed as potentially masking his much poorer level of comprehension. Recommendations to focus George on comprehension skills and to capitalize on his good skills in decoding were made. Importantly, George’s vocabulary and language skills were primary areas on which intervention should focus. Strategies for before, during, and after reading were aimed at improving George’s understanding of text. Improving George’s reading comprehension, combined with reading practice, may help improve his reading prosody and expression, thus reducing the “word-by-word reading” that was reported. In the area of writing, again, George was found to have substantial difficulties in language usage. Specific and direct attention to the use of strategies likely to improve his overall conceptualization of written communication was needed.
Case 2: Brittany

Name: Brittany
Birth date: June 18, 2002
Chronological age: 6 years, 11 months
Evaluator: Kate Tresco2
Grade: 1
Teacher: Mrs. Metz
School: Bell Elementary
Report date: May 24, 2009
Background Information Brittany is a first-grade student who recently moved to the school district at the end of March. Her teacher reported that Brittany has been struggling in all academic areas. In addition, she has difficulty following directions. She is receiving additional support from a reading specialist three times per week, along with math tutoring from an instructional aide twice per week. The purpose of this evaluation is to determine Brittany’s strengths and weaknesses, and to make recommendations for appropriate objectives and goals. As reported by her teacher, Bell Elementary is in the preliminary stages of developing an MTSS model, but no model was in place at the time of the evaluation.
Assessment Methods
• Review of records
• Teacher interview
• Student interview
• Direct classroom observation
• Direct assessment of reading, math

2 Many thanks to Kate Tresco, whose case report as a doctoral student in School Psychology at Lehigh University was the basis for this example.

Assessment Results: Reading
TEACHER INTERVIEW
The reading portion of the teacher interview showed that Brittany is currently placed in the Houghton Mifflin series, level E (grade level 1.1). By contrast, the average student in her class is placed at level I (grade 1.4) of the series. Mrs. Metz reported that Brittany has difficulty reading orally and does not work independently. She recognizes very few words automatically (i.e., “by sight”) and fails to use appropriate decoding strategies to sound out unknown words. Reading instruction is allotted 90 minutes each day, with most instruction involving the whole class. In addition, Brittany is part of a small guided reading group led by Mrs. Metz at least three times per week. Currently, there are six different levels of groups in the class, and Brittany is assigned to the lowest level. In comparison to her peers, Brittany’s oral reading, word recognition, and comprehension are far worse. Mrs. Metz reported that when compared to others in her small guided reading group, Brittany reads orally about the same but has much worse word recognition and comprehension. In addition, Mrs. Metz noted that Brittany has difficulty staying on task, does not complete assigned work on time, and rarely hands in any complete homework. DIRECT OBSERVATION
Brittany was observed for 15 minutes using the BOSS during a period of large-group reading instruction. Students were asked to follow along in their books while Mrs. Metz read aloud. Periodically, Mrs. Metz stopped to discuss the story. Data were also collected on randomly selected peers who were observed every fifth interval (once every 60 seconds). Results of the observations are provided in Table 9.4. During these observations, Brittany’s levels of active and passive engagement were fairly equivalent to those of her peers. She did, however, show somewhat more off-task motor behavior. While other students were following the story with their finger or looking at
TABLE 9.4. Direct Classroom Observation Data from the BOSS Collected during Reading for Brittany

                                      Percentage of intervals
Behavior                          Brittany (total      Peers (total        Teacher (total
                                  intervals = 60)      intervals = 16)     intervals = 16)
  Active Engagement                    35.4                 41.7
  Passive Engagement                   35.4                 33.3
  Off-Task Motor                        8.3                  0
  Off-Task Verbal                       0                    0
  Off-Task Passive                     18.8                 16.7
  Teacher-Directed Instruction                                                  83.3
their book, Brittany was observed to be flipping pages inappropriately or fidgeting with articles of clothing. STUDENT INTERVIEW
Brittany was interviewed following the direct observations. During the interview, Brittany was quite distracted and wanted to discuss topics other than her work assignment. When questioned about her work during reading, Brittany indicated that she did not know what was expected of her or why she needed to do the assignment. Although she indicated that she liked to read, Brittany said that when asked to read a passage, she “only reads the words I know” and “just skips the hard ones.” DIRECT ASSESSMENT
Brittany’s reading skills were assessed first by the administration of a set of passages at the first-grade level. Her median oral reading fluency on those passages was 33 WCPM with eight errors (80% accuracy), which places her within a frustrational level based on district norms for the current time of year. She was unable to answer any comprehension questions correctly. In addition, Brittany’s high error rate was a reflection of her pattern of skipping many words that she did not think she could read. An examination of the checklist of qualitative features of reading (see Figure 9.4) found that she had great difficulty decoding, showed very poor understanding of material being read, and had great difficulty establishing any level of fluency in her reading. Due to Brittany’s below-grade-level performance on first-grade text, an assessment of prereading skills was conducted. Measures of Initial Sound Fluency, Letter Naming Fluency, Phoneme Segmentation Fluency, and Nonsense Word Fluency from the Dynamic Indicators of Basic Early Literacy Skills (DIBELS), Sixth Edition, were administered. Results of the assessment of prereading skills showed that Brittany reached appropriate benchmarks for a first-grade student in letter naming. Her skills in Initial Sound Fluency—identifying and producing the initial sound of a given word—were considered emerging for a student in the middle of kindergarten. Her skills in Phoneme Segmentation Fluency and Nonsense Word Fluency were within the emerging area of first grade. In particular, her performance on Nonsense Word Fluency, a skill that is one of the later prereading benchmarks, was nearly established. REVIEW OF WORK PRODUCTS
No work products were available to review for reading.
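As a brief illustration of how the oral reading figures reported above are derived, the sketch below applies standard CBM arithmetic—words correct per minute (WCPM) and accuracy—to a set of 1-minute passage readings and takes the median. The passage-level numbers are hypothetical; they are chosen only so that the median works out to 33 WCPM with about 80% accuracy, as reported above.

```python
# Scoring oral reading fluency (ORF) probes: words correct per minute and accuracy.
# Passage-level numbers are illustrative, chosen to match the reported median of 33 WCPM.
from statistics import median

def score_passage(words_correct, errors, seconds=60):
    """Return WCPM and accuracy for one timed passage reading."""
    wcpm = words_correct * 60 / seconds
    accuracy = words_correct / (words_correct + errors)
    return wcpm, accuracy

passages = [(30, 9), (33, 8), (38, 7)]  # (words correct, errors) per 1-minute passage
results = [score_passage(wc, err) for wc, err in passages]

median_wcpm = median(wcpm for wcpm, _ in results)
median_accuracy = median(acc for _, acc in results)
print(f"Median ORF = {median_wcpm:.0f} WCPM, accuracy = {median_accuracy:.0%}")
# With these numbers: Median ORF = 33 WCPM, accuracy = 80%
```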
Assessment Results: Mathematics TEACHER INTERVIEW
Brittany is currently placed in the first-grade book of the Everyday Mathematics curriculum series, the same level at which the entire first-grade class is placed. Instruction occurs for approximately 60 minutes per day. Typically, new material is introduced to the entire class, after which students practice problems independently and then review them. Mrs. Metz
Qualitative Features of Good Reading

Student: Brittany    Date: 5/1/2009    Grade: 1    Instructional level assessed: Pre-first
Teacher: Mrs. Metz    School: Bell Elementary

Each feature is rated Yes, Mixed, or No.

1. Is highly fluent (speed and accuracy).
2. Uses effective strategies to decode words.
   • Effective word attack
   • Context
3. Adjusts pacing (i.e., slows down and speeds up according to level of text difficulty).
   • Of word(s)
   • Syntax (word order)
   • Semantics (word meaning)
4. Attends to prosodic features.
   • Inflection (pause, voice goes up and down)
   • Reads with expression
   • Punctuation (commas, exclamation points, etc.)
   • Predicts level of expression according to syntax
5. Possesses prediction orientation.
   • Seems to look ahead when reading
   • Reads at a sentence or paragraph level
6. Self-monitors what they are reading.
   • Self-corrects if makes meaning distortion errors
7. Makes only meaning preservation errors.
   • More errors that preserve meaning (e.g., house for home)
   • Fewer meaning distortion errors (e.g., mouse for house)
8. Automaticity on reread words.
   • Words that appear throughout text are read automatically (e.g., become “sight words”)

FIGURE 9.4. Qualitative Features of Good Reading checklist for Brittany.
also uses small-group work and independent seatwork. During the independent seatwork time, Mrs. Metz often works one-to-one with Brittany. Brittany’s skills were reported as below expected levels. Mrs. Metz indicated that Brittany struggles with number concepts, can only count to 50, has trouble identifying the value of coins, can only tell time to the hour and half-hour, and has little skill in completing subtraction problems. She was reported as successful at some simple addition problems. STUDENT INTERVIEW
No opportunity to conduct a student interview with Brittany regarding math was available. DIRECT OBSERVATION
Two 15-minute observations of Brittany during math instruction were completed using the BOSS. In one observation of the entire group, Mrs. Metz introduced a new method to compute double-digit addition problems. During the second observation, students were asked to complete two worksheets independently. Mrs. Metz worked directly with Brittany during this time. Data for each of the observations are presented in Table 9.5. During the large-group setting, Brittany’s active engagement in completing her work was equivalent to that of
TABLE 9.5. Direct Observation Data from the BOSS Collected during Math Large-Group and Independent Work Periods for Brittany

                                      Percentage of intervals
Behavior                          Brittany (total      Peers (total        Teacher (total
                                  intervals = 60)      intervals = 16)     intervals = 12)
Large group
  Active Engagement                    37.5                 33.3
  Passive Engagement                   33.8                 66.7
  Off-Task Motor                       10.4                  0
  Off-Task Verbal                       6.3                  0
  Off-Task Passive                      2.1                  0
  Teacher-Directed Instruction                                                  75.0
Independent seatwork
  Active Engagement                    45.2                 58.3
  Passive Engagement                    3.2                 16.7
  Off-Task Motor                       22.6                  8.3
  Off-Task Verbal                       0                    8.3
  Off-Task Passive                     29.0                  0
  Teacher-Directed Instruction                                                  41.7
her peers, but she demonstrated lower levels of passive engagement. Typically, while Brittany attended to her required problems, she also engaged in off-task behaviors, such as looking around the room, fidgeting with articles of clothing, and talking to a peer instead of attending to the teacher as instruction was occurring at the board. During the independent seatwork activity, Brittany showed less active and passive engagement compared to her peers. Often, she either gazed around the room or fidgeted with her clothes when she was supposed to be working. In terms of the amount of material actually completed, Brittany finished only one worksheet during the observation, whereas the rest of her class completed both assigned and additional worksheets within the same time period. REVIEW OF WORK SAMPLES
A review of the worksheet Brittany completed during the independent seatwork activity showed that she was unsure how to apply the strategy she had been instructed to use. The worksheet provided place-value blocks for the 10's and 1's columns in two-digit addition problems. Brittany was able to complete the worksheet accurately only for problems in which the number zero was added. Her most common error was to add all of the place-value blocks together in the 1's column instead of distinguishing between the 1's and 10's columns.

DIRECT ASSESSMENT
Brittany’s skills with basic addition and subtraction facts were assessed. Results indicated that she was able to add single-digit problems, sums to 10, at a rate of only 1.6 correct per minute, and that she scored no problems correct on a probe of similar types of subtraction facts. Observation of her behavior while trying to do the problems indicates that she has some basic concepts of addition and used her fingers to count as a strategy. However, she had difficulty with counting skills and did not demonstrate that she had committed any math facts to memory.
Conclusions
Brittany was referred for poor academic performance in reading and math. According to her teacher, she was far behind in both skill areas. During both reading and math activities, Brittany appeared to be somewhat more off task than her peers. Much of her distractibility appeared to be due to her inability to complete her assigned academic work accurately. In reading, Brittany scored at the frustrational level in oral reading fluency when assessed at the first-grade level. As such, she was assessed on the prereading skills of phonemic segmentation, recognizing initial sounds in words, decoding nonsense words, and letter naming. She was found to be below the expected benchmarks for first-grade students in all skills except nonsense word decoding. Assessment of her letter–sound correspondence revealed that she could associate all letters with their most common sound with the exception of b, d, u, and h, and she also did not identify the sounds of several letter combinations (vowel and consonant digraphs), including ch, sh, th, ea, oo, ou, and oi. Brittany was unable to apply decoding skills when reading in context. In math, Brittany's performance suggested that she was at a frustrational level on even the most basic addition and subtraction facts.
Recommendations
A series of recommendations focused on intervention strategies to improve Brittany's skills in reading and basic math was made.

1. Reading. Brittany is demonstrating difficulty acquiring the word reading skills that would allow her to read words accurately and fluently when they appear in text. Strategies should include those that improve her foundational skills in phonemic awareness and alphabetic knowledge, teach her to use these skills to decode words, and provide frequent practice reading both individual words and connected text.
• First, direct instruction should teach any letter–sound correspondences and letter combinations that she may be missing. Gaps in letter–sound correspondence will impair her ability to decode words, and lack of knowledge of letter combinations becomes increasingly problematic as words grow more complex over time.
  – Direct instruction in sounding out words should continue and move beyond consonant–vowel–consonant (CVC) words. As she is successful, words should include CVCC words, followed by CCVC, CCVCC, and CVCCC words, and so on. When she encounters unfamiliar words in text, Brittany should be prompted to sound out as her first word-attack strategy, and affirmative or corrective feedback should be provided immediately. As she demonstrates success with sounding out, additional word-attack strategies can be introduced, such as reading words by analogy using the rime unit (e.g., "If I can read duck, I can read truck").
• The incremental rehearsal flashcard technique (i.e., so-called "folding in") can be used to develop Brittany's automaticity in reading words. Words should be those that are targeted in instruction and/or contain targeted letters or letter combinations. When she makes an error, Brittany should be prompted to sound out the word, and feedback and support should be provided immediately. This strategy should also include high-frequency words. Even when a word is not phonetically regular (i.e., not "wholly decodable"), Brittany should still be prompted to try sounding it out, and feedback should be provided to help her convert her partial decoding to the correct pronunciation. Take note of any words or spelling patterns that are particularly problematic and may require additional instruction or practice.
• Use a rereading activity in which Brittany reads a short passage aloud. To start, passages should be approximately 50–100 words in length, and she should read each passage three to four times. Frequent affirmative feedback should be provided as she reads words correctly (i.e., offer a simple "good" as she reads challenging words correctly), and error correction should be provided immediately when she makes errors or skips words. The emphasis in this activity should be on improving her reading accuracy, with a goal of reading the passage without an error by the end of the session. Avoid prompting Brittany to read "faster"; instead, focus her attention on reading carefully and accurately. Through repeated readings, Brittany's fluency should improve naturally. Take note of words that are problematic for her so they can be targeted in instruction or flashcard practice.
  – Continue to support her reading comprehension while word and text reading accuracy are targeted. After she reads a passage, ask Brittany a few questions that require literal and inferential comprehension, and have her summarize what she read to reinforce her comprehension skills. Additionally, work on expanding her vocabulary by talking about the meanings of any words or phrases she is unsure of.
2. Math. Intervention in mathematics should provide instruction and practice in improving her knowledge and fluency with number combinations (i.e., so-called “math facts”) in addition and subtraction. This instruction should begin with teaching Brittany a set of reliable, efficient counting strategies for solving math facts while she commits them to memory over time.
• Teach reliable and efficient counting strategies: Although the goal is for Brittany to eventually commit all math facts to memory, counting strategies will give her a dependable way to solve math facts correctly in the meantime. Counting instruction should be done with a number line and counting fingers as scaffolds. (A brief illustrative sketch of these two strategies appears after this list.)
  – Addition counting: Teach Brittany counting on from the larger addend (i.e., the "min" strategy) as a strategy for solving addition facts. With this strategy, Brittany learns to start with the larger number in the problem and count up the other number using a number line or her fingers; the answer is the number she ends on.
  – Subtraction counting: Teach Brittany the counting-up strategy for solving subtraction facts. Here, Brittany starts with the subtrahend (i.e., the second number in the problem) and counts up to the minuend (i.e., the first number) using a number line or her fingers. The answer is the number of hops on the line or the number of fingers used. This strategy is more reliable and efficient than counting down, because students tend to make more errors when counting backward.
• Systematic math fact practice: Reintroduce math facts in families and provide frequent practice with each. Begin with +/–0, followed by +/–1, then doubles (1 + 1, 2 + 2, 3 + 3, etc., and 10 – 5, 8 – 4, 6 – 3, etc.). Then teach the remaining facts in sets in which the sum and the minuend equal that number. For example, the 5 set would include 5 + 0, 4 + 1, 3 + 2, 5 – 0, 5 – 1, 5 – 2, 5 – 3, and 5 – 4. The 6 set would be targeted next, and so on. When learning new facts, Brittany should always be prompted to use the counting strategies she has learned (see above). As each fact or set is targeted, the following practice strategies can be used:
  – Flashcard drills such as incremental rehearsal ("folding in") can be used. Encourage Brittany to recall answers quickly, but prompt her to fall back on her counting strategies whenever needed.
  – Cover–copy–compare can be used as a practice strategy that allows Brittany to self-check her solutions. Independent practice can also make use of timed fact worksheets with an answer key that Brittany can use to score her work after the timer rings. Brittany can also graph her scores to watch her number of correct problems per minute increase over time.
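The two counting strategies recommended above can be expressed compactly. The following Python sketch is purely illustrative; it is not part of the original case materials, the function names are invented, and it simply traces the logic a tutor would model with a number line or fingers.

```python
# Illustrative sketch of the two counting strategies recommended above.
# Not from the case materials; function names are invented for this example.

def count_on_from_larger(a: int, b: int) -> int:
    """Addition ("min" strategy): start at the larger addend and count up
    the smaller addend, one hop (finger or number-line step) at a time."""
    answer, hops = max(a, b), min(a, b)
    for _ in range(hops):
        answer += 1
    return answer

def count_up_subtraction(minuend: int, subtrahend: int) -> int:
    """Subtraction (counting up): start at the subtrahend and count up to
    the minuend; the number of hops taken is the answer."""
    hops, current = 0, subtrahend
    while current < minuend:
        current += 1
        hops += 1
    return hops

print(count_on_from_larger(3, 8))    # start at 8, count up 3 hops -> 11
print(count_up_subtraction(11, 8))   # count up from 8 to 11 -> 3 hops
```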
Comments on Brittany’s Case This case represents an example of how direct assessment of academic skills can be used for a student whose reading skills are not instructional at the first-grade level. An interesting finding was that although she did not have well-developed reading skills, Brittany understands letter–sound correspondence and how to use letter sounds in decoding CVC words. The problem seemed to be that Brittany could not transfer the skills to more complex words or reading words in context. In math, Brittany again showed only rudimentary ability to apply any strategies she was being taught. Her failure to demonstrate basic addition or subtraction skills requires immediate interventions aimed at increasing her accuracy and automaticity. In addition,
she needs to be given more explicit instruction within the teaching process to ensure that she continues to use these skills as they are applied to other parts of the math curriculum. Finally, Brittany was more distractible than her peers across settings. However, the outcome of the assessment suggests that the distractibility was probably due to her poor academic skill development rather than an underlying deficit in attention. Overall, Brittany's case presents a problem frequently seen in referrals of first-grade students. Her low levels of reading and math performance require immediate and intensive intervention; otherwise, she is likely to be referred for retention and possibly for special education services. Indeed, one advantage of the type of assessment illustrated in this case is that if intensive interventions focused on Brittany's skills, together with frequent and systematic data-based decision making, fail to result in a positive outcome, a referral to determine her eligibility for special education would be warranted.
CASE EXAMPLES OF INTERVENTION AND PROGRESS MONITORING
The following cases are designed to illustrate the integration of intervention and progress monitoring. Space does not allow for case illustrations of even a portion of the many academic interventions covered in Chapters 5 and 6; readers are encouraged to examine the publications cited in those chapters for specific examples of the procedures of interest. Two types of intervention cases were chosen for presentation here. The first case illustrates a student whose difficulties called for a focus on improving fluency in reading; in particular, emphasis was placed on increasing opportunities to read connected text. The second case illustrates an intervention focused on both reading comprehension and math, with an emphasis on increasing skills in multiplication facts, division, fractions, and place value. In both cases, specific subskill and general outcomes forms of progress monitoring are used to evaluate student responsiveness.
Case 3: Earl

Name: Earl
Birth date: December 16, 2000
Chronological age: 8 years, 6 months
Evaluator: Hillary Strong
Grade: 3
Teacher: Mrs. Losche
School: Storr
Report date: May 25, 2009

Note. Many thanks to Hillary Strong, whose case report, completed while she was a graduate student in school psychology at Lehigh University, was the basis for this example.
Background Information
Earl is a third-grade student referred because of poor academic performance in the area of reading. In particular, a CBA of reading found that Earl is instructional at the second-grade level, although he is being instructed at the 3.2 level of the reading series. Mrs. Losche supplements Earl's instruction with drills from a phonics book assigned at the second-grade level. Beyond regular classroom instruction, Earl receives Title I services 5 days per week as well as additional remedial help through a special district-run program for students at risk for reading failure, although it was unclear what interventions were implemented as part of this program or how well they were implemented.
Given Earl’s low reading fluency but high reading accuracy rate in third-grade text, it was determined that intervention should primarily focus on increasing his opportunities to practice reading connected text. High reading accuracy indicates that Earl’s word- identification skills are strong at this time. It is possible that his low reading fluency may be the result of insufficient opportunities to read third-grade text, which is needed for improving his automaticity.
Goal Setting
A CBA was administered to determine Earl's baseline oral reading rate. Baseline probes taken from the third-grade level found that Earl read 42 WCPM, with few errors. A goal was then set for monitoring Earl's reading performance. In discussion with the school problem-solving team, all agreed that it was important to set an ambitious goal of 70 WCPM in third-grade reading passages across the 7 weeks of intervention remaining in the school year. Reaching this goal would require a gain of 28 words correct over the baseline of 42 WCPM, or 4 words correct per week across the remaining 7 weeks. The examiner discussed with the team that this goal was very ambitious and significantly greater than a typical rate of improvement for third graders; however, the team believed it was important to set an ambitious goal and adjust it if needed. Additionally, given Earl's high reading accuracy (i.e., there was no need to remediate word reading difficulties), it was possible that Earl could achieve this ambitious goal through increased opportunities to practice reading.
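The goal-setting arithmetic is simple enough to verify directly. The short Python sketch below uses only the figures reported above (baseline of 42 WCPM, goal of 70 WCPM, 7 weeks); it is offered as an illustration of how the expected weekly gain and an aim line can be derived, not as a prescribed procedure.

```python
# Illustration of the goal-setting arithmetic described above.
baseline_wcpm = 42   # Earl's baseline in third-grade passages
goal_wcpm = 70       # ambitious goal for the end of the intervention
weeks = 7            # weeks of intervention remaining in the school year

weekly_gain = (goal_wcpm - baseline_wcpm) / weeks
print(weekly_gain)   # 4.0 words correct per minute per week

# The same numbers generate an aim line for the progress-monitoring graph.
aim_line = [baseline_wcpm + weekly_gain * week for week in range(weeks + 1)]
print(aim_line)      # [42.0, 46.0, 50.0, 54.0, 58.0, 62.0, 66.0, 70.0]
```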
Intervention
A tutor worked with Earl three times per week, and a student tutor worked with Earl two times per week, across the intervention period. Sessions occurred for 20 minutes at the start of the school day. Each session consisted of strategies aimed at increasing practice reading third-grade text:

1. A series of passages approximately 200 words in length from Earl's third-grade reading text, social studies book, and science book were photocopied.
2. Earl read the passage aloud while the tutor timed him for 1 minute and marked any errors. At the end of 1 minute, the tutor indicated how many words Earl had read correctly, and Earl then graphed his pretest score.
3. The tutor and Earl discussed what the passage was about. The tutor asked questions to help Earl identify the main idea and answer questions that required inferential understanding.
4. The tutor reviewed any words that Earl missed by (1) pointing to the word, asking him to sound it out, and providing feedback and support in reading it correctly; and (2) asking him to read the sentence that contained the word.
5. Earl read the passage aloud again, this time untimed and in its entirety. The tutor used the same error correction procedure noted in the previous step. One or two comprehension questions were asked at the end of the passage.
6. Earl then read the passage a third time, while the tutor again timed him for 1 minute and marked his errors as he read. The tutor informed Earl of the number of words he read correctly. This served as the "posttest" score.
7. Earl graphed his posttest score and compared it to his pretest score and to his pre- and posttest scores from previous sessions.
8. At the end of the session, the tutor recorded the date and Earl's scores for the day on a special data sheet prepared by the evaluator.
9. On alternate days when he did not work with the tutor, Earl worked with a high-achieving fourth-grade student. In these sessions, the student tutor followed a similar approach: (1) a pretest timed reading, with Earl graphing his score; (2) Earl and the student tutor reading the whole passage aloud by alternating sentences (e.g., the student tutor read the first sentence, Earl read the second, the student tutor read the third, etc.), with the student tutor providing help with difficult words; and (3) Earl reading the passage a third time while the student tutor timed him for 1 minute, after which Earl graphed his posttest score. The student tutor was fully trained and understood the intervention before it started.
Progress Monitoring Procedures
Earl's pre- and posttest reading scores in each session served as an index of progress and helped both the tutor and Earl see how his reading was improving over time. Formal progress was measured by the evaluator twice per week using unfamiliar third-grade passages from the AIMSweb system. In each assessment session, Earl was asked to read three passages, and the median WCPM across those passages was recorded.
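Each formal progress-monitoring data point was therefore the median WCPM across the three passages read in a session. A minimal illustration follows; the scores shown are made up for the example, not Earl's actual data.

```python
# How a single progress-monitoring data point is derived: the median WCPM
# across the three passages read in one session. Scores here are hypothetical.
from statistics import median

session_scores = [48, 55, 51]     # WCPM on three unfamiliar AIMSweb passages
print(median(session_scores))     # 51 is recorded as the data point
```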
Results
Figure 9.5 reflects the outcomes of Earl's short-term progress monitoring. The data show that Earl consistently read passages at posttest at faster rates than during the initial "cold read" at the beginning of each session. In particular, Earl's posttest fluency exceeded 70 WCPM in 9 of 14 sessions. Earl's performance during the initial cold read prior to each session was also examined over time to determine whether he was making session-to-session progress in oral reading. As shown in Figure 9.6, Earl's prereading scores over the first six sessions increased from 39 to 68 WCPM, surpassing the initial short-term goal of 60 WCPM. As a result, a new goal of 70 WCPM across the last seven sessions was set for Earl. As can be seen in Figure 9.6, Earl approached and nearly attained this higher goal. For long-term monitoring, general outcomes measurement (GOM) passages taken from AIMSweb material unfamiliar to Earl showed that he made substantial progress toward the desired goal. Based on the goals set by the problem-solving team, Earl was expected to reach 54 WCPM across the first 2 weeks of the intervention program and 70 WCPM across the full 7 weeks of intervention. Although he did not quite reach 70 WCPM, Earl improved his reading performance from 42 WCPM at baseline to around 60 WCPM by the end of the 7 weeks of intervention. This was a gain of 18 words correct in 7 weeks, or almost 2.6 words correct per week (see Figure 9.7).
FIGURE 9.5. Results of reading intervention for Earl.
FIGURE 9.6. Short-term progress monitoring of reading intervention for Earl (pre-reading scores).
FIGURE 9.7. GOM (long-term) progress monitoring for Earl in reading.

Conclusions and Comments on Earl's Case
Over the course of 7 weeks, Earl made considerable progress in his reading fluency.
Despite scheduling conflicts and absences that made the intervention less consistent than planned, Earl increased his fluency in third-grade reading text and demonstrated a rate of improvement that far exceeded not only the typical growth of students achieving below the 10th percentile (where Earl was reading at the start of the intervention) but also that of students reading at the 50th percentile. This is evidence that Earl was able to "close the achievement gap" with his peers to an extent. The results from this intervention indicate that Earl was highly responsive to an intervention that increased his supported opportunities to read grade-level text. Earl was very cooperative and enjoyed the intervention, especially the sessions with the fourth-grade tutor. Earl also said that he really enjoyed graphing his own data. In this case, the goal set for Earl was very ambitious. Making progress at a rate of greater than two words per week exceeds the typical growth of most students who do not have any difficulties in reading (see the discussion of normative data in Chapter 4). Earl's case shows the outcomes that can occur when interventions are constructed to meet a student's needs and implemented with integrity. It also illustrates how ambitious goals, paired with interventions that are not highly complex, can produce significant gains for struggling students. In some cases, as in Earl's, what students need most is simply more practice. In addition, the case shows the links in progress monitoring between short- and long-term assessment methods.
Case 4: Jessi

Name: Jessi
Birth date: November 28, 1997
Chronological age: 11 years, 3 months
Evaluator: Rebecca Lemieux
Grade: 5
Teacher: Mrs. Emlin
School: Lakeside
Report date: March 1, 2009

Note. Many thanks to Rebecca Lemieux, graduate student in school psychology at Lehigh University, whose case report is the basis for this material.
Background Information
Jessi was referred to the problem-solving team of her elementary school because of failure to make adequate progress in reading comprehension and math. In particular, her teacher and the team had placed her on a list of students for whom grade retention was likely. Results of the evaluation revealed that Jessi was being instructed in a fifth-grade reading group for guided reading and read-aloud, and in a fourth-grade group for independent reading. It was found that Jessi experienced significant difficulties in comprehension. She was able to read passages accurately and fluently, including passages at her grade level, but struggled with inferential comprehension at all levels. Furthermore, Jessi was found to have difficulty with math computation. In particular, she had significant problems with multiplication and division problems. Although her overall accuracy in these skills indicated that she was aware of how to correctly compute these types of problems, her performance rate was extremely slow, suggesting the need for increased fluency in these areas. Finally, an assessment of Jessi's math concepts–applications skills revealed that she struggled with many areas, including place value, fractions, and multistep word problems.
Goal Setting

READING COMPREHENSION
Data obtained through a CBA revealed that Jessi's baseline level for comprehension questions answered correctly following the reading of passages was 63%. A goal of increasing her level to 80% correct or higher was set, to be attained over a 6-week period of instruction designed to focus specifically on improving her reading comprehension strategies.

MATHEMATICS
The initial CBA conducted with Jessi in math identified both computation and concepts–applications as problematic. In computation, Jessi attained a baseline of 53% of division problems correct, a score of six digits correct on an assessment of mixed operations, and only 2 points correct on an assessment of math applications. The short-term goal in division was for Jessi to achieve a score of 70% or better when presented with 10 division problems. A fluency goal of a 0.5-digit increase per week was established across a 6-week instructional period. In concepts–applications, a similar increase of 0.5 points per week was established.
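The computation goals above are expressed in digits correct, a scoring metric in which each correct digit of an answer earns credit. The sketch below shows one simplified way such scoring can be computed (digits aligned from the right); it is an illustration only, not the exact scoring rules used in this case, and the probe values are hypothetical.

```python
# Simplified illustration of digits-correct scoring: each digit of the
# student's answer that matches the corresponding digit of the key (aligned
# from the right) earns one digit correct. Not the exact rules used in this
# case; the probe values below are hypothetical.

def digits_correct(student_answer: int, correct_answer: int) -> int:
    s, c = str(student_answer)[::-1], str(correct_answer)[::-1]
    return sum(1 for sd, cd in zip(s, c) if sd == cd)

key       = [56, 112, 408]
responses = [56, 110, 48]
total = sum(digits_correct(r, k) for r, k in zip(responses, key))
print(total)   # 2 (56) + 2 (110 vs. 112) + 1 (48 vs. 408) = 5 digits correct
```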
Interventions

READING COMPREHENSION
Based on the finding that Jessi's reading fluency rates were consistent with grade-level performance, the recommendation was made for Jessi to become more aware of how to read for meaningful understanding. Specifically, Jessi was found to be reading at a rate of 130 WCPM (with 97% accuracy) in fifth-grade text but was attaining very low comprehension scores. The initial sessions of the intervention plan focused on having Jessi attend more carefully to the text she was reading (essentially slowing down her reading rate) and read with better prosody (i.e., inflection) by altering her inflection
to appropriately reflect punctuation, tone, and emotion. A model of good reading was provided by the instructor, who read the material at a rate of approximately 100–110 WCPM and demonstrated appropriate inflection. Additionally, Jessi was taught to stop periodically, such as at paragraph breaks, to self-monitor her comprehension by generating a summary or main idea statement, in approximately 10 words or less, of the preceding text. The tutor helped identify inadequate understanding and prompted Jessi to reread portions of text to "fix up" her understanding. Subsequent sessions with Jessi focused on vocabulary building, text-structure knowledge, and reading for a purpose using passages of text from her fifth-grade textbooks in English language arts, science, and social studies. First, vocabulary terms important for understanding the passage were discussed before reading. Second, Jessi was taught to preview the passage by looking at subject headings and the introductory and concluding paragraphs to identify the genre and text structure, such as narrative text (e.g., stories) or informational text, and the specific type: definition/description, problem–solution, cause and effect, compare and contrast, or sequence of steps/events. Jessi was taught what each term meant and the purpose of each text type. Passages were strategically selected to demonstrate each text structure type. Third, the tutor provided Jessi with one or two questions or things to learn or determine from the upcoming passage, which helped provide a purpose for reading. Questions were selected that would require some level of inference making and connecting information or events in the passage with Jessi's knowledge or experience. Jessi then read the story and answered the comprehension questions. The intervention was implemented three times per week for 20 minutes each session.

MATHEMATICS
Interventions were developed to improve Jessi's skills in multiplication facts, division, and fractions. The interventions described below were implemented 3 days per week for the first 2 weeks, and 5 days per week for the remaining 4 weeks. Each lesson lasted approximately 15 minutes. Interventions for multiplication facts were provided in both school and home settings. In school, the incremental rehearsal technique (see Chapter 6) was used to help Jessi review her facts. The technique involved first assessing which facts were known and unknown, then reviewing the facts in a group of 10 in which seven facts were known and three were unknown. At home, Jessi was given worksheets to complete, with her mother reviewing multiplication facts as well. Special rewards at school were given for completion of these worksheets, including free time with a friend and lunch with the teacher. To help Jessi acquire division skills, a set of mini-lessons was created based on the steps involved in long division. During these mini-lessons, Jessi constructed reminder cards (see Figure 9.8) that provided examples of division problems, with and without remainders. To facilitate her responses, Jessi was instructed to place boxes over the dividend that corresponded to the correct number of digits she should have in her answer. This technique helped Jessi in sequencing division problems and knowing where to start her answers. The boxes were faded as instruction progressed. Jessi was also taught to prove her answers to division problems by using multiplication (i.e., multiplying the divisor by her answer [quotient] should result in the dividend; if not, her division was incorrect). If problems were incorrect, she was instructed to set up and solve the problem again. Jessi was also encouraged to draw on her developing fluency with multiplication facts to help solve division problems.
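The "prove your answer" step described above can be stated as a simple check: the divisor times the quotient (plus any remainder) should reproduce the dividend. The sketch below is illustrative only; the numbers are hypothetical, not items from Jessi's lessons.

```python
# Illustration of the "prove your answer" check described above: multiplying
# the divisor by the quotient (and adding any remainder) should give back
# the dividend. Numbers are hypothetical examples.

def division_checks_out(dividend: int, divisor: int,
                        quotient: int, remainder: int = 0) -> bool:
    return divisor * quotient + remainder == dividend

print(division_checks_out(84, 7, 12))      # True: 7 x 12 = 84
print(division_checks_out(53, 4, 13, 1))   # True: 4 x 13 + 1 = 53
print(division_checks_out(96, 8, 11))      # False: rework the problem
```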
FIGURE 9.8. Jessi’s division reminder.
As instruction progressed and Jessi increased her skill, proving division problems was reduced to a weekly check only. Mini-lessons in fractions were also constructed; these involved drill and practice based on Jessi's fractions quiz, on which she had scored 50% correct. Working with Jessi, the instructor focused the drill on renaming, simplifying, and adding and subtracting fractions. Jessi again constructed a reminder card for each type of problem. As part of the math intervention, Jessi also graphed her scores on math quizzes. This graphing was used to help Jessi see her progress, as well as to help her slow down and increase the accuracy of her work. It was emphasized to Jessi that she should try to be as accurate as possible, which meant slowing down to work carefully and checking her answers. She was rewarded for any grade above 80%, earning tickets that could be exchanged for activities or privileges.
Progress Monitoring Procedures

READING COMPREHENSION
Once each week, Jessi was given a fifth-grade CBM reading passage with which she was not familiar. Following her reading of the passage aloud, she was asked a set of eight comprehension questions (five literal, three inferential), and the percentage of questions answered correctly was calculated.

MATHEMATICS
Both general outcome and specific subskill assessments of math skills were conducted for Jessi. Once each week, Jessi was assessed on either computation or concepts–applications; the two areas were alternated so that only one measure was given each week. Specific subskill assessment of progress in division was obtained daily, prior to starting each instructional session, by having Jessi complete a probe of division problems (two digits divided by one digit; three digits divided by one digit) similar to those being taught. General outcome assessment involved having Jessi complete fourth-grade mixed-operation probes.
Results

READING COMPREHENSION
Outcomes for Jessi’s performance in reading comprehension are displayed in Figure 9.9. Jessi’s performance rapidly improved from a baseline of 63% of comprehension questions answered correctly to average correct responding between 75% and 100% correct after only two instructional sessions. In particular, slowing Jessi’s oral reading fluency contributed to the growth in her comprehension, as well as the previewing strategies taught in subsequent lessons. MATHEMATICS
In mathematics, Jessi made progress in both her multiplication facts and her division skills. By the end of the intervention period, Jessi was accurate and fluent in all multiplication facts to 9. She also showed very high accuracy in division, increasing from a baseline of 53% correct to 100% correct for both two-digit by one-digit and three-digit by one-digit division, as reflected in Figure 9.10. Jessi's specific subskill gains in division and multiplication were also reflected in general outcomes mixed-operations progress monitoring. In addition, the effects of the instruction in fractions were evident in the GOM assessment collected every other week. These data are shown in Figures 9.11 and 9.12.
Conclusions and Comments on Jessi's Case
Over the course of 6 weeks of intervention, Jessi made considerable progress in the areas of concern that had resulted in her referral to the problem-solving team. She showed considerable success in acquiring multiplication and division skills, along with gains in math concepts–applications (especially fractions) and in reading comprehension. These gains occurred as a function of focused and intensive intervention that included multiple strategies. As a result of the intervention program, Jessi was no longer considered at high risk for retention.
FIGURE 9.9. Short-term progress monitoring of Jessi’s performance in reading comprehension.
FIGURE 9.10. Short-term progress monitoring of Jessi’s accuracy in division facts.
Instead, the team decided to continue to monitor her progress carefully until the school year ended and to make recommendations for Jessi to obtain ongoing academic support over the upcoming summer months so that she would start sixth grade at the middle school in the fall with a high degree of success. The case illustrates several important components of the process of conducting a CBA. First, the assessment data were directly linked to the development of specific intervention strategies. In Jessi's case, these strategies would all be viewed as moderate interventions in which substantial increases in skills were emphasized. Second, the interventions were implemented on a schedule that was reasonable and practical for both Jessi's teacher and the instructor. At the same time, the importance of implementing the intervention with consistency was recognized by everyone on the team.
FIGURE 9.11. GOM (long-term) progress monitoring of Jessi’s performance in math computation.
FIGURE 9.12. GOM (long-term) progress monitoring of Jessi’s performance in math concepts– applications.
Third, the assessment demonstrates how specific subskill mastery and general outcomes measures can work together to offer clear determinations of student outcomes. For reading comprehension, Jessi's performance was monitored with GOM measures, using her performance on unfamiliar passages as the mechanism to evaluate outcomes of the intervention. In math, specific subskill mastery measurement was conducted by examining Jessi's performance on the skills being taught: multiplication facts, division facts, and division of two-digit by one-digit and three-digit by one-digit problems. Jessi's general outcomes performance on measures involving mixed fourth-grade operations showed gains that equaled the goal levels set for her. Likewise, her gains from instruction in fractions were evident in her performance on fourth-grade concepts–applications probes. Another important aspect of this case was the recognition that although Jessi had strong oral reading fluency, consistent with or exceeding grade-level expectations, she lacked success in fully comprehending the material she read. As such, increased fluency was not a goal of the instructional process. Instead, Jessi was taught to read more carefully and attend to her understanding of the text, which slowed her rate of reading somewhat but allowed her to better process and synthesize what she was reading. This case illustrates a point made in Chapter 2: A student's reading rate need only be fast enough to support comprehension; if a student tries to read too quickly, fluency is no longer a facilitator of comprehension and can actually impede it. Once Jessi slowed down, the strategies used to facilitate comprehension became valuable tools that she could use in future reading. Over time, her reading rate could likely return to the levels evident prior to the intervention, now accompanied by stronger comprehension of what she reads.
CASE EXAMPLES OF THE FOUR-STEP MODEL OF DIRECT ACADEMIC ASSESSMENT
The two cases presented next, Ace and Leo, illustrate how all four steps of the model described in this text translate into practice. Leo's case also demonstrates how CBM measures can be used in conjunction with standardized tests of relevant academic skills
for informing intervention development. Both cases illustrate the combined use of general outcomes measurement and specific subskill mastery measurement to monitor progress.
Case 5: Ace

Name: Ace
Birth date: August 1, 2000
Chronological age: 8 years, 8 months
Evaluator: Amy Landt Kregel
Grade: 2
Teacher: Mrs. Kregel
School: Hyatt
Report date: April 3, 2009

Note. Many thanks to Amy Landt Kregel, graduate student in the School Psychology program at Lehigh University, whose case report is the basis for this material.
Background Information
Ace was referred because of reading difficulties, as well as other academic skills problems. At the time of referral, his teachers reported that he had often been absent from school. Ace attends a general education classroom and has not previously been considered for special education. However, his teacher reports that Ace is not making the progress expected of second graders, and his achievement is becoming a significant concern.
Step 1: Assessment of the Instructional Environment

TEACHER INTERVIEW
Mrs. Kregel reported that Ace is currently being taught in the Houghton Mifflin reading series, having recently finished the second book of level 1 (grade 1) and started the first book of level 2 (grade 2). Unfortunately, Ace missed a considerable amount of instruction as a result of a 6-week absence from school during a family trip abroad. According to his teacher, Ace is experiencing difficulties in reading. In particular, Mrs. Kregel noted that Ace lacks decoding skills and has poor word reading efficiency and weak oral reading fluency. Although Ace is placed in a book consistent with his grade level, he is in a below-average reading group. He and six other students receive supplemental reading support with another teacher for 90 minutes per day. During this time, Ace receives a considerable amount of one-to-one instruction. Typically, the first 30 minutes of the instructional period are spent in silent reading and reviewing vocabulary. Students then read aloud to the group and complete written assignments. Any tasks not completed during the reading period are assigned as homework. A chart on which accurate assignment completion is noted with stickers is displayed prominently at the front of the classroom. In terms of behavior, Mrs. Kregel reported that Ace is generally on task and works quietly to complete his assigned work. Although he volunteers in class, he rarely gives the correct answer when called upon. The teacher's primary concern surrounds the accuracy of Ace's work in reading. Mrs. Kregel also completed the Academic Performance Rating Scale (DuPaul, Rapport, et al., 1991), on which she indicated that all aspects of language arts (reading, spelling, and writing) are the most significant problems for Ace. Math was noted as one of Ace's strengths.
DIRECT OBSERVATION
Using the BOSS, Ace was observed during one 20-minute silent reading period and one 40-minute period of teacher-directed instruction. Comparison to peers from Ace's classroom was made during the silent reading period only. As can be seen in Table 9.6, during both sessions Ace was found to be engaged in schoolwork (a combination of active and passive engagement) for 70% of the intervals. These levels of engagement were similar to those of his peers. Disruptive behaviors, such as calling out, socializing with peers, or getting out of their seats, were infrequent for both Ace and his classmates. Ace did show somewhat higher levels of staring-off behavior compared to peers. During silent reading, Ace received one-to-one instruction from his teacher for most of the intervals; most of this time, the teacher was trying to teach Ace phonetic analysis. Teacher affirmative feedback was minimal throughout the observation despite the high frequency of attention. During oral reading, it was observed that Ace was having great difficulty reading words and answering questions about the material being read. A high degree of on-task behavior was evident despite Ace's academic problems.

STUDENT INTERVIEW
When asked about school, Ace stated that he enjoyed math and felt that it was his strongest area. He thought that most of his problems were in spelling, although he acknowledged having trouble with the reading assignment he had just been asked to complete. Ace noted that he sometimes did not understand the assignments he was given and was not interested in the work, especially if it was related to language arts. Although he thought the time he was given to do his work was fair, he disliked the fact that he often had homework in reading and spelling because he had trouble with class assignments. Ace noted that he frequently had trouble sounding out words. When asked to explain the procedure he should use if he were confused about his assignments, Ace demonstrated that he knew how to access help from his teacher or peers.

PERMANENT PRODUCT REVIEW
Examination of Ace’s reading journal and written reading comprehension assignments for the past week showed that Ace had good knowledge of beginning consonant sounds. Areas of difficulty appeared to be consonant blends, punctuation, and medial vowels. Although all assignments were completed, they lacked accuracy. TABLE 9.6. Direct Observation Data from the BOSS Collected during Reading for Ace Percentage of intervals Ace (silent reading, 20 min)
Peers (total intervals = 16)
Active Engagement
31
25
14
Passive Engagement
44
13
56
Off-Task Motor
2
0
2
Off-Task Verbal
0
2
0
Off-Task Passive
20
15
20
Teacher-Directed Instruction
50
Behavior
Ace (oral reading/teacherdirected instruction)
90
SUMMARY
Assessment of the academic environment showed that Ace's classroom instruction follows fairly traditional approaches to teaching reading. Students are grouped by ability, and much of the instruction is teacher-directed in small groups. Indeed, Ace receives significant teacher attention for his reading problems and is paired with students of similar ability levels. Although Ace is now reading material consistent with his grade level, he lacks many of the skills needed for success in reading. The direct observation of Ace's behavior reveals an attentive child who appears to try hard to overcome his academic problems and who maintains a good level of engagement despite low levels of teacher approval. Ace's permanent products and the classroom observation are very consistent with the information reported in the teacher interview.
Step 2: Assessing Instructional Placement
Timed passages were administered to Ace across five levels of the reading series. As seen in Table 9.7, Ace was found to be instructional at the preprimer B level but frustrational at all other levels. Because Ace was found to be frustrational at all levels below the level where he is currently placed (level 2.1), passages from that level were not administered. During the reading of these passages, Ace was observed to be very dysfluent. He frequently made reading errors (i.e., reading accuracy ranged from 75 to 88% on levels C through 1.2), often lost his place, read a line more than once, and guessed at unknown words rather than attempting to sound them out. He answered comprehension screening questions with at least 80% accuracy for most passages.
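The classifications shown in Table 9.7 below sort each book level into mastery, instructional, or frustrational ranges on the basis of fluency and accuracy. The sketch below shows the general form of such a decision rule; the cutoff values are placeholders chosen for illustration only and are not the specific criteria applied in this assessment (see Chapter 4 for the actual standards).

```python
# General form of a mastery/instructional/frustrational decision rule.
# The WCPM and accuracy cutoffs are placeholders for illustration only;
# they are NOT the specific criteria used in this case (see Chapter 4).

def learning_level(wcpm: float, accuracy: float,
                   mastery_wcpm: float = 60.0,
                   instructional_wcpm: float = 40.0,
                   min_accuracy: float = 0.93) -> str:
    if accuracy >= min_accuracy and wcpm >= mastery_wcpm:
        return "Mastery"
    if accuracy >= min_accuracy and wcpm >= instructional_wcpm:
        return "Instructional"
    return "Frustrational"

# Ace's preprimer B performance: 45 WCPM with no errors (100% accuracy).
print(learning_level(45, 1.00))   # "Instructional" under these placeholder cutoffs
```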
TABLE 9.7. Results of Reading Probe Assessment for Ace

Grade level/book     Median words      Median words      % questions     Learning level
                     correct/min       incorrect/min     correct         (M, I, F)
B—Preprimer              45                 0                100          Instructional
C—Preprimer              38                 5                100          Frustrational
D—Preprimer              39                 8                 80          Frustrational
1.1                      31                10                 80          Frustrational
1.2                      18                 6                 60          Frustrational

Step 3: Instructional Modification
After a review of data from Steps 1 and 2 with the teacher, a decision was made to construct an intervention to improve Ace's decoding skills, word recognition, and reading fluency in grade-level material. These skills were considered essential ingredients for Ace to succeed, especially because his teacher indicated that she was reluctant to move him back to a lower level in the reading series. The folding-in technique, combined with instruction in sounding out and reading connected text, was selected for implementation three times per week in one-to-one sessions led by an instructional assistant. Baseline data were first collected by the evaluator in the 2.1 level of the reading series in which Ace was being taught.
Material for instruction was selected from the story just ahead of where the class was presently reading. This close alignment between the intervention and core instruction would give Ace the opportunity to preview and practice reading content that would be targeted in the near future, thus potentially allowing him to better access and benefit from upcoming large-group instruction. Each session began with Ace being asked to read a passage (usually a paragraph or two consisting of about 100 words) from the upcoming story. The number of words read correctly per minute was calculated and plotted on Ace's bar graph. From the passage, seven words read correctly (identified as known words) and three words read incorrectly (identified as unknown words) were selected. The selected known words were relevant to the content of the story and not simply the, and, and similar types of words. All of the words were written on 3″ × 5″ index cards. First, the tutor explicitly taught Ace how to attack the unknown words by sounding out. Affirmative and corrective feedback were provided immediately, and the tutor helped Ace adjust spelling pronunciations to the correct pronunciations.

The unknown words were then interspersed among the known words by folding each unknown word into a review of the seven known words. This was done by having the tutor present the first unknown word to Ace. The first time an unknown word was presented, the tutor said the word, sounded it out, spelled it, and used the word in a sentence. Ace was then asked to do the same. After the unknown word was taught, it was presented to Ace, followed by the first known word. Next, the unknown word was presented, followed by the first known word and then the second known word. This sequence continued until all seven known words and the first unknown word had been presented. Affirmative feedback was provided immediately for all correct responses. If at any point in the process Ace hesitated on a word or responded incorrectly, the tutor immediately asked him to sound it out or spell it, and then use it in a sentence. The second unknown word was next introduced in the same way, folded in among the seven known words and the one unknown word already presented. The third unknown word was then folded in, using the same procedures. The entire procedure took 7–10 minutes. After the folding-in procedure was completed, Ace was asked to read the same passage he had read at the beginning of the session. His WCPM was again calculated and plotted on his bar graph. The tutor also pointed out the words in the passage that were used during the folding-in procedure and asked him to read them again. If Ace correctly read a word that had previously been classified as unknown on 2 consecutive days, the word was then considered known. At the next session, a word from the known pile was discarded and replaced by this previously unknown word. The number of words that moved from unknown to known status was also plotted on a graph by Ace. Each session that Ace learned at least one new word, he was rewarded with an opportunity to select an item from a prize bag, which included items such as pencils, erasers, and stickers.
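The presentation order used in the folding-in procedure can be written out explicitly. The sketch below generates that order for one session; the word lists are hypothetical (not from Ace's actual sessions), and the initial teaching steps for each unknown word (saying it, sounding it out, spelling it, and using it in a sentence) are omitted.

```python
# Sketch of the folding-in (incremental rehearsal) presentation order
# described above. Word lists are hypothetical; the initial teaching of
# each unknown word is omitted.

def folding_in_sequence(known, unknowns):
    """Return the card order for one session: each unknown word is presented,
    then interleaved with the review pool one additional card at a time."""
    sequence, review_pool = [], list(known)
    for unknown in unknowns:
        for i in range(1, len(review_pool) + 1):
            sequence.append(unknown)           # present the new word...
            sequence.extend(review_pool[:i])   # ...followed by i review cards
        review_pool.append(unknown)            # the new word joins the review pool
    return sequence

known_words = ["barn", "field", "wagon", "seed", "plant", "grow", "farmer"]
unknown_words = ["harvest", "tractor", "orchard"]
order = folding_in_sequence(known_words, unknown_words)
print(order[:11])   # first unknown word interleaved with 1 known word, then 2, then 3...
```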
Step 4: Progress Monitoring
Progress monitoring included both specific subskill and general outcomes measurement. In consultation with Ace's teacher, a short-term goal of learning five new words per week was selected. In addition, a goal was set that by the end of the 4-week intervention, Ace would be able to read at least 40 WCPM during the presession reading. A long-term (end-of-year) goal was selected by examining normative data for students at the 25th percentile of grade 2 in Ace's school district. Those data suggested that Ace should be able to read at least 40 WCPM in the grade 2.1 book.
Specific subskill monitoring was reflected in Ace’s performance on the passage read before and after each folding-in session. In addition, the cumulative number of words learned per week was used to show the acquisition of new words for Ace. Long-term monitoring was obtained through the collection of CBMs taken twice per week by randomly selecting passages from across the second half of the level 2.1 book.
Results
The results of the four-step process are shown in Figures 9.13, 9.14, and 9.15. Ace demonstrated consistent improvement in his reading fluency from pre- to postsession readings each time the folding-in intervention was conducted. As seen in Figure 9.13, Ace initially had a reading rate of 10 WCPM in the materials being taught. Following each folding-in session, Ace improved his rate by 10–20 WCPM. An examination of his presession reading rates reflects steady improvement toward the goal of 40 WCPM in presession reading performance over the 4 weeks of intervention. As seen in Figure 9.14, Ace also displayed consistent gains each day in the number of new words learned. Given Ace's strong performance on the short-term objectives, the goal was increased from five to six new words per week after the eighth session. As is evident from Figure 9.14, Ace was able to achieve this goal easily throughout the rest of the intervention. Also at the end of the eighth session, the teacher decided to shift Ace (as well as the rest of the class) from the literature-based reading series to grade-based trade books. Across the last four intervention sessions, the words and passages used for the folding-in technique were taken from the new instructional material; passages for CBM were still taken from the latter portion of the level 2.1 basal reading series. Ace maintained his performance across the final week of the intervention using this new material. An examination of Figure 9.15 shows that Ace was moving toward the long-term goal set for him by his teacher. Despite a change in the instructional material, the data show that Ace was making excellent progress.
FIGURE 9.13. Results of folding-in intervention sessions for Ace.
FIGURE 9.14. Results of short-term progress monitoring (cumulative words learned) for Ace during folding-in intervention.
Comments on Ace’s Case This case illustrates how the full four-step model discussed throughout this text can be applied. In this particular example, an assessment of the instructional environment showed that Ace was receiving substantial assistance from the teacher. Indeed, despite high levels of teacher-directed instruction, he was not succeeding. In addition, he had been moved through curriculum materials, even though he clearly lacked mastery with those materials. The teacher was unable or unwilling to move him to material that would be instructional for his fluency level.
FIGURE 9.15. GOM (long-term) progress monitoring for Ace as a result of the folding-in intervention for reading.
Using these data, the evaluator was able to pinpoint what might be keystone behaviors to promote future success. Specifically, the evaluator recognized that increasing word reading efficiency was likely to increase Ace's effective participation in class, as well as help maintain him within the present classroom environment. By constructing an intervention that included opportunities to improve decoding, word reading, and fluency in upcoming reading content, the probability that Ace would succeed was enhanced. The case also illustrates the use of the powerful folding-in technique. Because this intervention mixes known and unknown material in a way that systematically increases the amount of known material presented between presentations of each new item, it is consistent with learning principles. The high proportion of known words also promotes student success and is likely to result in increased student motivation to learn. Indeed, bar graphs and other visual displays of the data are known to be excellent mechanisms for motivating students. Finally, the case illustrates how the data can be used to make instructional decisions. In this case, Ace had demonstrated the ability to meet the short-term goal set for him (five new words per week). Seeing this, the evaluator determined that the goal could be increased to six words. When this change was made, Ace's performance improved. The case also shows the outcome of an unexpected change: an alteration in the instructional materials. Although the data collection ended as the school year came to a close, the results did illustrate that Ace could maintain his performance when a new set of reading materials was introduced.
Case 6: Leo

Name: Leo
Birth date: October 10, 2007
Chronological age: 8 years, 2 months
Evaluators: Melina Cavazos and Zhiqing Zhou
Grade: 2
Teachers: Ms. Reed, Ms. Matthews
School: Charles Davis Elementary
Report date: December 16, 2015

Note. Many thanks to Melina Cavazos and Zhiqing Zhou, doctoral students in the School Psychology program at Texas A&M University, whose case report was the basis for this example.
Problem Identification
Leo was identified by his teachers, Ms. Reed (Reading/Writing and Social Studies) and Ms. Matthews (Mathematics and Science), as a student experiencing academic difficulties in both reading and mathematics. The primary area of concern is reading, as he is roughly 8 months behind the academic performance expected for his grade level. The assessment was conducted by the examiners, school psychology doctoral students at Texas A&M University, to gain experience conducting academic assessments and to fulfill graduate coursework requirements.
Assessment Measures
• Teacher interview
• Student interview
• Direct observation
• AIMSweb Reading Curriculum-Based Measure (R-CBM)
• Woodcock Reading Mastery Tests—Third Edition (WRMT-III)
  – Phonological Awareness
  – Word Identification
  – Word Attack
Step 1: Assessing the Academic Environment

TEACHER INTERVIEW
Leo is currently a second-grade student at Charles Davis Elementary. Both Ms. Reed and Ms. Matthews were interviewed by the examiners in mid-October to collect information about Leo's academic and behavioral performance in class. Although Leo has been identified as having both reading and mathematics difficulties, Ms. Reed and Ms. Matthews agreed that reading should be the focus of this intervention. Ms. Matthews mentioned the notable progress Leo has made in math, with the exception of word problems; a word problem must be read aloud to him before he can complete it. Ms. Matthews further explained that Leo can identify the numerical form of numbers in a word problem but cannot determine how to manipulate the numbers because of his limited reading comprehension. This observation further supported the need for a reading-focused intervention. Leo is currently at a "Level E" reading level, which is approximately 8 months behind expected performance ("Level I"). His phonemic awareness is poor; he can only minimally chunk and blend unfamiliar words. This affects his reading fluency, and he reads about 30 words per minute rather than the expected 60 words per minute. To improve these skills, he is currently placed in a Leveled Literacy Intervention (LLI) group within the school. In the group, he is given three books to read at home per week, in addition to being pulled from class three times per week to work on these skills. Ms. Reed mentioned that Leo learns the strategies taught in the intervention but cannot readily apply them in his classwork. His spelling skills are also poor, so adjustments are made to his weekly spelling tests (5 words instead of 10). She stated that he has poor reading comprehension (e.g., he cannot retell a story in an organized way) as well as poor listening comprehension (e.g., he cannot follow directions given in class and almost always asks questions just after instruction is given), and he has poor handwriting. Lastly, Ms. Reed expressed concerns regarding Leo's behavior when he is faced with a difficult task. When he encounters an unfamiliar word, he immediately becomes discouraged, ceases all effort without attempting to sound out the word, and plays with the pencils on his desk instead.

Leo was previously in the foster care system, during which time he attended about five other schools, and he has very recently been placed with a permanent family. He was formally diagnosed with attention-deficit/hyperactivity disorder (ADHD) and oppositional defiant disorder (ODD) and has been prescribed medications. He also displays inattentive and lethargic behaviors throughout the day (e.g., blank stares, eye rolling, sleeping during lunchtime). Both teachers agreed that he does not usually become aggressive toward authority figures; however, he has a particularly difficult time with peer interactions. For example, during partner or group work and playground time, he sometimes displays a defensive attitude and aggressive behavior toward peers without provocation, as well as bouts of crying throughout the day.
Both Ms. Reed and Ms. Matthews mentioned that Leo's curiosity is one of his valuable strengths; he asks questions in class and demonstrates effort toward learning class material despite his academic difficulties. He is also often compliant with teacher requests. Ms. Matthews described Leo as a student who is most interested in learning mathematics, especially through the use of manipulatives.

STUDENT INTERVIEW
Leo was interviewed during the third week of October to understand his perspective on academic subjects. Leo indicated that mathematics and science are his favorite subjects. He enjoys answering questions in mathematics and typically completes assignments using his fingers as a primary counting strategy. He also reported that he enjoys working with other students and described a group science activity from earlier that day. When asked about reading, Leo stated that he does not want to read, regardless of the genre of the reading material, because reading is "boring" for him. When Leo is having difficulties in his reading and writing class, he asks Ms. Reed for assistance.

DIRECT OBSERVATION
A direct observation was conducted to assess Leo's engagement in the classroom. Leo's engagement was recorded for 15 minutes in intervals of 15 seconds, with every fifth interval used to observe a student peer for comparison purposes. Occurrences of student on-task engagement (e.g., active listening, following instructions, asking questions) and off-task engagement (e.g., becoming distracted by objects or peers, not following instructions, daydreaming) were recorded for each interval. An initial direct observation was conducted during Leo's reading/writing class one day in early November. However, because the classroom activity observed did not include reading or writing assignments that particular day, it was decided that a second observation would be conducted 3 days later in order to observe Leo's classroom behavior when completing a typical reading or writing assignment. During this second observation period, students were asked to write a composition that required various transition techniques. Students were also instructed to receive feedback and final approval from Ms. Reed upon completion. It was observed that Leo engaged in on-task behavior during 96% of the intervals and in off-task behavior during 4% of the intervals. Leo's on-task engagement can be considered above average compared to that of his peers, who engaged in on-task behavior during 50% of recorded intervals. When asked if this behavior is typical for Leo, Ms. Reed stated that Leo does make a strong effort to complete his assignments when behavioral issues do not interfere with his functioning.
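As a rough illustration of how interval data like these translate into the percentages reported above, the short Python sketch below tallies on-task intervals separately for the target student and the peer-comparison intervals. It is not part of the original case materials; the function name and the interval codes are hypothetical.

```python
# Illustrative only: summarizing a 15-minute observation recorded in 15-second intervals,
# where every fifth interval is used to observe a comparison peer (as described above).
from typing import List, Tuple

def on_task_percentages(codes: List[str]) -> Tuple[float, float]:
    """codes[i] is 'on' or 'off' for interval i; intervals 5, 10, 15, ... are peer intervals."""
    student = [c for i, c in enumerate(codes) if (i + 1) % 5 != 0]
    peer = [c for i, c in enumerate(codes) if (i + 1) % 5 == 0]
    pct = lambda xs: 100.0 * sum(x == "on" for x in xs) / len(xs)
    return pct(student), pct(peer)

# Hypothetical codes for 60 intervals (15 minutes / 15-second intervals):
codes = ["on"] * 60
codes[7] = codes[23] = "off"                # two off-task student intervals
codes[14] = codes[29] = codes[44] = "off"   # three off-task peer intervals
student_pct, peer_pct = on_task_percentages(codes)
print(f"Student on task: {student_pct:.0f}%; peer on task: {peer_pct:.0f}%")
```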
Step 2: Direct Assessment of Academic Skills and Academic Placement

In order to ascertain Leo's current reading skill level and to determine the areas in which he might benefit most from additional support, a series of assessments was administered over 2 days in early November.

READING CURRICULUM-BASED MEASUREMENT
A Reading Curriculum-Based Measurement (R-CBM) is a measure of oral reading fluency in which the student reads grade-level appropriate text for 1 minute and is then
scored according to the number of words read correctly. Three different reading passages from the AIMSweb system were used in order to provide a more reliable estimate of Leo's oral reading fluency. The results may be found in Table 9.8. Across the three passages, Leo's median score of 23 words per minute with four errors (85% accuracy) placed him at the 16th percentile of second graders tested during the fall. On first-grade passages, his median WCPM of 35 was consistent with the 45th percentile. An error analysis indicated that Leo's primary area of concern is his ability to decode unfamiliar words. During test administration, Leo sometimes mixed letter order (e.g., eat for ate) and guessed word pronunciations based solely on beginning and ending letters (e.g., wished for watched). Although evidence of chunking and blending was present, he showed limited knowledge of digraphs (e.g., pone for phone). There were also a few instances in which Leo stated that he did not know a word and made no attempt to sound it out.

WOODCOCK READING MASTERY TESTS—THIRD EDITION
The WRMT-III includes subtests that assess a variety of key reading skills, ranging from letter sounds and letter recognition to reading comprehension. The following subtests were administered to Leo, and the results are reported in Table 9.9.
Phonological Awareness. The Phonological Awareness subtest includes a total of five tasks that require the student to demonstrate knowledge of first-sound matching, last-sound matching, rhyme production, sound blending, and sound deletion. In the first-sound matching task, examinees are presented with a series of target pictures and alternative pictures from which to choose. For each item, examinees are asked to select the alternative picture that has the same first sound as the target picture (e.g., for the target word pen, the correct answer choice would be the alternative picture pie). Last-sound matching is a similar task in which examinees demonstrate their knowledge of final sounds; it follows the same format as first-sound matching. In the rhyme production task, examinees are asked to produce a word that rhymes with the
TABLE 9.8. R-CBM Results for Leo

Grade level of passages    Median correct/errors    Median accuracy    Percentile (fall norms)
Grade 2                    23/5                     85%                16th
Grade 1                    35/6                     85%                45th
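Reading accuracy in R-CBM is the proportion of words read correctly out of all words attempted (correct plus errors). As a check against the first-grade row of Table 9.8:

```latex
\[
\text{accuracy} = \frac{\text{words read correctly}}{\text{words read correctly} + \text{errors}}
\qquad \text{e.g., } \frac{35}{35 + 6} \approx 0.85 = 85\%.
\]
```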
TABLE 9.9. WRMT-III Results for Leo

Measure                   Standard score    90% confidence interval    Percentile rank
Phonological Awareness    105               94–116                     63rd
Word Identification       88                82–94                      21st
Word Attack               86                78–94                      18th
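The percentile ranks in Table 9.9 follow directly from the standard scores, which on the WRMT-III are scaled to a mean of 100 and a standard deviation of 15. The brief Python check below is illustrative only and is not part of the case materials.

```python
from statistics import NormalDist

def percentile_from_standard_score(score: float, mean: float = 100.0, sd: float = 15.0) -> float:
    """Percentile rank implied by a standard score on a mean-100, SD-15 scale."""
    return 100.0 * NormalDist(mean, sd).cdf(score)

for score in (105, 88, 86):
    print(score, round(percentile_from_standard_score(score)))
# Prints 63, 21, and 18, matching the percentile ranks in Table 9.9.
```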
words presented by the examiner (e.g., “What word rhymes with cat?”). The blending task asks examinees to blend sounds together to form a word (e.g., pop and corn make the word popcorn). For deletion, examinees are asked to delete a target sound from a word (e.g., “Say pancake without pan”). Scoring of the Phonological Awareness test is based on the number of items answered correctly. Leo achieved a total standard score of 105. His performance placed him at the 63rd percentile, indicating that he is performing in the average range compared to other second-grade students in the fall of the school year.
Word Identification. The Word Identification subtest assesses the student's ability to read real words. Scoring is based on the number of words read correctly. Leo achieved a standard score of 88 on this subtest. His performance placed him at the 21st percentile, indicating that he is performing below the average range compared to other second-grade students in the fall. He demonstrated limited word recognition and difficulty with decoding. When presented with unfamiliar words, especially longer words, he typically made no attempt to decode them, which hinders the development of reading confidence and fluency.
Word Attack. Word Attack assesses the student's ability to read pseudowords for the primary purpose of evaluating decoding skills. Scoring is based on the number of pseudowords read correctly. Leo achieved a standard score of 86, which placed him at the 18th percentile compared to second-grade normative data for the fall. His performance was below average for second graders at this point in the year. Error analysis indicated that Leo has notable difficulty with words that include both consonant and vowel letter combinations. When asked to read words that included digraphs, Leo would typically read only one letter of the digraph rather than the digraph sound (e.g., s for sh).

SUMMARY OF DIRECT ASSESSMENT
Leo was identified by his teachers, Ms. Reed and Ms. Matthews, as performing below grade-level expectations in reading. Teacher commentary and the results of direct observation indicate that Leo makes an effort to complete assignments when he is not emotionally distressed. This remained true during assessment, when Leo made his best effort and was cooperative with the examiners. Analyses of the assessment results revealed that Leo's phonological awareness performance was average for a second-grade student during the fall. However, Leo performed below average on tasks related to word decoding and oral reading fluency, especially with multisyllabic words and words that contain specific letter combinations. Students who struggle in these areas of reading often struggle with reading comprehension as well, typically to an even greater degree as they advance through the grades. It is recommended that intervention efforts be directed toward improving Leo's decoding skills and reading fluency.
Step 3: Intervention

Based on the information gathered in the assessment, an intervention was developed to address Leo's reading skill needs. Intervention began at the end of November and continued for 10 days. A total of six sessions were completed, each within a 20-minute
time frame, on an individual basis. Each session consisted of the following intervention strategies:
• Digraph sound flashcards and "folding in": To expand Leo's knowledge of letter–sound combinations, the intervention targeted the following letter combinations: ew, oi, oa, ing, ow, and ph. Following a brief introduction by the examiner of the new letter combination at the beginning of each session, recognition automaticity was practiced through digraph sound flashcards using the "folding-in" (i.e., incremental rehearsal) technique. Letter combinations learned during previous sessions were added to the set of flashcards used in each lesson to encourage retention.
• Digraph review word list: To reinforce retention of letter combination sounds, Leo first reviewed the letter combination learned in the previous session with the examiner(s). During this time, Leo was asked to read a short list of five decodable words that contained the previously learned letter combinations.
• Word flashcards: To build decoding skills and reading confidence, Leo was presented with five flashcards following the digraph sound flashcards activity; each flashcard listed one decodable word containing the letter combination to be learned. Word flashcards were presented in two rounds, with the first round serving as introductory practice and the second round used to practice decoding and word recognition automaticity.
• Oral reading of decodable text: Before the conclusion of each session, Leo was asked to read a short decodable text composed of words containing the letter combination learned. This not only provided Leo with an opportunity to practice the newly learned letter combination within real text, but also gave him the chance to receive immediate corrective feedback from the examiner.

Step 4: Progress Monitoring

The following progress monitoring instruments were used to evaluate intervention effectiveness:
• Decodable word list: Specific subskill mastery measurement was used to directly assess Leo’s improvement in decoding words through the letter combinations targeted in the intervention. The examiners created a list of decodable words (both real and nonreal) that contained both learned and yet-to-be-learned letter combinations. Each list contained three words per digraph (i.e., now, vow, how, sing, ring, king, noi, koi, join, new, kew, few, goat, boat, roast, pho, raph, neph), creating a total of 18 words. A total of three decodable word lists were used across the intervention, with each list employing a random order of the same words. Correct and incorrect responses were recorded. The results may be found in Figure 9.16. One hundred percent accuracy by the end of six sessions was set as Leo’s decodable word list goal. This goal would represent a significant improvement in his digraph knowledge and decoding skills. • R-CBM: General outcomes measurement was used to monitor Leo’s overall reading progress. R-CBM is a test of oral reading in which the student reads grade-level appropriate text for 1 minute and is scored according to the number of words read correctly. During initial assessment using R-CBM, Leo’s performance placed him in the
16th percentile when compared to other second-grade students during the fall, which represents below-average performance. Because this intervention has a strong focus on improving word reading, it was decided that second-grade-level R-CBM passages would be used to capture Leo's overall reading growth. R-CBM passages were administered twice per week. The results may be found in Figure 9.17. A goal of 75 words read correctly, with an accuracy rate of at least 90%, by the end of the academic year was set as Leo's R-CBM goal. This goal was developed based on normative data available for second-grade students in the spring. A rate of improvement of 2.17 words read correctly gained per week, approximately twice the rate of improvement for second graders in the spring with the same initial level of performance as Leo, would be needed for him to reach this goal given his baseline score.
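The 2.17 figure reflects the usual goal-setting arithmetic for progress monitoring: the gap between the goal and the baseline score divided by the weeks available. The report does not state the number of weeks remaining; roughly 24 weeks (late November to the end of the school year) is assumed here to reproduce the reported value.

```latex
\[
\text{required rate of improvement} = \frac{\text{goal} - \text{baseline}}{\text{weeks remaining}}
\approx \frac{75 - 23\ \text{WCPM}}{24\ \text{weeks}} \approx 2.2\ \text{WCPM per week}.
\]
```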
Intervention Results

Progress monitoring data were collected in each session to assess Leo's decoding and oral reading growth. Predetermined goals for Leo were set by the examiners. The data below reflect Leo's progress after six completed intervention sessions. It is noted that Leo was highly engaged with the examiners, demonstrated an increased interest in reading, and put forth his best effort during the intervention. As displayed in Figure 9.16, Leo demonstrated growth in decodable word reading. At the conclusion of the intervention, Leo was able to read 16 of the 18 words targeted in the intervention correctly, an accuracy rate of 89%. He demonstrated improved knowledge of letter combinations and skill in decoding. Error analysis revealed that inattention to word spellings remains a major contributor to Leo's reading errors (e.g., reading sing as sling). Based on the results after six completed intervention sessions, it is suggested that intervention specifically targeting word reading be continued on an individual basis.
[Figure 9.16 is a line graph of the percentage of decodable word list items Leo read correctly across the three progress monitoring dates (12/7, 12/8, and 12/10), rising from 50% to 89%, plotted against an aim line of 100%.]
FIGURE 9.16. Progress monitoring: decodable word list for Leo.
[Figure 9.17 is a line graph of Leo's words read correctly per minute (WCPM) and errors on second-grade R-CBM passages, plotted weekly from late November through late May against a goal line, with a linear trend line fit to the WCPM data.]
FIGURE 9.17. Progress monitoring: R-CBM with second-grade passages for Leo.
As displayed in Figure 9.17, Leo’s performance on second-grade-level R-CBM passages has shown improvement since the implementation of the intervention. On average, Leo’s rate of progress is 7.23 additional words read correctly per week. As depicted in Figure 9.17, this current rate of progress is well above the predetermined goal line (1.67 additional words read correctly per week). Leo’s performance varied by intervention session. More data are needed to accurately describe his progress and evaluate the effects of the intervention, preferably until he meets his goal. Using the three second-grade passages administered as part of the assessment as a pretest, the examiners administered three second-grade-level R-CBM passages during the final intervention session in order to observe Leo’s total growth using grade-level assessment. As shown in Figure 9.18, in this final assessment, Leo’s median score of 52 words read correctly per minute with five errors (91% accuracy) indicated a significant improvement in both his words read per minute as well as his reading accuracy. An increase in positive reading behaviors was also noted by the examiners. For example, Leo seemed to focus more on reading accurately. Across the intervention, Leo exhibited an increase in self-correction behaviors and made an increased effort to decode unfamiliar words. The data suggest that the strategies employed during this intervention have been beneficial for Leo’s reading skills; however, it should be acknowledged that the intervention was implemented for only a very short time.
Summary and Recommendations Upon initial assessment, it was observed that Leo’s phonological awareness was consistent with average second graders’ performance during the fall. However, it was observed
that Leo performed below average in tasks related to word decoding and oral reading fluency, especially with multisyllabic words and words that contain specific letter combinations. An intervention was created that targeted these specific reading difficulties. Data obtained throughout the intervention suggest that Leo is making progress in his reading skills. The intervention thus far appears to have strengthened his skills in decoding and oral reading. The primary focus of this intervention was to improve Leo's decoding and reading accuracy skills by increasing his familiarity with letter digraph sounds. Over the course of the intervention sessions, progress monitoring data suggested that Leo made noticeable progress in decoding and oral reading. Leo was able to successfully decode more digraphs than at the initial assessment, made an increased effort to employ appropriate decoding strategies when faced with unfamiliar words, and demonstrated improved reading fluency. Given the considerable progress Leo has made, it is recommended that the implemented intervention continue. Future intervention focus may be directed toward the instruction of multisyllabic word decoding techniques (e.g., reading elephant as “e/le/phant”). It is also recommended that Leo's intervention continue to include decodable texts as a way to provide practice opportunities. Continued progress monitoring will be helpful for deciding when to make appropriate adjustments to the content, frequency, and/or intensity of the intervention. Intervention components may be faded as his reading reaches grade-level expectations.
Comments on Leo's Case

This case is a good example of the integration of CBM and standardized measures of relevant academic skills. The assessment was concise and efficient, limited to measures that directly assessed skills related to Leo's areas of difficulty in reading. The assessment clearly indicated difficulties with word reading, and the measures allowed the examiners
[Figure 9.18 is a bar graph of the number of words read on second-grade R-CBM passages at pretest and posttest: the median number of words read correctly rose from 23 to 52, with a median of 5 errors at each administration.]
FIGURE 9.18. Pretest–posttest R-CBM assessment with second-grade passages for Leo.
to identify the word types and spelling patterns (in this case, digraphs) that were problematic. An interesting detail is that, in contrast to the teachers’ impressions, Leo’s phonological processing was above average, indicating that his word reading difficulties were due mainly to incomplete knowledge of spelling–sound correspondences and difficulties decoding unknown words. The results of the assessment helped the evaluators identify intervention strategies that directly targeted areas of need (letter–sound correspondence in learning digraphs, improved decoding, word and text reading practice). The assessment helped indicate skills that were less important to target, such as phonological awareness, given his above-average test performance in that area. The results of the assessment led to the development of a set of brief strategies, each designed to address skill gaps and provide practice reading words and text. The intervention is a good example of a blend of activities that target skill acquisition and practice. Progress was monitored with both a specific subskill mastery measure (which the evaluators created) and a CBM oral reading general outcomes measure. Both measures provided evidence that Leo benefited from the intervention; thus, recommendations consisted of continuing to implement the current strategies and monitor progress.
CASE EXAMPLE OF DIRECT ASSESSMENT WITHIN AN MTSS MODEL

The case presented next illustrates how the data sources from universal screening and progress monitoring are integrated into data-based decision making regarding a student's academic performance. Specifically, this case illustrates how CBA data would be used within an MTSS model. During grade 1, this student performed at benchmark expectations, but she was moved into a Tier 2 intervention at the start of second grade based on her initial fall performance. When assessed midyear, the student had not shown responsiveness to the intervention at Tier 2, so she was moved to an intensive level of intervention at Tier 3, where a specific intervention program focused on her reading skills was developed. The report provided here presents the outcomes of intervention and progress monitoring within the Tier 3 intervention. The key data used in determining the level of the student's response to intervention are presented, and the ways in which universal screening and progress monitoring data, as described in Chapter 8, work together to substantially influence the data-based decisions made by teachers are explored. Finally, the case illustrates the fluidity of the MTSS model and the way in which teams link the outcomes of student performance with the instructional needs of the student.
Case 7: Sabina

Name: Sabina                               Grade: 2
Birth date: May 24, 2001                   Teacher: Mrs. Books
Chronological age: 7 years, 11 months      School: Lake Elementary
Evaluator: Melissa Ruiz7                   Report date: May 2, 2009

7 Many thanks to Melissa Ruiz, graduate student in the School Psychology program at Lehigh University, whose case report was the basis for this example.
Background within the RTI Process

Sabina attends a school district that implements an MTSS framework in reading, consisting of three tiers of increasingly intensive reading support depending on student need. The district uses a standard protocol model, in which assigned intervention strategies consistent with student needs are linked to each tiered intervention. Sabina has been receiving instruction through the MTSS framework since first grade. Results of the universal screening data obtained at midyear and spring of first grade are displayed in Figure 9.19. The data-decision team at the school assigned Sabina to Tier 1, given that her NWF score was well above benchmark and her ORF score was just slightly below benchmark. During the 30-minute, daily assigned tier time, Sabina was placed in a group that received an intervention focused primarily on increasing fluency with connected text, given her performance on the NWF measure, which indicated that she had good decoding skills. In first grade, Sabina received the peer-assisted learning strategies (PALS) intervention, which aims to improve reading performance through the use of partner reading and peer tutoring. She remained in the Tier 1 benchmark group until the end of the school year. Although she made reasonable progress within the intervention provided to all students in Tier 1 (i.e., PALS), she still fell short of the benchmark goal by the end of first grade.
At the fall assessment period of grade 2, Sabina's performance on the DIBELS ORF measure was only 35 WCPM, below the benchmark of 44 WCPM for the start of grade 2. As a result, the core data-decision team assigned Sabina to a Tier 2 small instructional group of six students. The standard protocol used by the team was the Sonday program, which focuses on phonics instruction, including phonemic awareness, letter–sound correspondence, and decoding, delivered daily for 40 minutes. At the midyear benchmark, Sabina had increased her performance to 51 WCPM, but she still remained below the mid-second-grade benchmark of 68 WCPM. Equally important were her progress monitoring data, which had been collected every 2 weeks while she was receiving the Tier 2 intervention. Given her starting score of 36 WCPM, Sabina's target rate of improvement for the year was 1.5 WCPM per week, using a score of 90 WCPM by the end of the year (the benchmark established by DIBELS for the spring of grade 2) as the target: (90 WCPM - 36 WCPM)/36 weeks = 1.5 WCPM/week. An examination of her progress monitoring data showed that she had attained a rate of improvement of 0.63 WCPM per week, indicating that although she was making progress, she was moving at about half the rate expected of typical second-grade students (1.2 WCPM per week = typical ROI) and was falling further behind her peers (see Figure 9.20). As a result, when her progress was reviewed at the winter benchmark of second grade, the team decided to move her to a Tier 3 level of intervention.
                                  Winter                          Spring
Universal Screening Measures      Sabina    DIBELS Benchmark      Sabina    DIBELS Benchmark
DIBELS NWF                        60        50                    63        50
DIBELS ORF                        17        20                    34        40
FIGURE 9.19. Results of universal screening measures for Sabina in winter and spring of grade 1.
FIGURE 9.20. Results of progress monitoring for Tier 2 intervention for Sabina at Tier 2 of grade 2.
Direct assessment of Sabina's reading performance revealed that her instructional level was within first-grade material for oral reading fluency, with second-grade material being her frustrational level. When she came to a word she didn't know, she either attempted to segment it or simply guessed by replacing it with a different word. She was unable to achieve the 80% criterion for reading comprehension at the first-grade or second-grade level. While reading the passages, Sabina frequently skipped over punctuation and made many errors. Her reading was slow and laborious, which is a factor likely limiting her comprehension.
The intervention constructed for Sabina involved increasing her decoding skills (i.e., word recognition) while simultaneously building fluency in reading connected text. The activity consisted of keeping track of words Sabina read incorrectly in a special log, creating flashcards for those words, and reviewing them with Sabina with immediate corrective feedback aimed at improving her performance. The procedure was as follows (an illustrative sketch of the drill logic follows the numbered steps):
1. Passages were sampled from Sabina's second-grade textbooks. During a reading session, Sabina read aloud for approximately 5 minutes while the tutor provided feedback and error correction. Errors were written down in an error word log, and the following sequence of activities was then implemented.
2. Before working with Sabina, all error words from the reading session were written on index cards by the examiner. (If she had misread more than 20 words during the session, just the first 20 words from the error log were used. If she had misread fewer than 20 words, words from past sessions' logs were added to expand the review list to 20 words.)
3. The index cards were reviewed with Sabina. Whenever she pronounced a word correctly, that card was removed from the deck and set aside. (A word was considered correct if it was read correctly within 5 seconds. Self-corrected words were counted as correct if the correction was made within the 5-second period. Words read correctly after the 5-second period were counted as incorrect.)
4. When Sabina missed a word, the examiner modeled sounding the word out and pronouncing it correctly. Sabina was then directed to sound out the word and say it correctly. This was the case even for words that were phonetically irregular; in these cases,
the tutor supported Sabina in adjusting her sounded-out version to the correct pronunciation. The card with the missed word was then placed at the bottom of the deck.
5. Error words in the deck were presented until all were read correctly. All word cards were then gathered together, reshuffled, and presented again to Sabina. The drill continued until either time ran out or she progressed through the deck without an error on two consecutive cards.
6. Sabina read the same passage from the beginning of the session again for 5 minutes, while the tutor provided affirmative feedback and error correction.
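As noted above, the card drill in steps 2 through 5 amounts to a simple loop. The Python sketch below is a hypothetical rendering of that logic, not a script used in the case; read_correctly stands in for the examiner's 5-second correctness rule (with modeling and corrective feedback when a word is missed), and the stopping condition follows one reading of step 5.

```python
import random

def error_word_drill(error_words, read_correctly, time_remaining):
    """Illustrative sketch of the error-word card drill (steps 2-5).

    error_words: up to 20 words from the error log, one per index card.
    read_correctly(word): True if the word is read correctly within 5 seconds.
    time_remaining(): False once the time allotted for the drill has run out.
    """
    deck = list(error_words)          # step 2: the deck of error-word cards
    consecutive_correct = 0
    while deck and time_remaining():
        word = deck.pop(0)            # present the top card
        if read_correctly(word):
            consecutive_correct += 1  # step 3: correct cards are set aside
        else:
            consecutive_correct = 0   # step 4: model, have the student sound it out,
            deck.append(word)         # and place the card at the bottom of the deck
        if not deck:                  # step 5: once every card has been read correctly,
            deck = list(error_words)  # regather and reshuffle all cards and continue
            random.shuffle(deck)
        if consecutive_correct >= 2:  # stop after two consecutive error-free cards
            break                     # (or when time runs out)
```

In practice the two callables would be supplied by the tutor (e.g., a prompt indicating whether the word was read correctly within 5 seconds); they are placeholders here.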
Results

The Tier 3 intervention was implemented for a total of 4 weeks. Data were collected on the number of words read correctly (a goal of 80% correct was set for the 4 weeks). Sabina exceeded her goal, reading 95% of words correctly over that time period. Because she was assigned to Tier 3, progress monitoring data were now collected weekly. Figure 9.21 shows the outcomes of her performance on the DIBELS ORF measure as a function of the Tier 3 intervention. As seen in the figure, Sabina's ROI once the intervention began increased to 2.95 WCPM per week, a rate that was almost double the rate she needed to attain in order to close the gap between herself and her peers. As a result of her response to Tier 3 instruction, the team returned Sabina to Tier 2 instruction during the last 2 months of the school year. When she was assessed during the spring benchmark assessment, she scored 91 WCPM, just above the benchmark expected for the spring of second grade.
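The ROI values reported for Sabina (0.63 WCPM per week under Tier 2 and 2.95 WCPM per week under Tier 3) are slopes through her progress monitoring scores. The report does not say exactly how the slope was computed; the snippet below shows one common choice, an ordinary least-squares slope over equally spaced weeks, with made-up weekly scores for illustration.

```python
def rate_of_improvement(weekly_scores):
    """Ordinary least-squares slope (score gained per week) through scores collected
    once per week; requires at least two scores."""
    n = len(weekly_scores)
    weeks = range(n)
    mean_x = sum(weeks) / n
    mean_y = sum(weekly_scores) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(weeks, weekly_scores))
    den = sum((x - mean_x) ** 2 for x in weeks)
    return num / den

# Hypothetical weekly DIBELS ORF scores during a Tier 3 intervention:
print(round(rate_of_improvement([51, 54, 56, 60, 63]), 2))  # 3.0 WCPM per week
```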
Comments on Sabina's Case

Sabina's case illustrates several important aspects of the use of curriculum-based assessment data within an MTSS context. In Sabina's case, the team decided to assign her to a benchmark group (Tier 1) based on her performance during grade 1. Although she seemed to do well enough to maintain her status throughout the year, the universal screening data at the beginning of grade 2 indicated that she was at some level of risk for
FIGURE 9.21. Results of progress monitoring for Tier 3 intervention for Sabina at Tier 3 of grade 2. Gray circles indicate her score above target, white circles her score at the new target, black circles her score below target.
not succeeding in reading. As a result, the team assigned her to small-group interventions consistent with Tier 2. Because her school used a standard protocol method of MTSS, the Tier 2 intervention consisted of a specific phonics program entitled Sonday (Windsor Learning Systems, 2009). When the progress monitoring data were examined midyear, Sabina did not appear to be as responsive to the instructional intervention as the team expected. As a result, the team intensified the level of intervention to Tier 3, designed a specific intervention focused on improving her decoding skills and reading fluency, and monitored her more frequently. The data showed that the intensified intervention was successful in accelerating her performance to the point that the team chose to return her to the Tier 2 intervention for the remainder of the year. Her end-of-year scores on the universal screening measure suggested that she had achieved the benchmark performance for an end-of-second-grade student. It is possible that the additional intervention helped reinforce her use of letter–sound correspondence for word reading and provided more opportunities for her to read grade-level text with immediate feedback and support. Assuming she maintained this level upon returning to school at the beginning of grade 3, the team intended to move her back to Tier 1, a benchmark group. This case is an excellent illustration of how a combination of universal screening and progress monitoring data can inform the instructional process. Of course, additional data sources needed to be considered by the team in reaching their decision about which type of intervention program best matched Sabina's needs. In this example, Sabina did respond to the intervention and potentially "righted" her trajectory of learning so that it was similar to that of her peers.
SUMMARY AND CONCLUSIONS

Interventions that may be effective in improving academic performance cannot always be predicted on the basis of assessment data alone. Although the evaluation may provide clues about the most promising strategies, and evidence-based approaches should be selected, only the ongoing collection of progress monitoring data indicates what actually works for an individual student. What worked for Jessi may not work for Ace. What works for Ace now may not work as well for him later. These are the realities of academic difficulties, and they make the collection of data to assess progress imperative. The cases in this chapter provide only a brief sampling of possible cases that could have been selected. They represent the performance of actual students and are not hypothetical. The future of effective academic interventions will most likely rely on the ability to convince teachers, psychologists, school administrators, and consultants of the importance of collecting data to evaluate student progress. Being aware of the range of potential interventions to improve academic skills will surely help in planning effective teaching procedures. However, even those less skilled in developing interventions can greatly facilitate the intervention process by understanding how to collect data on student progress. When what was believed to be a good intervention option does not result in improved student achievement, high-quality progress monitoring data are there to indicate that a change is needed. Without the data, instructional decisions are far less timely and are often based on guesswork about the effectiveness of procedures. This text has presented techniques and strategies for assessing academic skills difficulties, as well as potential interventions for solving these complex problems. Reading about them, however, is only the first step. Firsthand experience implementing the
strategies is most valuable for understanding the situations in which they are more and less practical, effective, and viewed favorably by students and teachers. Readers will probably find that the procedures are easy to implement and informative and will result in successful and rewarding delivery of services. In addition, systemwide changes such as MTSS represent wide-scale implementation of many of the key types of assessment and intervention techniques discussed throughout the text. There is strong evidence that the methods described here are making significant differences in the academic lives—and futures—of children.
References
Aaron, P. G., Joshi, R. M., Ayotollah, M., Ellsberry, A., Henderson, J., & Lindsey, K. (1999). Decoding and sight-word naming: Are they independent components of word recognition skill? Reading and Writing, 11(2), 89–127. Abbott, R. D., & Berninger, V. W. (1993). Structural equation modeling of relationships among developmental skills and writing skills in primary and intermediate-grade writers. Journal of Educational Psychology, 85(3), 478–493. Abbott, R. D., Berninger, V. W., & Fayol, M. (2010). Longitudinal relationships of levels of language in writing and between writing and reading in grades 1 to 7. Journal of Educational Psychology, 102(2), 281–300. Adams, G. L., & Engelmann, S. (1996). Research on direct instruction: 25 years beyond DISTAR. Seattle, WA: Educational Achievement Systems. Adams, M. J. (1994). Beginning to read: Thinking and learning about print. Cambridge, MA: MIT Press. Adams, M. J., Foorman, B. R., Lundberg, I., & Beeler, T. (1998). The elusive phoneme: Why phonemic awareness is so important and how to help children develop it. American Educator, 22(1–2), 18–29. Agrawal, J., & Morin, L. L. (2016). Evidence- based practices: Applications of concrete representational abstract framework across
math concepts for students with mathematics disabilities. Learning Disabilities Research and Practice, 31(1), 34–44. Ahmed, Y., Francis, D. J., York, M., Fletcher, J. M., Barnes, M., & Kulesz, P. (2016). Validation of the direct and inferential mediation (DIME) model of reading comprehension in grades 7 through 12. Contemporary Educational Psychology, 44, 68–82. Ahmed, Y., & Wagner, R. K. (2020). A “simple” illustration of a joint model of reading and writing using meta-analytic structural equation modeling (MASEM). In R. A. Alves, T. Limpo, & R. Malatesha Joshi (Eds.), Reading- w riting connections (pp. 55–75). New York: Springer. Alessi, G., & Kaye, J. H. (1983). Behavior assessment for school psychologists. Kent, OH: National Association of School Psychologists. Allen, J., Gregory, A., Mikami, A., Lun, J., Hamre, B., & Pianta, R. (2013). Observations of effective teacher–student interactions in secondary school classrooms: Predicting student achievement with the classroom assessment scoring system— secondary. School Psychology Review, 42(1), 76–98. Al Otaiba, S., & Fuchs, D. (2002). Characteristics of children who are unresponsive to early literacy intervention: A review of the literature. Remedial and Special Education, 23(5), 300–316.
Allen, M. S., Walter, E. E., & Swann, C. (2019). Sedentary behaviour and risk of anxiety: A systematic review and meta-analysis. Journal of Affective Disorders, 242, 5–13. Allsopp, D. H. (1997). Using classwide peer tutoring to teach beginning algebra problem- solving skills in heterogeneous classrooms. Remedial and Special Education, 18, 367– 379. Alonzo, J., & Anderson, D. (2018). Supplementary report on easyCBM MCRC measures: A follow- up to previous technical reports (Technical Report No. 1807). Eugene: University of Oregon, Behavioral Research & Teaching. Altani, A., Protopapas, A., Katopodi, K., & Georgiou, G. K. (2020). From individual word recognition to word list and text reading fluency. Journal of Educational Psychology, 112(1), 22–39. Amato, J. M., & Watkins, M. W. (2011). The predictive validity of CBM writing indices for eighth-grade students. Journal of Special Education, 44(4), 195–204. American Academy of Pediatrics. (2011). Joint technical report—learning disabilities, dyslexia, and vision. Pediatrics, 127(3), e818– e856. Anderson, D., Alonzo, J., Tindal, G., Farley, D., Irvin, P. S., Lai, C. F., . . . Wray, K. A. (2014). Technical manual: Easy CBM (Technical Report No. 1408). Eugene: University of Oregon, Behavioral Research & Teaching. Anderson, P. J., Lee, K. J., Roberts, G., Spencer- Smith, M. M., Thompson, D. K., Seal, M. L., . . . Pascoe, L. (2018). Long-term academic functioning following Cogmed working memory training for children born extremely preterm: A randomized controlled trial. Journal of Pediatrics, 202, 92–97. Andersson, U. (2008). Mathematical competencies in children with different types of learning difficulties. Journal of Educational Psychology, 100, 48–66. Andreassen, R., & Bråten, I. (2010). Examining the prediction of reading comprehension on different multiple-choice tests. Journal of Research in Reading, 33(3), 263–283. Anthony, J. L., & Lonigan, C. J. (2004). The nature of phonological awareness: Converging evidence from four studies of preschool and early grade school children. Journal of Educational Psychology, 96(1), 43–55
Apfelbaum, K. S., Hazeltine, E., & McMurray, B. (2013). Statistical learning in reading: Variability in irrelevant letters helps children learn phonics skills. Developmental Psychology, 49(7), 1348–1361. Araújo, S., Reis, A., Petersson, K. M., & Faísca, L. (2015). Rapid automatized naming and reading performance: A meta- analysis. Journal of Educational Psychology, 107(3), 868–888. Arblaster, G. R., Butler, A. L., Taylor, C. A., Arnold, C., & Pitchford, M. (1991). Sameage tutoring, mastery learning and the mixed ability teaching of reading. School Psychology International, 12, 111–118. Archer, A. L., & Hughes, C. A. (2011). Explicit instruction: Effective and efficient teaching (what works for special-needs learners). Journal of Special Education, 36(4), 186–205. Arciuli, J. (2018). Reading as statistical learning. Language, Speech, and Hearing Services in Schools, 49(3S), 634–643. Ardoin, S. P., Binder, K. S., Foster, T. E., & Zawoyski, A. M. (2016). Repeated versus wide reading: A randomized control design study examining the impact of fluency interventions on underlying reading behavior. Journal of School Psychology, 59, 13–38. Ardoin, S. P., Binder, K. S., Zawoyski, A. M., Foster, T. E., & Blevins, L. A. (2013). Using eye-tracking procedures to evaluate generalization effects: Practicing target words during repeated readings within versus across texts. School Psychology Review, 42, 477– 495. Ardoin, S. P., & Christ, T. J. (2009). Curriculum- based measurement of oral reading: Standard errors associated with progress monitoring outcomes from DIBELS, AIMSweb, and an experimental passage set. School Psychology Review, 38(2), 266–283. Ardoin, S. P., Christ, T. J., Morena, L. S., Cormier, D. C., & Klingbeil, D. A. (2013). A systematic review and summarization of the recommendations and research surrounding curriculum-based measurement of oral reading fluency (CBM-R) decision rules. Journal of School Psychology, 51(1), 1–18. Ardoin, S. P., Suldo, S. M., Witt, J., Aldrich, S., & McDonald, E. (2005). Accuracy of reliability estimates’ predictions of CBM performance. School Psychology Quarterly, 20, 1–22.
References 477 Ardoin, S. P., Witt, J. C., Suldo, S. M., Connell, J. E., Koenig, J. L., Resetar, J. L., . . . Williams, K. L. (2004). Examining the incremental benefits of administering a maze and three versus one curriculum-based measurement reading probes when conducting universal screening. School Psychology Review, 33(2), 218–233. Arsalidou, M., Pawliw- L evac, M., Sadeghi, M., & Pascual-L eone, J. (2018). Brain areas associated with numbers and calculations in children: Meta-analyses of fMRI studies. Developmental Cognitive Neuroscience, 30, 239–250. Arter, J. A., & Jenkins, J. R. (1979). Differential diagnosis– prescriptive teaching: A critical appraisal. Review of Educational Research, 49, 517–555. Austin, C. R., Vaughn, S. R., Clemens, N. H., Pustejovsky, J. E., & Boucher, A. N. (2022). The relative effects of integrating word reading and word meaning instruction compared to word read instruction alone on the accuracy, fluency, and word meaning knowledge of 4th–5th grade students with dyslexia. Scientific Studies of Reading, 26(3), 204–222. Axelrod, S., & Greer, R. D. (1994). Cooperative learning revisited. Journal of Behavioral Education, 4, 41–48. Babyak, A. E., Koorland, M., & Mathes, P. G. (2000). The effects of story mapping instruction on the reading comprehension of students with behavioral disorders. Behavioral Disorders, 25, 239–258. Badami, R., Mahmoudi, S., & Baluch, B. (2016). Effect of sports vision exercise on visual perception and reading performance in 7- to 10-year-old developmental dyslexic children. Journal of Exercise Rehabilitation, 12(6), 604–609. Baker, D. L., Biancarosa, G., Park, B. J., Bousselot, T., Smith, J. L., Baker, S. K., . . . Tindal, G. (2015). Validity of CBM measures of oral reading fluency and reading comprehension on high-stakes reading assessments in grades 7 and 8. Reading and Writing, 28(1), 57–104. Balu, R., Zhu, P., Doolittle, F., Schiller, E., Jenkins, J., & Gersten, R. (2015). Evaluation of response to intervention practices for elementary school reading (NCEE 2016-4000). Washington, DC: National Center for Education Evaluation and Regional Assistance.
Barnes, M. A., Dennis, M., & Haefele- Kalvaitis, J. (1996). The effects of knowledge availability and knowledge accessibility on coherence and elaborative inferencing in children from six to fifteen years of age. Journal of Experimental Child Psychology, 61(3), 216–241. Barnett, D. (2005). Keystone behaviors. In S. W. Lee (Ed.), Encyclopedia of school psychology (pp. 279–280). Los Angeles: SAGE. Barquero, L. A., Davis, N., & Cutting, L. E. (2014). Neuroimaging of reading intervention: A systematic review and activation likelihood estimate meta-analysis. PLOS ONE, 9(1), e83668. Barrish, H. H., Saunders, M., & Wolf, M. M. (1969). Good behavior game: Effects of individual contingencies for group consequences on disruptive behavior in a classroom. Journal of Applied Behavior Analysis, 2(2), 119– 124. Barth, A. E., Denton, C. A., Stuebing, K. K., Fletcher, J. M., Cirino, P. T., Francis, D. J., . . . Vaughn, S. (2010). A test of the cerebellar hypothesis of dyslexia in adequate and inadequate responders to reading intervention. Journal of the International Neuropsychological Society, 16(3), 526–540. Baumann, J. F., & Bergeron, B. S. (1993). Story map instruction using children’s literature: Effects on first graders’ comprehension of central narrative elements. Journal of Reading Behavior, 25, 407–437. Beck, I. L., & Beck, M. E. (2013). Making sense of phonics: The hows and whys (2nd ed.). New York: Guilford Press. Beck, I., & Hamilton, R. (2000). Beginning reading module. Washington DC: American Federation of Teachers. Beck, I. L., McKeown, M. G., & Kucan, L. (2013). Bringing words to life: Robust vocabulary instruction. New York: Guilford Press. Becker, W. C., & Carnine, D. W. (1981). Direct instruction: A behavior theory model for comprehensive educational intervention with the disadvantaged. In S. W. Bijou & R. Ruiz (Eds.), Behavior modification: Contributions to education (pp. 145–210). Hillsdale, NJ: Erlbaum. Beilock, S. L., Gunderson, E. A., Ramirez, G., & Levine, S. C. (2010). Female teachers’ math anxiety affects girls’ math achieve-
ment. Proceedings of the National Academy of Sciences of the USA, 107(5), 1860–1863. Beirne-Smith, M. (1991). Peer tutoring in arithmetic for children with learning disabilities. Exceptional Children, 57, 330–337. Bell, P. F., Lentz, F. E., & Graden, J. L. (1992). Effects of curriculum–test overlap on standardized test scores: Identifying systematic confounds in educational decision making. School Psychology Review, 21, 644–655. Benson, N. F., Floyd, R. G., Kranzler, J. H., Eckert, T. L., Fefer, S. A., & Morgan, G. B. (2019). Test use and assessment practices of school psychologists in the United States: Findings from the 2017 National Survey. Journal of School Psychology, 72, 29–48. Bergan, J. R. (1977). Behavioral consultation. Columbus, OH: Merrill. Bergan, J. R., & Kratochwill, T. R. (1990). Behavioral consultation and therapy. New York: Plenum Press. Berkeley, S., Scruggs, T. E., & Mastropieri, M. A. (2010). Reading comprehension instruction for students with learning disabilities, 1995–2006: A meta-analysis. Remedial and Special Education, 31(6), 423–436. Berliner, D. C. (1979). Tempus educare. In P. L. Peterson & H. J. Walberg (Eds.), Research on teaching (pp. 120–135). Berkley, CA: McCutchan. Berliner, D. C. (1988). Effective classroom management and instruction: A knowledge base for consultation. In J. L. Graden, J. E. Zins, & M. J. Curtis (Eds.), Alternate educational delivery systems: Enhancing instructional options for all students (pp. 309–326). Washington, DC: National Association of School Psychologists. Berninger, V. W., Abbott, R. D., Jones, J., Wolf, B. J., Gould, L., Anderson-Youngstrom, M., . . . Apel, K. (2006). Early development of language by hand: Composing, reading, listening, and speaking connections; three letter- writing modes; and fast mapping in spelling. Developmental Neuropsychology, 29(1), 61–92. Berninger, V. W., & Amtmann, D. (2003). Preventing written expression disabilities through early and continuing assessment and intervention for handwriting and/or spelling problems: Research into practice. In H. L. Swanson, K. R. Harris, & S. Graham (Eds.), Handbook of learning disabilities (pp. 345– 363). New York: Guilford Press.
Berninger, V. W., Nielsen, K. H., Abbott, R. D., Wijsman, E., & Raskind, W. (2008). Gender differences in severity of writing and reading disabilities. Journal of School Psychology, 46(2), 151–172. Best, R. M., Floyd, R. G., & Mcnamara, D. S. (2008). Differential competencies contributing to children’s comprehension of narrative and expository texts. Reading Psychology, 29(2), 137–164. Betts, E. A. (1946). Foundations of reading instruction. New York: American Book. Betts, J., Pickart, M., & Heistad, D. (2009). An investigation of the psychometric evidence of CBM-R passage equivalence: Utility of readability statistics and equating for alternate forms. Journal of School Psychology, 47, 1–17. Beyers, S. J., Lembke, E. S., & Curs, B. (2013). Social studies progress monitoring and intervention for middle school students. Assessment for Effective Intervention, 38(4), 224– 235. Bierman, K. L., Nix, R. L., Greenberg, M. T., Blair, C., & Domitrovich, C. E. (2008). Executive functions and school readiness intervention: Impact, moderation, and mediation in the Head Start REDI program. Development and Psychopathology, 20(3), 821–834. Billingsley, B. S., & Ferro- A lmeida, S. C. (1993). Strategies to facilitate reading comprehension in students with learning disabilities. Reading and Writing Quarterly: Overcoming Learning Difficulties, 9, 263–278. Birch, S. H., & Ladd, G. W. (1998). Children’s interpersonal behaviors and the teacher– child relationship. Developmental Psychology, 34(5), 934–946. Birnbaum, M. S., Kornell, N., Bjork, E. L., & Bjork, R. A. (2013). Why interleaving enhances inductive learning: The roles of discrimination and retrieval. Memory and Cognition, 41(3), 392–402. Blachman, B. A., Ball, E. W., Black, R. S., & Tangel, D. M. (1994). Kindergarten teachers develop phoneme awareness in low-income, inner-city classrooms. Reading and Writing, 6(1), 1–18. Blachman, B. A., Ball, E. W., Black, R., & Tangei, D. M. (2000). Road to the code. Baltimore, MD: Brookes. Blachowicz, C., & Ogle, D. (2001). Reading comprehension: Strategies for independent learners. New York: Guilford Press.
References 479 Blair, C., & Razza, R. P. (2007). Relating effortful control, executive function, and false belief understanding to emerging math and literacy ability in kindergarten. Child Development, 78(2), 647–663. Blandford, B. J., & Lloyd, J. W. (1987). Effects of a self- instructional procedure on handwriting. Journal of Learning Disabilities, 20, 342–346. Blankenship, C. S. (1985). Using curriculum- based assessment data to make instructional management decisions. Exceptional Children, 42, 233–238. Blankenship, T. L., Slough, M. A., Calkins, S. D., Deater-Deckard, K., Kim-Spoon, J., & Bell, M. A. (2019). Attention and executive functioning in infancy: Links to childhood executive function and reading achievement. Developmental Science, 22(6), e12824. Boardman, A. G., Vaughn, S., Buckley, P., Reutebuch, C., Roberts, G., & Klingner, J. (2016). Collaborative strategic reading for students with learning disabilities in upper elementary classrooms. Exceptional Children, 82(4), 409–427. Boë, L. J., Sawallis, T. R., Fagot, J., Badin, P., Barbier, G., Captier, G., . . . Schwartz, J. L. (2019). Which way to the dawn of speech? Reanalyzing half a century of debates and data in light of speech science. Science Advances, 5(12), 1–23. Bollman, K. A., Silberglitt, B., & Gibbons, K. A. (2007). The St. Croix River Education District model: Incorporating systems- level organization and a multi- tiered problem- solving process for intervention delivery. In S. R. Jimerson, M. K. Burns, & A. M. VanDerHeyden (Eds.), Handbook of response to intervention: The science and practice of assessment and intervention (pp. 319–330). New York: Springer. Booth, J. L., Newton, K. J., & Twiss-Garrity, L. K. (2014). The impact of fraction magnitude knowledge on algebra performance and learning. Journal of Experimental Child Psychology, 118, 110–118. Bornstein, P. H., & Quevillon, R. P. (1976). The effects of a self- instructional package on overactive preschool boys. Journal of Applied Behavior Analysis, 9, 179–188. Borsuk, E. R. (2010). Examination of an administrator-read vocabulary-matching measure as an indicator of science achievement. Assessment for Effective Intervention, 35(3), 168–177.
Boulineau, T., Fore, C., III, Hagan-Burke, S., & Burke, M. D. (2004). Use of story-mapping to increase the story–grammar text comprehension of elementary students with learning disabilities. Learning Disability Quarterly, 27, 105–121. Bowers, P. N., Kirby, J. R., & Deacon, S. H. (2010). The effects of morphological instruction on literacy skills: A systematic review of the literature. Review of Educational Research, 80(2), 144–179. Bowman-Perrott, L., Burke, M. D., Zaini, S., Zhang, N., & Vannest, K. (2016). Promoting positive behavior using the Good Behavior Game: A meta- analysis of single- case research. Journal of Positive Behavior Interventions, 18(3), 180–190. Brady, S. (2020). A 2020 perspective on research findings on alphabetics (phoneme awareness and phonics): Implications for instruction (expanded version). The Reading League Journal, 1(3), 20–28. Bramlett, R. K., Murphy, J. J., Johnson, J., & Wallingsford, L. (2002). Contemporary practices in school psychology: A national survey of roles and referral problems. Psychology in the Schools, 39, 327–335. Brasseur-Hock, I. F., Hock, M. F., Kieffer, M. J., Biancarosa, G., & Deshler, D. D. (2011). Adolescent struggling readers in urban schools: Results of a latent class analysis. Learning and Individual Differences, 21(4), 438–452. Breaux, K. C. (2020). Wechsler Individual Achievement Test (4th ed.): Technical and interpretive manual. Bloomington, MI: NCS Pearson. Briesch, A. M., & Chafouleas, S. M. (2009). Review and analysis of literature on self- management interventions to promote appropriate classroom behaviors (1988– 2008). School Psychology Quarterly, 24(2), 106–118. Brigance, A. (2013a). Brigance Inventory of Early Development— III. North Billerica, MA: Curriculum Associates. Brigance, A. (2013b). Brigance Comprehensive Inventory of Basic Skills—III. North Billerica, MA: Curriculum Associates. Brigance, A. (2013c). Brigance Transition Skills Inventory. North Billerica, MA: Curriculum Associates. Brophy, J. E., & Good, T. L. (1986). Teacher behavior and student achievement. In M. C.
issues. Learning Disabilities Research and Practice, 18, 172–186. Fuchs, L. S. (2004). The past, present, and future of curriculum- based measurement research. School Psychology Review, 33(2), 188–192. Fuchs, L. S., & Deno, S. L. (1991). Paradigmatic distinctions between instructionally relevant measurement models. Exceptional Children, 57, 488–500. Fuchs, L. S., & Deno, S. L. (1994). Must instructionally useful performance assessment be based in the curriculum? Exceptional Children, 61, 15–24. Fuchs, L. S., Deno, S. L., & Mirkin, P. K. (1984). The effects of frequent curriculum- based measures and evaluation on pedagogy, student achievement, and student awareness of learning. American Educational Research Journal, 21, 449–460. Fuchs, L. S., & Fuchs, D. (1986a). Curriculum- based assessment of progress toward longterm and short-term goals. Journal of Special Education, 20, 69–82. Fuchs, L. S., & Fuchs, D. (1986b). Effects of systematic formative evaluation: A meta- analysis. Exceptional Children, 53, 199– 208. Fuchs, L. S., & Fuchs, D. (1992). Identifying a measure for monitoring student reading progress. School Psychology Review, 21, 45–58. Fuchs, L. S., & Fuchs, D. (1998). Treatment validity: A unifying concept for reconceptualizing the identification of learning disabilities. Learning Disabilities Research and Practice, 13, 204–219. Fuchs, L. S., & Fuchs, D. (1999). Monitoring student progress toward the development of reading competence: A review of three forms of classroom-based assessment. School Psychology Review, 28(4), 659–671. Fuchs, L. S., & Fuchs, D. (2004). Determining adequate yearly progress from kindergarten through grade 6 with curriculum-based measurement. Assessment for Effective Intervention, 29(4), 25–37. Fuchs, L. S., Fuchs, D., & Compton, D. L. (2004). Monitoring early reading development in first grade: Word identification fluency versus nonsense word fluency. Exceptional Children, 71(1), 7–21.
Fuchs, L. S., Fuchs, D., Compton, D. L., Hamlett, C. L., & Wang, A. Y. (2015). Is word- problem solving a form of text comprehension? Scientific Studies of Reading, 19(3), 204–223. Fuchs, L. S., Fuchs, D., Compton, D. L., Powell, S. R., Seethaler, P. M., Capizzi, A. M., . . . Fletcher, J. M. (2006). The cognitive correlates of third-grade skill in arithmetic, algorithmic computation, and arithmetic word problems. Journal of Educational Psychology, 98(1), 29–43. Fuchs, L. S., Fuchs, D., & Deno, S. L. (1985). Importance of goal ambitiousness and goal mastery to student achievement. Exceptional Children, 52(1), 63–71. Fuchs, L. S., Fuchs, D., & Hamlett, C. L. (1989). Effects of alternative goal structures within curriculum- based measurement. Exceptional Children, 55(5), 429–438. Fuchs, L. S., Fuchs, D., Hamlett, C. L., & Allinder, R. M. (1991). The contribution of skills analysis within curriculum-based measurement in spelling. Exceptional Children, 57, 443–452. Fuchs, L. S., Fuchs, D., Hamlett, C. L., Powell, S. R., Capizzi, A. M., & Seethaler, P. M. (2006). The effects of computer-assisted instruction on number combination skill in at-risk first graders. Journal of Learning Disabilities, 39(5), 467–475. Fuchs, L. S., Fuchs, D., Hamlett, C. L., Walz, L., & Germann, G. (1993). Formative evaluation of academic progress: How much growth can we expect? School Psychology Review, 22, 27–48. Fuchs, L. S., Fuchs, D., Hosp, M. K., & Jenkins, J. R. (2001). Oral reading fluency as an indicator of reading competence: A theoretical, empirical, and historical analysis. Scientific Studies of Reading, 5, 239–256. Fuchs, L. S., Fuchs, D., & Karns, K. (2001). Enhancing kindergartner’s mathematical development: Effects of peer-assisted learning strategies. Elementary School Journal, 101, 495–510. Fuchs, L. S., Fuchs, D., & Kazdan, S. (1999). Effects of peer- assisted learning strategies on high school students with serious reading problems. Remedial and Special Education, 20, 309–318. Fuchs, L. S., Fuchs, D., Phillips, N. B., Ham-
lett, C. L., & Karns, K. (1995). Acquisition and transfer effects of classwide peer-assisted learning strategies in mathematics for students with varying learning histories. School Psychology Review, 24, 604–630. Fuchs, L. S., Fuchs, D., Powell, S. R., Seethaler, P. M., Cirino, P. T., & Fletcher, J. M. (2008). Intensive intervention for students with mathematics disabilities: Seven principles of effective practice. Learning Disability Quarterly, 31(2), 79–92. Fuchs, L. S., Fuchs, D., Prentice, K., Burch, M., Hamlett, C. L., Owen, R., & Schroeter, K. (2003). Enhancing third-grade student mathematical problem solving with self-regulated learning strategies. Journal of Educational Psychology, 95, 306–315. Fuchs, L. S., Fuchs, D., & Maxwell, L. (1988). The validity of informal reading comprehension measures. Remedial and Special Education, 9(2), 20–28. Fuchs, L. S., Fuchs, D., Seethaler, P. M., Cutting, L. E., & Mancilla-Martinez, J. (2019). Connections between reading comprehension and word-problem solving via oral language comprehension: Implications for comorbid learning disabilities. New Directions for Child and Adolescent Development, 2019(165), 73–90. Fuchs, L. S., Fuchs, D., Yazdian, L., & Powell, S. R. (2002). Enhancing first-grade children’s mathematical development with peer-assisted learning strategies. School Psychology Review, 31, 569–583. Fuchs, L. S., Gilbert, J. K., Fuchs, D., Seethaler, P. M., & Martin, B. N. (2018). Text comprehension and oral language as predictors of word-problem solving: Insights into word-problem solving as a form of text comprehension. Scientific Studies of Reading, 22(2), 152–166. Fuchs, L. S., Hamlett, C. L., & Fuchs, D. (1998). MBSP: Monitoring basic skills progress: Basic math concepts and applications: Basic math kit (2nd ed.). Austin, TX: PRO-ED. Fuchs, L. S., Malone, A. S., Schumacher, R. F., Namkung, J., Hamlett, C. L., Jordan, N. C., . . . Changas, P. (2016). Supported self-explaining during fraction intervention. Journal of Educational Psychology, 108(4), 493–508.
Fuchs, L. S., Malone, A. S., Schumacher, R. F., Namkung, J., & Wang, A. (2017). Fraction intervention for students with mathematics difficulties: Lessons learned from five randomized controlled trials. Journal of Learning Disabilities, 50(6), 631–639. Fuchs, L. S., Powell, S. R., Cirino, P. T., Schumacher, R. F., Marrin, S., Hamlett, C. L., . . . Changas, P. C. (2014). Does calculation or word-problem instruction provide a stronger route to prealgebraic knowledge? Journal of Educational Psychology, 106(4), 990–1006. Fuchs, L. S., Powell, S. R., & Fuchs, D. (2017). Math Wise. Retrieved from https://frg.vkcsites.org/what-are-interventions/math_intervention_manuals. Fuchs, L. S., Powell, S. R., Seethaler, P. M., Cirino, P. T., Fletcher, J. M., Fuchs, D., & Hamlett, C. L. (2010). The effects of strategic counting instruction, with and without deliberate practice, on number combination skill among students with mathematics difficulties. Learning and Individual Differences, 20(2), 89–100. Fuchs, L. S., Powell, S. R., Seethaler, P. M., Cirino, P. T., Fletcher, J. M., Fuchs, D., . . . Zumeta, R. O. (2009). Remediating number combination and word problem deficits among students with mathematics difficulties: A randomized control trial. Journal of Educational Psychology, 101(3), 561–579. Fuchs, L. S., Schumacher, R., Malone, A., & Fuchs, D. (2015). Fraction Face-Off! Retrieved from https://frg.vkcsites.org/what-are-interventions/math_intervention_manuals. Furey, W. M., Marcotte, A. M., Hintze, J. M., & Shackett, C. M. (2016). Concurrent validity and classification accuracy of curriculum-based measurement for written expression. School Psychology Quarterly, 31(3), 369–381. Gansle, K. A., Noell, G. H., & Freeland, J. T. (2002). Can’t Jane read or won’t Jane read? An analysis of pre-reading skills designed to differentiate skill deficits from performance deficits. Behavior Analyst Today, 3(2), 161–165. Gansle, K. A., Noell, G. H., VanDerHeyden, A. M., Naquin, G. M., & Slider, N. J. (2002). Moving beyond total words written: The reliability, criterion validity, and time cost
of alternate measures of curriculum- based measurement in writing. School Psychology Review, 31, 477–497. Gansle, K. A., VanDerHeyden, A. M., Noell, G. H., Resetar, J. L., & Williams, K. L. (2006). The technical adequacy of curriculum-based and rating-based measures of written expression for elementary school students. School Psychology Review, 35(3), 435–450. García, J. R., & Cain, K. (2014). Decoding and reading comprehension: A meta-analysis to identify which reader and assessment characteristics influence the strength of the relationship in English. Review of Educational Research, 84(1), 74–111. Gardill, M. C., & Jitendra, A. K. (1999). Advanced story map instruction: Effects on the reading comprehension of students with learning disabilities. Journal of Special Education, 33, 2–17. Gardner, D., & Davies, M. (2014). A new academic vocabulary list. Applied linguistics, 35(3), 305–327. Gardner, W. I., & Cole, C. L. (1988). Self- monitoring procedures. In E. S. Shapiro & T. R. Kratochwill (Eds.), Behavioral assessment in schools: Conceptual foundations and practical applications (pp. 206–246). New York: Guilford Press. Gaskins, I. W., Downer, M. A., & Gaskins, R. W. (1986). Introduction to the Benchmark School Word Identification/Vocabulary Development Program. Media, PA: Benchmark School. Geary, D. C. (1993). Mathematical disabilities: Cognitive, neuropsychological, and genetic components. Psychological Bulletin, 114(2), 345–362. Geary, D. C. (2011). Cognitive predictors of achievement growth in mathematics: A 5-year longitudinal study. Developmental Psychology, 47(6), 1539. Geary, D. C., Hoard, M. K., & Bailey, D. H. (2012). Fact retrieval deficits in low achieving children and children with mathematical learning disability. Journal of Learning Disability, 45, 291–307. Geary, D. C., Hoard, M. K., Byrd-Craven, J., Nugent, L., & Numtee, C. (2007). Cognitive mechanisms underlying achievement deficits in children with mathematical learning disability. Child Development, 78(4), 1343– 1359.
Gernsbacher, M. A., & Faust, M. E. (1991). The mechanism of suppression: A component of general comprehension skill. Journal of Experimental Psychology: Learning, Memory, and Cognition, 17(2), 245. Gersten, R., Beckmann, S., Clarke, B., Foegen, A., Marsh, L., Star, J. R., & Witzel, B. (2009). Assisting students struggling with mathematics: Response to intervention (RtI) for elementary and middle schools (NCEE 2009-4060). Washington, DC: National Center for Education Evaluation and Regional Assistance, Institute of Education Sciences, U.S. Department of Education. Retrieved from http://ies.ed.gov/ncee/wwc/publications/practiceguides. Gersten, R., Chard, D. J., Jayanthi, M., Baker, S. K., Morphy, P., & Flojo, J. (2009). Mathematics instruction for students with learning disabilities: A meta-analysis of instructional components. Review of Educational Research, 79(3), 1202–1242. Gersten, R., Clarke, B., Jordan, N. C., Newman-Gonchar, R., Haymond, K., & Wilkins, C. (2012). Universal screening in mathematics for the primary grades: Beginnings of a research base. Exceptional Children, 78(4), 423–445. Gersten, R., Compton, D., Connor, C. M., Dimino, J., Santoro, L., Linan-Thompson, S., & Tilly, W. D. (2009). Assisting students struggling with reading: Response to intervention and multi-tier intervention for reading in the primary grades. A practice guide (NCEE 2009-4045). Washington, DC: National Center for Education Evaluation and Regional Assistance, Institute of Education Sciences, U.S. Department of Education. Retrieved from http://ies.ed.gov/ncee/wwc/publications/practiceguides. Gersten, R., Haymond, K., Newman-Gonchar, R., Dimino, J., & Jayanthi, M. (2020). Meta-analysis of the impact of reading interventions for students in the primary grades. Journal of Research on Educational Effectiveness, 13(2), 401–427. Gersten, R., Schumacher, R. F., & Jordan, N. C. (2017). Life on the number line: Routes to understanding fraction magnitude for students with difficulties learning mathematics. Journal of Learning Disabilities, 50(6), 655–657. Gickling, E. E., & Armstrong, D. L. (1978).
Levels of instructional difficulty as related to on-task behavior, task completion, and comprehension. Journal of Learning Disabilities, 11(9), 559–566. Gickling, E. E., & Havertape, J. (1981). Curriculum-based assessment. Minneapolis, MN: National School Psychology Inservice Training Network. Gickling, E. E., & Rosenfield, S. (1995). Best practices in curriculum-based assessment. In A. Thomas & J. Grimes (Eds.), Best practices in school psychology (Vol. 3, pp. 587–595). Washington, DC: National Association of School Psychologists. Gickling, E. E., & Thompson, V. P. (1985). A personal view of curriculum-based assessment. Exceptional Children, 52, 205–218. Gilberts, G. H., Agran, M., Hughes, C., & Wehmeyer, M. (2001). The effects of peer delivered self-monitoring strategies on the participation of students with severe disabilities in general education classrooms. Journal of the Association for Persons with Severe Handicaps, 26, 25–36. Gillespie, A., & Graham, S. (2014). A meta-analysis of writing interventions for students with learning disabilities. Exceptional Children, 80(4), 454–473. Gillies, R. M. (2007). Cooperative learning: Integrating theory and practice. Thousand Oaks, CA: SAGE. Goffreda, C. T., & Clyde DiPerna, J. (2010). An empirical review of psychometric evidence for the Dynamic Indicators of Basic Early Literacy Skills. School Psychology Review, 39(3), 463–483. Goh, D. S., Teslow, C. J., & Fuller, G. B. (1981). The practices of psychological assessment among school psychologists. Professional Psychology, 12, 699–706. Gonzalez-Frey, S. M., & Ehri, L. C. (2020). Connected phonation is more effective than segmented phonation for teaching beginning readers to decode unfamiliar words. Scientific Studies of Reading, 1–14. Good, R. H., III, & Salvia, J. (1988). Curriculum bias in published, norm-referenced reading tests: Demonstrable effects. School Psychology Review, 17, 51–60. Good, R. H., III, Vollmer, M., Creek, R. J., Katz, L., & Chowdhri, S. (1993). Treatment utility of the Kaufman Assessment Battery for Children: Effects of matching instruction
and student processing strength. School Psychology Review, 22, 8–26. Goodman, L. (1990). Time and learning in the special education classroom. Albany: State University of New York Press. Goodwin, A. P., & Ahn, S. (2013). A meta- analysis of morphological interventions in English: Effects on literacy outcomes for school-age children. Scientific Studies of Reading, 17(4), 257–285. Gough, P. B., & Tunmer, W. E. (1986). Decoding, reading, and reading disability. Remedial and Special Education, 7(1), 6–10. Graden, J. L., Casey, A., & Bonstrom, O. (1985). Implementing a prereferral intervention system: Part II. The data. Exceptional Children, 51, 487–496. Graden, J. L., Casey, A., & Christensen, S. L. (1985). Implementing a prereferral intervention system: Part I. The model. Exceptional Children, 51, 377–384. Graham, L., & Wong, B. Y. (1993). Comparing two modes of teaching a question-answering strategy for enhancing reading comprehension: Didactic and self- instructional training. Journal of Learning Disabilities, 26, 270–279. Graham, S. (1982). Composition research and practice: A unified approach. Focus on Exceptional Children, 14(8), 1–16. Graham, S., Collins, A. A., & Rigby-Wills, H. (2017). Writing characteristics of students with learning disabilities and typically achieving peers: A meta- analysis. Exceptional Children, 83(2), 199–218. Graham, S., & Harris, K. R. (1987). Improving composition skills of inefficient learners with self-instructional strategy training. Topics in Language Disorders, 7(4), 66–77. Graham, S., & Harris, K. R. (1989). A components analysis of cognitive strategy instruction: Effects on learning disabled students’ compositions and self- efficacy. Journal of Educational Psychology, 81, 353–361. Graham, S., & Harris, K. R. (2003). Students with learning disabilities and the process of writing: A meta- analysis of SRSD studies. In H. Swanson, K. R. Harris, & S. Graham (Eds.), Handbook of learning disabilities (pp. 323–344). New York: Guilford Press. Graham, S., & Harris, K. R. (2006). Preventing writing difficulties: Providing additional handwriting and spelling instruction to at-
risk children in first grade. Teaching Exceptional Children, 38(5), 64–66. Graham, S., & Harris, K. R. (2009). Almost 30 years of writing research: Making sense of it all with The Wrath of Khan. Learning Disabilities Research and Practice, 24, 58–68. Graham, S., Harris, K. R., & Adkins, M. (2018). The impact of supplemental handwriting and spelling instruction with first grade students who do not acquire transcription skills as rapidly as peers: A randomized control trial. Reading and Writing, 31(6), 1273–1294. Graham, S., Harris, K. R., & Chorzempa, B. F. (2002). Contribution of spelling instruction to the spelling, writing, and reading of poor spellers. Journal of Educational Psychology, 94(4), 669–685. Graham, S., Harris, K. R., & Fink, B. (2000). Is handwriting causally related to learning to write? Treatment of handwriting problems in beginning writers. Journal of Educational Psychology, 92(4), 620. Graham, S., Harris, K. R., MacArthur, C. A., & Schwartz, S. (1991). Writing and writing instruction for students with learning disabilities: Review of a research program. Learning Disabilities Quarterly, 14, 89–114. Graham, S., Harris, K. R., & Troia, G. A. (2000). Self-regulated strategy development revisited: Teaching writing strategies to struggling writers. Topic in Language Disorders, 20, 1–14. Graham, S., & Hebert, M. (2011). Writing to read: A meta-analysis of the impact of writing and writing instruction on reading. Harvard Educational Review, 81(4), 710–744. Graham, S., MacArthur, C. A., & Fitzgerald, J. (2007). Best practices in writing instruction. New York: Guilford Press. Graham, S., MacArthur, C. A., Schwartz, S., & Page-Voth, V. (1992). Improving the compositions of students with learning disabilities using a strategy involving product and process goal setting. Exceptional Children, 58, 322–334. Graham, S., McKeown, D., Kiuhara, S., & Harris, K. R. (2012). A meta-analysis of writing instruction for students in the elementary grades. Journal of Educational Psychology, 104(4), 879–899. Graham, S., Olinghouse, N. G., & Harris, K.
R. (2009). Teaching composing to students with learning disabilities: Scientifically supported recommendations. In G. A. Troia (Ed.), Instruction and assessment for struggling writers: Evidence- based practices (pp. 165–186). New York: Guilford Press. Graham, S., & Perin, D. (2007). A meta- analysis of writing instruction for adolescent students. Journal of Educational Psychology, 99(3), 445. Graham, S., & Santangelo, T. (2014). Does spelling instruction make students better spellers, readers, and writers? A meta- analytic review. Reading and Writing, 27(9), 1703–1743. Graney, S. B., Martinez, R. S., Missall, K. N., & Aricak, O. T. (2010). Universal screening of reading in late elementary school: R-CBM versus CBM maze. Remedial and Special Education, 31(5), 368–377. Gravois, T. A., & Gickling, E. E. (2002). Best practices in curriculum-based assessment. In A. Thomas & J. Grimes (Eds.), Best practices in school psychology IV (Vols. 1–2, pp. 885–898). Washington, DC: National Association of School Psychologists. Gray, S. A., Chaban, P., Martinussen, R., Goldberg, R., Gotlieb, H., Kronitz, R., . . . Tannock, R. (2012). Effects of a computerized working memory training program on working memory, attention, and academics in adolescents with severe LD and comorbid ADHD: A randomized controlled trial. Journal of Child Psychology and Psychiatry, 53(12), 1277–1284. Green, S. (2016). Two for one: Using QAR to increase reading comprehension and improve test scores. The Reading Teacher, 70(1), 103–109. Greenwood, C. R. (1991). Longitudinal analysis of time, engagement, and achievement in at-risk versus non-risk students. Exceptional Children, 57, 521–535. Greenwood, C. R. (1996). The case for performance- based instructional models. School Psychology Quarterly, 11, 283–296. Greenwood, C. R., Carta, J. J., & Atwater, J. J. (1991). Ecobehavioral analysis in the classroom: Review and implications. Journal of Behavioral Education, 1, 59–77. Greenwood, C. R., Carta, J. J., Kamps, D., & Delquadri, J. (1993). Ecobehavioral Assessment Systems Software (EBASS): Obser-
vational instrumentation for school psychologists. Kansas City: Juniper Gardens Children’s Project, University of Kansas. Greenwood, C. R., Carta, J. J., Kamps, D., Terry, B., & Delquadri, J. (1994). Development and validation of standard classroom observation systems for school practitioners: Ecobehavioral Assessment Systems Software (EBASS). Exceptional Children, 61, 197– 210. Greenwood, C. R., Delquadri, J., & Carta, J. J. (1997). Together we can!: Classwide peer tutoring to improve basic academic skills. Longmont, CO: Sopris West. Greenwood, C. R., Delquadri, J. C., & Hall, R. V. (1984). Opportunity to respond and student academic performance. In U. L. Heward, T. E. Heron, D. S. Hill, & J. Trap- Porter (Eds.), Focus on behavior analysis in education (pp. 58–88). Columbus, OH: Merrill. Greenwood, C. R., Delquadri, J., & Hall, R. V. (1989). Longitudinal effects of classwide peer tutoring. Journal of Educational Psychology, 81, 371–383. Greenwood, C. R., Delquadri, J. C., Stanley, S. O., Terry, B., & Hall, R. V. (1985). Assessment of eco-behavioral interaction in school settings. Behavioral Assessment, 7, 331–347. Greenwood, C. R., Dinwiddie, G., Bailey, V., Carta, J. J., Kohler, F. W., Nelson, C., . . . Schulte, D. (1987). Field replication of classwide peer tutoring. Journal of Applied Behavior Analysis, 20, 151–160. Greenwood, C. R., Dinwiddie, G., Terry, B., Wade, L., Stanley, S. O., Thibadeau, S., & Delquadri, J. C. (1984). Teacher-versus peer- mediated instruction: An ecobehavioral analysis of achievement outcomes. Journal of Applied Behavior Analysis, 17, 521–538. Greenwood, C. R., Horton, B. T., & Utley, C. A. (2002). Academic engagement: Current perspectives on research and practice. School Psychology Review, 31, 326–349. Greenwood, C. R., Terry, B., Arreaga-Mayer, C., & Finney, R. (1992). The Classwide Peer Tutoring Program: Implementation factors moderating students’ achievement. Journal of Applied Behavior Analysis, 25, 101–116. Greenwood, C. R., Terry, B., Utley, C. A., Montagna, D., & Walker, D. (1993). Achievement, placement, and services: Middle school benefits of Classwide Peer Tutoring
used at the elementary school. School Psychology Review, 22, 497–516. Gresham, F. M. (1984). Behavioral interviews in school psychology: Issues in psychometric adequacy and research. School Psychology Review, 13, 17–25. Gresham, F. M. (2002). Responsiveness to intervention: An alternative approach to the identification of learning disabilities. In R. Bradley, L. Danielson, & D. P. Hallahan (Eds.), Identification of learning disabilities: Research to practice (pp. 467–519). Hillsdale, NJ: Erlbaum Gresham, F. M., & Elliott, S. N. (2008). Social Skills Improvement System—R ating Scales. Minneapolis, MN: Pearson Assessments. Griffin, C. C., & Jitendra, A. K. (2009). Word problem- solving instruction in inclusive third-grade mathematics classrooms. Journal of Educational Research, 102, 187–201. Grills- Taquechel, A. E., Fletcher, J. M., Vaughn, S. R., Denton, C. A., & Taylor, P. (2013). Anxiety and inattention as predictors of achievement in early elementary school children. Anxiety, Stress & Coping, 26(4), 391–410. Gumpel, T. P., & Frank, R. (1999). An expansion of the peer tutoring program: Cross-age peer tutoring of social skills among socially rejected boys. Journal of Applied Behavior Analysis, 32, 115–118. Gureasko-Moore, S., DuPaul, G. J., & White, G. P. (2006). The effects of self-management in general education classrooms on the organizational skills of adolescents with ADHD. Behavior Modification, 30(2), 159–183. Gureasko-Moore, S., DuPaul, G. J., & White, G. P. (2007). Self-management of classroom preparedness and homework: Effects on school functioning of adolescents with attention deficit hyperactivity disorder. School Psychology Review, 36(4), 647–664. Haager, D., Klingner, J., & Vaughn, S. (Eds.). (2007). Evidence- based reading practices for response to intervention. Baltimore, MD: Brookes. Haile- Griffey, L., Saudargas, R. A., Hulse- Trotter, K., & Zanolli, K. (1993). The classroom behavior of elementary school children during independent seatwork: Establishing local norms. Department of Psychology, University of Tennessee, Knoxville, TN. Unpublished manuscript.
Hale, J. B., & Fiorello, C. A. (2004). School neuropsychology: A practitioner’s handbook. New York: Guilford Press. Hale, J. B., Fiorello, C. A., Miller, J. A., Wenrich, K., Teodori, A., & Henzel, J. N. (2008). WISC-IV interpretation for specific learning disabilities identification and intervention: A cognitive hypothesis testing approach. In A. Prifitera, D. H. Saklofske, & L. G. Weiss (Eds.), WISC-IV clinical assessment and intervention (2nd ed., pp. 109–171). San Diego, CA: Elsevier Academic Press. Hall, R. V., Delquadri, J. C., Greenwood, C. R., & Thurston, L. (1982). The importance of opportunity to respond in children’s academic success. In E. B. Edgar, N. G. Haring, J. R. Jenkins, & C. G. Pious (Eds.), Mentally handicapped children: Education and training (pp. 107–140). Baltimore, MD: University Park Press. Hallahan, D. P., Lloyd, J. W., Kneedler, R. D., & Marshall, K. J. (1982). A comparison of the effects of self- versus teacher-assessment of on-task behavior. Behavior Therapy, 13, 715–723. Hamilton, C., & Shinn, M. R. (2003). Characteristics of word callers: An investigation of the accuracy of teachers’ judgments of reading comprehension and oral reading skills. School Psychology Review, 32, 228–240. Hammill, D. D. (2004). What we know about correlates of reading. Exceptional Children, 70(4), 453–469. Hammill, D., & Larsen, S. (2009). Test of Written Language—Fourth Edition. Austin, TX: PRO-ED. Hampton, D. D., Lembke, E. S., Lee, Y. S., Pappas, S., Chiong, C., & Ginsburg, H. P. (2012). Technical adequacy of early numeracy curriculum- based progress monitoring measures for kindergarten and first-grade students. Assessment for Effective Intervention, 37(2), 118–126. Harding, L. R., Howard, V. F., & McLaughlin, T. F. (1993). Using self-recording of on-task behavior by a preschool child with disabilities. Perceptual and Motor Skills, 77(3), 786. Harm, M. W., & Seidenberg, M. S. (1999). Phonology, reading acquisition, and dyslexia: Insights from connectionist models. Psychological Review, 106(3), 491–205. Harm, M. W., & Seidenberg, M. S. (2004).
Computing the meanings of words in reading: Cooperative division of labor between visual and phonological processes. Psychological Review, 111(3), 662–720. Harris, K. R., Danoff Friedlander, B., Saddler, B., Frizzelle, R., & Graham, S. (2005). Self- monitoring of attention versus self- monitoring of academic performance: Effects among students with ADHD in the general education classroom. Journal of Special Education, 39(3), 145–157. Harris, K. R., & Graham, S. (1996). Making the writing process work: Strategies for composition and self-regulation (2nd ed.). Cambridge, MA: Brookline. Harris, K. R., & Graham, S. (2016). Self- regulated strategy development in writing: Policy implications of an evidence- based practice. Policy Insights from the Behavioral and Brain Sciences, 3(1), 77–84. Harris, K. R., Graham, S., Aitken, A. A., Barkel, A., Houston, J., & Ray, A. (2017). Teaching spelling, writing, and reading for writing: Powerful evidence-based practices. Teaching Exceptional Children, 49(4), 262–272. Harris, K. R., Graham, S., & Mason, L. H. (2003). Self- regulated strategy development in the classroom: Part of a balanced approach to writing instruction for students with disabilities. Focus on Exceptional Children, 35(7), 1–16. Harris, K. R., Graham, S., Mason, L., & Friedlander, B. (2008). Powerful writing strategies for all students. Baltimore, MD: Brookes. Harris, K. R., Graham, S., Mason, L. H., & Saddler, B. (2002). Developing self-regulated writers. Theory into Practice, 41(2), 110– 115. Harris, K. R., Graham, S., Reid, R., McElroy, K., & Hamby, R. (1994). Self- monitoring of attention versus self- monitoring of performance: Replication and cross-task comparison. Learning Disability Quarterly, 17, 121–139. Hasbrouck, J. (2006). Drop everything and read—but how? For students who are not yet fluent, silent reading is not the best use of classroom time. American Educator, 30(2), 22–31. Hasbrouck, J., & Parker, R. (2001). Quick phonics screener. College Station, TX: Texas A&M Universtiy.
Hasbrouck, J., & Tindal, G. (2017). An update to compiled ORF norms. Eugene: University of Oregon, Behavioral Research & Teaching. Hattie, J. A. C. (2009). Visible learning: A synthesis of over 800 meta-analyses relating to achievement. New York: Routledge. Hayes, J. R. (2012). Modeling and remodeling writing. Written Communication, 29(3), 369–388. Haynes, M. C., & Jenkins, J. R. (1986). Reading instruction in special education resource rooms. American Educational Research Journal, 23, 161–190. Hebert, M., Bohaty, J. J., Nelson, J. R., & Brown, J. (2016). The effects of text structure instruction on expository reading comprehension: A meta-analysis. Journal of Educational Psychology, 108(5), 609. Heller, K. A., Holtzman, W. H., & Messick, S. (Eds.). (1982). Placing children in special education: A strategy for equity. Washington, DC: National Academy Press. Hiebert, E. H. (2005). The effects of text difficulty on second graders’ fluency development. Reading Psychology, 26(2), 183–209. Hilt-Panahon, A., Shapiro, E. S., Devlin, K., Gischlar, K. L., & Clemens, N. H. (2010, February). Data-based decision making in an RTI model: An analysis of team decisions. Paper presented at the Pacific Coast Research Conference, Coronado, CA. Hintze, J. M., & Christ, T. J. (2004). An examination of variability as a function of passage variance in CBM progress monitoring. School Psychology Review, 33(2), 204–217. Hintze, J. M., Owen, S. V., Shapiro, E. S., & Daly, E. J., III. (2000). Generalizability of oral reading fluency measures: Application of G theory to curriculum-based measurement. School Psychology Quarterly, 15, 52–68. Hintze, J. M., & Shapiro, E. S. (1997). Curriculum-based measurement and literature-based reading: Is curriculum-based measurement meeting the needs of changing reading curricula? Journal of School Psychology, 35, 351–375. Hintze, J. M., Volpe, R. J., & Shapiro, E. S. (2002). Best practices in the systematic direct observation of student behavior. In A. Thomas & J. Grimes (Eds.), Best practices in school psychology (Vol. 4, pp. 993–1006).
Washington, DC: National Association of School Psychologists. Hintze, J. M., Wells, C. S., Marcotte, A. M., & Solomon, B. G. (2018). Decision-making accuracy of CBM progress-monitoring data. Journal of Psychoeducational Assessment, 36(1), 74–81. Hirsch, S. E., Ennis, R. P., & Driver, M. K. (2018). Three student engagement strategies to help elementary teachers work smarter, not harder, in mathematics. Beyond Behavior, 27(1), 5–14. Hitchcock, C., & Westwell, M. S. (2017). A cluster- randomised, controlled trial of the impact of Cogmed working memory training on both academic performance and regulation of social, emotional and behavioural challenges. Journal of Child Psychology and Psychiatry, 58(2), 140–150. Hitchcock, J. H., Johnson, R. B., & Schoonenboom, J. (2018). Idiographic and nomothetic causal inference in special education research and practice: Mixed methods perspectives. Research in the Schools, 25(2), 56–67. Hively, W., & Reynolds, M. C. (Eds.). (1975). Domain-reference testing in special education. Minneapolis: University of Minnesota, Leadership Training Institute. Hogan, T. P., Catts, H. W., & Little, T. D. (2005). The relationship between phonological awareness and reading. Language, Speech, and Hearing Services in Schools, 36(4), 285–293. Hojnoski, R. L., Missall, K. N., & Wood, B. K. (2020). Measuring engagement in early education: Preliminary evidence for the behavioral observation of students in schools— early education. Assessment for Effective Intervention, 45(4), 243–254. Holman, J., & Baer, D. M. (1979). Facilitating generalization of on-task behavior through self- monitoring of academic tasks. Journal of Autism and Developmental Disabilities, 9, 429–446. Homan, S. P., Klesius, J. P., & Hite, C. (1993). Effects of repeated readings and nonrepetitive strategies on students’ fluency and comprehension. Journal of Educational Research, 87, 94–99. Hoover, W. A., & Gough, P. B. (1990). The simple view of reading. Reading and Writing, 2(2), 127–160.
Hosp, J. L., Hosp, M. K., Howell, K. W., & Allison, R. (2014). The ABCs of curriculum- based evaluation: A practical guide to effective decision making. New York: Guilford Press. Houghton, S., & Bain, A. (1993). Peer tutoring with ESL and below-average readers. Journal of Behavioral Education, 3, 125–142. Howell, K. W., Fox, S. L., & Morehead, M. K. (1993). Curriculum-based evaluation: Teaching and decision making (2nd ed.). Pacific Grove, CA: Brooks/Cole. Howell, K. W., Kurns, S., & Antil, L. (2002). Best practices in curriculum- based evaluation. In A. Thomas & J. Grimes (Eds.), Best practices in school psychology (Vol. 4, pp. 753–771). Washington, DC: National Association of School Psychologists. Howell, K. W., & Nolet, V. (1999). Curriculum- based evaluation: Teaching and decision making (3rd ed.). Belmont, CA: Wadsworth. Hresko, W., Schlieve, P., Herron, S., Swain, C., & Sherbenau, R. (2003). Comprehensive Mathematical Abilities Test. Austin, TX: PRO-ED. Hsueh-Chao, M. H., & Nation, P. (2000). Unknown vocabulary density and reading comprehension. Reading in a Foreign Language, 13, 403–413. Hudson, R. F., Pullen, P. C., Lane, H. B., & Torgesen, J. K. (2008). The complex nature of reading fluency: A multidimensional view. Reading and Writing Quarterly, 25(1), 4–32. Hudson, T. M., & McKenzie, R. G. (2016). Evaluating the use of RTI to identify SLD: A survey of state policy, procedures, data collection, and administrator perceptions. Contemporary School Psychology, 20(1), 31–45. Hughes, C. A., Copeland, S. R., Agran, M., Wehneyer, M. L., Rodi, M. S., & Presley, J. A. (2002). Using self- monitoring to improve performance in general education high school classes. Education and Training in Mental Retardation and Developmental Disabilities, 37, 262–272. Hughes, C. A., Korinek, L., & Gorman, J. (1991). Self- management for students with mental retardation in public school settings: A research review. Education and Training in Mental Retardation, 26, 271–291. Hughes, C. A., Morris, J. R., Therrien, W. J., & Benson, S. K. (2017). Explicit instruc-
tion: Historical and contemporary contexts. Learning Disabilities Research and Practice, 32(3), 140–148. Hughes, E. M., Powell, S. R., & Lee, J. Y. (2020). Development and psychometric report of a middle- school mathematics vocabulary measure. Assessment for Effective Intervention, 45(3), 226–234. Hughes, E. M., Powell, S. R., & Stevens, E. A. (2016). Supporting clear and concise mathematics language: Instead of that, say this. Teaching Exceptional Children, 49(1), 7–17. Hughes, J., & Kwok, O. M. (2007). Influence of student-teacher and parent-teacher relationships on lower achieving readers’ engagement and achievement in the primary grades. Journal of Educational Psychology, 99(1), 39–51. Hulme, C., Bowyer- Crane, C., Carroll, J. M., Duff, F. J., & Snowling, M. J. (2012). The causalrole of phoneme awareness and letter-sound knowledge in learning to read: Combining intervention studies with mediation analyses. Psychological Science, 23(6), 572–577. Hultquist, A. M., & Metzke, L. K. (1993). Potential effects of curriculum bias in individual norm-referenced reading and spelling achievement tests. Journal of Psychoeducational Assessment, 11, 337–344. Hutton, J. B., Dubes, R., & Muir, S. (1992). Assessment practices of school psychologists: Ten years later. School Psychology Review, 21, 271–284. Hyatt, K. J. (2007). Brain Gym®: Building stronger brains or wishful thinking? Remedial and Special Education, 28(2), 117–124. Hyatt, K. J., Stephenson, J., & Carter, M. (2009). A review of three controversial educational practices: Perceptual motor programs, sensory integration, and tinted lenses. Education and Treatment of Children, 32(2), 313–342. Idol, L. (1987). Group story mapping: A comprehension strategy for both skilled and unskilled readers. Journal of Learning Disabilities, 20, 196–205. Idol, L., Nevin, A., & Paolucci-W hitcomb, P. (1996). Models of curriculum-based assessment: A blueprint for learning (2nd ed.). Austin, TX: PRO-ED. Idol- Maestas, L. (1983). Special educator’s consultation handbook. Rockville, MD: Aspen Systems Corporation.
Idol-Maestas, L., & Croll, V. J. (1987). The effects of training in story mapping procedures on the reading comprehension of poor readers. Learning Disability Quarterly, 10, 214–229. Invernizzi, M., Meier, J., & Juel, C. (2015). Phonological Awareness Literacy Screening (PALS). Charlottesville: University of Virginia. Irannejad, S., & Savage, R. (2012). Is a cerebellar deficit the underlying cause of reading disabilities? Annals of Dyslexia, 62(1), 22–52. Irlen, H., & Lass, M. J. (1989). Improving reading problems due to symptoms of scotopic sensitivity syndrome using Irlen lenses and overlays. Education, 109(4), 413–417. Iwata, B., Dorsey, M., Slifer, K., Bauman, K., & Richman, G. (1982). Toward a functional analysis of self-injury. Analysis and Intervention in Developmental Disabilities, 2, 3–20. Iwata, B. A., Vollmer, T. R., & Zarcone, J. R. (1990). The experimental (functional) analysis of behavior disorders: Methodology, applications, and limitations. In A. C. Repp & N. N. Singh (Eds.), Perspectives on the use of nonaversive and aversive interventions for persons with developmental disabilities (pp. 301–330). Sycamore, IL: Sycamore. Jacob, R., & Parkinson, J. (2015). The potential for school-based interventions that target executive function to improve academic achievement: A review. Review of Educational Research, 85(4), 512–552. January, S. A. A., & Ardoin, S. P. (2015). Technical adequacy and acceptability of curriculum-based measurement and the measures of academic progress. Assessment for Effective Intervention, 41(1), 3–15. January, S. A. A., & Klingbeil, D. A. (2020). Universal screening in grades K-2: A systematic review and meta-analysis of early reading curriculum-based measures. Journal of School Psychology, 82, 103–122. January, S.-A. A., Lovelace, M. E., Foster, T. E., & Ardoin, S. P. (2017). A comparison of two flashcard interventions for teaching sight words to early readers. Journal of Behavioral Education, 26, 151–168. Jastak, S., & Wilkinson, G. S. (1984). Wide Range Achievement Test—Revised. Wilmington, DE: Jastak Associates.
Jenkins, J. R., Fuchs, L. S., van den Broek, P., Espin, C., & Deno, S. L. (2003). Sources of individual differences in reading comprehension and reading fluency. Journal of Educational Psychology, 95(4), 719–729. Jenkins, J. R., & Jewell, M. (1993). Examining the validity of two measures for formative teaching: Reading aloud and maze. Exceptional Children, 59(5), 421–432. Jenkins, J. R., & Pany, D. (1978). Standardized achievement tests: How useful for special education? Exceptional Children, 44, 448–453. Jenkins, J., & Terjeson, K. J. (2011). Monitoring reading growth: Goal setting, measurement frequency, and methods of evaluation. Learning Disabilities Research and Practice, 26(1), 28–35. Jenson, W. R., Rhode, G., & Reavis, H. K. (1994). The tough kid tool box. Longmont, CO: Sopris West. Jewell, J., & Malecki, C. K. (2005). The utility of CBM written language indices: An investigation of production-dependent, production- independent, and accurate- production scores. School Psychology Review, 34(1), 27–44. Jimerson, S. R., Burns, M. K., & VanDerHeyden, A. M. (Eds.). (2007). Handbook of response to intervention: The science and practice of assessment and intervention. New York: Springer. Jitendra, A. K. (2002). Teaching students math problem-solving through graphic representations. Teaching Exceptional Children, 34(4), 34–38. Jitendra, A. K., Alghamdi, A., Edmunds, R., McKevett, N. M., Mouanoutoua, J., & Roesslein, R. (2020). The effects of Tier 2 mathematics interventions for students with mathematics difficulties: A meta- analysis. Exceptional Children, article 0014402920969187. Jitendra, A. K., Dupuis, D. N., & Zaslofsky, A. F. (2014). Curriculum- based measurement and standards- based mathematics: Monitoring the arithmetic word problem-solving performance of third-grade students at risk for mathematics difficulties. Learning Disability Quarterly, 37(4), 241–251. Jitendra, A. K., & Gajria, M. (2011). Main idea and summarization instruction to improve reading comprehension. In R. E. O’Connor
& P. F. Vadasy (Eds.), Handbook of reading interventions (pp. 198–219). New York: Guilford Press. Jitendra, A. K., Griffin, C. C., Deatline- Buchman, A., & Sczesniak, E. (2007). Mathematical word problem solving in thirdgrade classrooms. Journal of Educational Research, 100, 283–302. Jitendra, A. K., Griffin, C. C., Haria, P., Adams, A., Kaduvettoor, A., & Leh, J. (2007). A comparison of single and multiple strategy instruction on third-grade students’ mathematical problem solving. Journal of Educational Psychology, 99, 115–127. Jitendra, A. K., Griffin, C. C., McGoey, K., Gardill, M. C., Bhat, P., & Riley, T. (1998). Effects of mathematical word problem solving by students at risk or with mild disabilities. Journal of Educational Research, 91, 345–355. Jitendra, A. K., & Hoff, K. (1996). The effects of schema- based instruction on the mathematics word-problem-solving performance of students with learning disabilities. Journal of Learning Disabilities, 29, 422–431. Jitendra, A. K., Hoff, K., & Beck, M. M. (1997, April). The role of schema-based instruction on solving multistep word problems. Paper presented at the annual convention of the Council for Exceptional Children, Salt Lake City, UT. Jitendra, A. K., Hoff, K., & Beck, M. M. (1999). Teaching middle school students with learning disabilities to solve word problems using a schema- based approach. Remedial and Special Education, 20, 50–64. Jitendra, A. K., Sczesniak, E., & Deatline- Buchman, A. (2005). An exploratory validation of curriculum-based mathematical word problem-solving tasks as indicators of mathematics proficiency for third graders. School Psychology Review, 34(3), 358–371. Jitendra, A. K., & Star, J. R. (2011). Meeting the needs of students with learning disabilities in inclusive mathematics classrooms: The role of schema-based instruction on mathematical problem-solving. Theory into Practice, 50(1), 12–19. Jitendra, A. K., Star, J. R., Starosta, K., Leh, J. M., Sood, S., Caskie, G., . . . Mack, T. R. (2009). Improving seventh grade students’ learning of ratio and proportion: The role
of schema-based instruction. Contemporary Educational Psychology, 34, 250–264. Johansson, M., Biglan, A., & Embry, D. (2020). The PAX Good Behavior Game: One model for evolving a more nurturing society. Clinical Child and Family Psychology Review, 23(4), 462–482. Johnson, D. W., & Johnson, R. T. (1985). Cooperative learning and adaptive education. In M. C. Wang & H. J. Walberg (Eds.), Adapting instruction to individual differences (pp. 105–134). Berkley, CA: McCutchan. Johnson, D. W., & Johnson, R. T. (1986). Mainstreaming and cooperative learning strategies. Exceptional Children, 52, 553– 561. Johnson, D. W., Maruyama, G., Johnson, R., Nelson, D., & Skon, L. (1981). The effects of cooperative, competitive, and individualistic goal structures on achievement: A meta- analysis. Psychological Bulletin, 89, 47–62. Johnson, E. S., Jenkins, J. R., Petscher, Y., & Catts, H. W. (2009). How can we improve the accuracy of screening instruments? Learning Disabilities Research and Practice, 24(4), 174–185. Johnson, L. J., & Idol- Maestes, L. (1986). Peer tutoring as a reinforcer for appropriate tutee behavior. Journal of Special Education Technology, 7(4), 14–21. Johnston, M. B., Whitman, T. L., & Johnson, M. (1980). Teaching addition and subtraction to mentally retarded children: A self- instructional program. Applied Research in Mental Retardation, 1, 141–160. Jordan, N. C., Glutting, J., Ramineni, C., & Watkins, M. W. (2010). Validating a number sense screening tool for use in kindergarten and first grade: Prediction of mathematics proficiency in third grade. School Psychology Review, 39(2), 181–195. Jordan, N. C., Hanich, L. B., & Kaplan, D. (2003). Arithmetic fact mastery in young children: A longitudinal investigation. Journal of Experimental Child Psychology, 85, 103–119. Joseph, L. M. (2000). Using word boxes as a large group phonics approach in a first grade classroom. Reading Horizons, 41(2), 117– 126. Joseph, L. M., Alber-Morgan, S., Cullen, J., & Rouse, C. (2016). The effects of self-
questioning on reading comprehension: A literature review. Reading and Writing Quarterly, 32(2), 152–173. Joseph, L. M., & Eveleigh, E. L. (2011). A review of the effects of self-monitoring on reading performance of students with disabilities. Journal of Special Education, 45(1), 43–53. Joseph, L. M., Konrad, M., Cates, G., Vajcner, T., Eveleigh, E., & Fishley, K. M. (2012). A meta-analytic review of the cover-copy-compare and variations of this self-management procedure. Psychology in the Schools, 49(2), 122–136. Joshi, R. M. (2005). Vocabulary: A critical component of comprehension. Reading and Writing Quarterly, 21(3), 209–219. Joshi, R. M. (2018). Simple view of reading (SVR) in different orthographies: Seeing the forest with the trees. In T. Lachmann & T. Weis (Eds.), Reading and dyslexia: Literacy studies 16 (pp. 71–80). New York: Springer. Joshi, R. M., & Aaron, P. G. (2000). The component model of reading: Simple view of reading made a little more complex. Reading Psychology, 21(2), 85–97. Juel, C., & Minden-Cupp, C. (2000). Learning to read words: Linguistic units and instructional strategies. Reading Research Quarterly, 35, 458–492. Jung, P. G., McMaster, K. L., Kunkel, A. K., Shin, J., & Stecker, P. M. (2018). Effects of data-based individualization for students with intensive learning needs: A meta-analysis. Learning Disabilities Research and Practice, 33(3), 144–155. Junod, R. E. V., DuPaul, G. J., Jitendra, A. K., Volpe, R. J., & Cleary, K. S. (2006). Classroom observations of students with and without ADHD: Differences across types of engagement. Journal of School Psychology, 44, 87–104. Kame’enui, E. J., Carnine, D. W., Dixon, R. C., Simmons, D. C., & Coyne, M. D. (2002). Effective teaching strategies that accommodate diverse learners (2nd ed.). Upper Saddle River, NJ: Prentice-Hall. Kame’enui, E. J., Simmons, D. C., Chard, D., & Dickson, S. (1997). Direct instruction reading. In S. Stahl & D. A. Hayes (Eds.), Instructional models in reading (pp. 59–64). Hillsdale, NJ: Erlbaum.
Kamhi, A. G. (2014). Improving clinical practices for children with language and learning disorders. Language, Speech, and Hearing Services in Schools, 45(2), 92–103. Kamhi, A. G., & Catts, H. W. (2017). Epilogue: Reading comprehension is not a single ability— I mplications for assessment and instruction. Language, Speech, and Hearing Services in Schools, 48(2), 104–107. Kamps, D. M., Dugan, E., Potucek, J., & Collins, A. (1999). Effects of cross-age peer tutoring networks among students with autism and general education students. Journal of Behavioral Education, 9, 97–115. Kamps, D. M., Leonard, B. R., Dugan, E. P., & Boland, B. (1991). The use of ecobehavioral assessment to identify naturally occurring effective procedures in classrooms serving students with autism and developmental disabilities. Journal of Behavioral Education, 4, 367–397. Kanfer, F. H. (1971). The maintenance of behavior by self-generated stimuli and reinforcement. In A. Jacobs & L. B. Sachs (Eds.), The psychology of private events (pp. 39–58). New York: Academic Press. Karlsen, B., Madden, R., & Gardner, E. F. (1975). Stanford Diagnostic Reading Test (Green level form B). New York: Harcourt, Brace Jovanovich. Karweit, N. L. (1983). Time on task: A research review (Report No. 332). Baltimore, MD: Johns Hopkins University, Center for Social Organization of Schools. Karweit, N. L., & Slavin, R. E. (1981). Measurement and modeling choices in studies of time and learning. American Educational Research Journal, 18, 157–171. Kassai, R., Futo, J., Demetrovics, Z., & Takacs, Z. K. (2019). A meta-analysis of the experimental evidence on the near- and far-transfer effects among children’s executive function skills. Psychological Bulletin, 145(2), 165– 188. Katusic, S. K., Colligan, R. C., Weaver, A. L., & Barbaresi, W. J. (2009). The forgotten learning disability: Epidemiology of written- language disorder in a population- based birth cohort (1976–1982), Rochester, Minnesota. Pediatrics, 123(5), 1306–1313. Kavale, K. A., & Forness, S. R. (1987). Substance over style: Assessing the efficacy of
modality testing and teaching. Exceptional Children, 54, 228–239. Kavale, K., & Mattson, P. D. (1983). “One jumped off the balance beam” meta-analysis of perceptual- motor training. Journal of Learning Disabilities, 16(3), 165–173. Kavale, K. A., & Spaulding, L. S. (2008). Is response to intervention good policy for specific learning disabilities? Learning Disabilities Research and Practice, 23, 169–179. Kazdin, A. E. (1982). The token economy: A decade later. Journal of Applied Behavior Analysis, 15(3), 431–445. Kazdin, A. E. (1985). Selection of target behaviors: The relationship of the treatment focus to clinical dysfunction. Behavioral Assessment, 7, 33–48. Kearns, D. M., & Al Ghanem, R. (2019). The role of semantic information in children’s word reading: Does meaning affect readers’ ability to say polysyllabic words aloud? Journal of Educational Psychology, 111(6), 933. Kearns, D. M., & Fuchs, D. (2013). Does cognitively focused instruction improve the academic performance of low- achieving students? Exceptional Children, 79(3), 263– 290. Kearns, D. M., Rogers, H. J., Koriakin, T., & Al Ghanem, R. (2016). Semantic and phonological ability to adjust recoding: A unique correlate of word reading skill? Scientific Studies of Reading, 20(6), 455–470. Kearns, D. M., & Whaley, V. M. (2019). Helping students with dyslexia read long words: Using syllables and morphemes. Teaching Exceptional Children, 51(3), 212–225. Keenan, J. M., Betjemann, R. S., & Olson, R. K. (2008). Reading comprehension tests vary in the skills they assess: Differential dependence on decoding and oral comprehension. Scientific Studies of Reading, 12(3), 281– 300. Keenan, J. M., & Meenan, C. E. (2014). Test differences in diagnosing reading comprehension deficits. Journal of Learning Disabilities, 47(2), 125–135. Kellam, S. G., Brown, C. H., Poduska, J. M., Ialongoc, N. S., Wang, W., Toyinbo, P., . . . Wilcox, C. H. (2008). Effects of a universal classroom behavior management program in first and second grades on young adult behavioral, psychiatric, and social outcomes.
Drug and Alcohol Dependence, 95, SS5– S28. Keller- Margulis, M. A., Mercer, S. H., & Matta, M. (2021). Validity of automated text evaluation tools for written-expression curriculum-based measurement: a comparison study. Reading and Writing, 34(10), 2461–2480. Keller-Margulis, M. A., Mercer, S. H., Payan, A., & McGee, W. (2015). Measuring annual growth using written expression curriculum- based measurement: An examination of seasonal and gender differences. School Psychology Quarterly, 30(2), 276. Keller-Margulis, M. A., Ochs, S., Reid, E. K., Faith, E. L., & Schanding Jr, G. T. (2019). Validity and diagnostic accuracy of early written expression screeners in kindergarten. Journal of Psychoeducational Assessment, 37(5), 539–552. Keller- Margulis, M., Payan, A., Jaspers, K. E., & Brewton, C. (2016). Validity and diagnostic accuracy of written expression curriculum-based measurement for students with diverse language backgrounds. Reading and Writing Quarterly, 32(2), 174–198. Kelley, B., Hosp, J. L., & Howell, K. W. (2008). Curriculum-based evaluation and math: An overview. Assessment for Effective Instruction, 33, 250–256. Kendeou, P., & van den Broek, P. (2007). The effects of prior knowledge and text structure on comprehension processes during reading of scientific texts. Memory and Cognition, 35(7), 1567–1577. Kendeou, P., van den Broek, P., Helder, A., & Karlsson, J. (2014). A cognitive view of reading comprehension: Implications for reading difficulties. Learning Disabilities Research and Practice, 29(1), 10–16. Kent, S. C., Wanzek, J., & Al Otaiba, S. (2012). Print reading in general education kindergarten classrooms: What does it look like for students at-risk for reading difficulties? Learning Disabilities Research and Practice, 27(2), 56–65. Kern, L., & Clemens, N. H. (2007). Antecedent strategies to promote appropriate classroom behavior. Psychology in the Schools, 44(1), 65–75. Ketterlin-G eller, L. R., & Chard, D. J. (2011). Algebra readiness for students with learning
difficulties in grades 4–8: Support through the study of number. Australian Journal of Learning Difficulties, 16(1), 65–78. Ketterlin-Geller, L. R., Gifford, D. B., & Perry, L. (2015). Measuring middle school students’ algebra readiness: Examining validity evidence for three experimental measures. Assessment for Effective Intervention, 41(1), 28–40. Ketterlin-Geller, L. R., Shivraj, P., Basaraba, D., & Schielack, J. (2019). Universal screening for algebra readiness in middle school: Why, what, and does it work? Investigations in Mathematics Learning, 11(2), 120–133. Ketterlin-Geller, L. R., & Yovanoff, P. (2009). Diagnostic assessments in mathematics to support instructional decision making. Practical Assessment, Research, and Evaluation, 14(1), 16. Kettler, R. J., Glover, T. A., Albers, C. A., & Feeney-Kettler, K. A. (Eds.). (2014). School psychology book series. Universal screening in educational settings: Evidence-based decision making for schools. Washington, DC: American Psychological Association. Kibby, M. Y., Fancher, J. B., Markanen, R., & Hynd, G. W. (2008). A quantitative magnetic resonance imaging analysis of the cerebellar deficit hypothesis of dyslexia. Journal of Child Neurology, 23, 368–380. Kilgus, S. P., & Eklund, K. R. (2016, March). Consideration of base rates within universal screening for behavioral and emotional risk: A novel procedural framework. School Psychology Forum, 10(1), 120–130. Kilgus, S. P., Methe, S. A., Maggin, D. M., & Tomasula, J. L. (2014). Curriculum-based measurement of oral reading (R-CBM): A diagnostic test accuracy meta-analysis of evidence supporting use in universal screening. Journal of School Psychology, 52(4), 377–405. Kilgus, S. P., Sims, W. A., von der Embse, N. P., & Riley-Tillman, T. C. (2015). Confirmation of models for interpretation and use of the Social and Academic Behavior Risk Screener (SABRS). School Psychology Quarterly, 30(3), 335. Kilpatrick, D. A. (2015). Essentials of assessing, preventing, and overcoming reading difficulties. Hoboken, NJ: Wiley. Kilpatrick, K. D., Maras, M. A., Brann, K. L.,
& Kilgus, S. P. (2018). Universal screening for social, emotional, and behavioral risk in students: DESSA-mini risk stability over time and its implications for screening procedures. School Psychology Review, 47(3), 244–257. Kim, M. K., Bryant, D. P., Bryant, B. R., & Park, Y. (2017). A synthesis of interventions for improving oral reading fluency of elementary students with learning disabilities. Preventing School Failure: Alternative Education for Children and Youth, 61(2), 116–125. Kim, Y. S., Petscher, Y., Foorman, B. R., & Zhou, C. (2010). The contributions of phonological awareness and letter- name knowledge to letter- sound acquisition— a cross- classified multilevel model approach. Journal of Educational Psychology, 102(2), 313–329. Kim, Y. S., Petscher, Y., Schatschneider, C., & Foorman, B. (2010). Does growth rate in oral reading fluency matter in predicting reading comprehension achievement? Journal of Educational Psychology, 102(3), 652. Kim, Y. S. G., & Schatschneider, C. (2017). Expanding the developmental models of writing: A direct and indirect effects model of developmental writing (DIEW). Journal of Educational Psychology, 109(1), 35–53. Kinnunen, R., & Vauras, M. (1995). Comprehension monitoring and the level of comprehension in high-and low-achieving primary school children’s reading. Learning and Instruction, 5(2), 143–165. Kintsch, W. (1988). The role of knowledge in discourse comprehension: A construction- integration model. Psychological Review, 95(2), 163. Kirby, J. R., Parrila, R. K., & Pfeiffer, S. L. (2003). Naming speed and phonological awareness as predictors of reading development. Journal of Educational Psychology, 95(3), 453. Kirby, J. R., & Savage, R. S. (2008). Can the simple view deal with the complexities of reading? Literacy, 42(2), 75–82. Kittelman, A., Goodman, S., & Rowe, D. A. (2021). Effective teaming to implement evidence-based practices. Teaching Exceptional Children, 53(4), 264–267. Klingbeil, D. A., Moeyaert, M., Archer, C.
T., Chimboza, T. M., & Zwolski Jr., S. A. (2017). Efficacy of peer-mediated incremental rehearsal for English language learners. School Psychology Review, 46(1), 122–140. Klingbeil, D. A., Nelson, P. M., Van Norman, E. R., & Birr, C. (2017). Diagnostic accuracy of multivariate universal screening procedures for reading in upper elementary grades. Remedial and Special Education, 38(5), 308–320. Klingbeil, D. A., Van Norman, E. R., Nelson, P. M., & Birr, C. (2018). Evaluating screening procedures across changes to the statewide achievement test. Assessment for Effective Intervention, 44(1), 17–31. Klingner, J. K., & Vaughn, S. (1998). Using collaborative strategic reading. Teaching Exceptional Children, 30(6), 32–37. Knuth, E. J., Stephens, A. C., McNeil, N. M., & Alibali, M. W. (2006). Does understanding the equal sign matter? Evidence from solving equations. Journal for Research in Mathematics Education, 37(4), 297–312. Koegel, L. K., Harrower, J. K., & Koegel, R. L. (1999). Support for children with developmental disabilities in full inclusion classrooms through self-management. Journal of Positive Behavior Interventions, 1, 26–34. Kosiewicz, M. M., Hallahan, D. P., Lloyd, J., & Graves, A. W. (1982). Effects of self- instruction and self- correction procedures on handwriting performance. Learning Disability Quarterly, 5, 71–78. Kranzler, J. H., Floyd, R. G., Benson, N., Zaboski, B., & Thibodaux, L. (2016a). Classification agreement analysis of cross-battery assessment in the identification of specific learning disorders in children and youth. International Journal of School and Educational Psychology, 4(3), 124–136. Kranzler, J. H., Floyd, R. G., Benson, N., Zaboski, B., & Thibodaux, L. (2016b). Cross- battery assessment pattern of strengths and weaknesses approach to the identification of specific learning disorders: Evidence- based practice or pseudoscience? International Journal of School and Educational Psychology, 4(3), 146–157. Kroesbergen, E. H., & Van Luit, J. E. (2003). Mathematics interventions for children with special educational needs: A meta-analysis. Remedial and Special Education, 24(2), 97–114.
Kroese, J. M., Hynd, G. W., Knight, D. F., Hiemenz, J. R., & Hall, J. (2000). Clinical appraisal of spelling ability and its relationship to phonemic awareness (blending, segmenting, elision, and reversal), phonological memory, and reading in reading disabled, ADHD, and normal children. Reading and Writing, 13(1), 105–131. Kuhn, M. R., Schwanenflugel, P. J., & Meisinger, E. B. (2010). Aligning theory and assessment of reading fluency: Automaticity, prosody, and definitions of fluency. Reading Research Quarterly, 45, 230–251, Kunsch, C. A., Jitendra, A. K., & Sood, S. (2007). The effects of peer-mediated instruction in mathematics for students with learning problems: A research synthesis. Learning Disabilities Research and Practice, 22, 1–12. Kunzelmann, H. D. (Ed.). (1970). Precision teaching. Seattle, WA: Special Child Publications. Kupzyk, S., Daly, E. J., III, & Andersen, M. N. (2011). A comparison of two flash-card methods for improving sight-word reading. Journal of Applied Behavior Analysis, 44, 781–792. Kuster, S. M., van Weerdenburg, M., Gompel, M., & Bosman, A. M. (2018). Dyslexie font does not benefit reading in children with or without dyslexia. Annals of Dyslexia, 68(1), 25–42. Lam, A., Cole, C. L., Shapiro, E. S., & Bambara, L. M. (1994). Relative effects of self- monitoring on-task behavior, academic accuracy, and disruptive behavior in students with behavior disorders. School Psychology Review, 23, 44–58. Lane, K. L., Givner, C. C., & Pierson, M. R. (2004). Teacher expectations of student behavior: Social skills necessary for success in elementary school classrooms. Journal of Special Education, 38(2), 104–110. Lane, K. L., Oakes, W. P., Cantwell, E. D., Common, E. A., Royer, D. J., Leko, M. M., . . . Allen, G. E. (2019). Predictive validity of Student Risk Screening Scale—I nternalizing and Externalizing (SRSS-IE) scores in elementary schools. Journal of Emotional and Behavioral Disorders, 27(4), 221–234. Lane, K. L., Oakes, W. P., Cantwell, E. D., Schatschneider, C., Menzies, H., Crittenden, M., & Messenger, M. (2016). Student Risk Screening Scale for internalizing and exter-
nalizing behaviors: Preliminary cut scores to support data-informed decision making in middle and high schools. Behavioral Disorders, 42(1), 271–284. Lane, K. L., Oakes, W. P., Harris, P. J., Menzies, H. M., Cox, M., & Lambert, W. (2012). Initial evidence for the reliability and validity of the Student Risk Screening Scale for internalizing and externalizing behaviors at the elementary level. Behavioral Disorders, 37(2), 99–122. Leahy, L. R. F., Miller, F. G., & Schardt, A. A. (2019). Effects of teacher-directed opportunities to respond on student behavioral outcomes: A quantitative synthesis of single-case design research. Journal of Behavioral Education, 28(1), 78–106. Lee, J., Bryant, D. P., Ok, M. W., & Shin, M. (2020). A systematic review of interventions for algebraic concepts and skills of secondary students with learning disabilities. Learning Disabilities Research and Practice, 35(2), 89–99. Lee, J., & Yoon, S. Y. (2017). The effects of repeated reading on reading fluency for students with reading disabilities: A meta-analysis. Journal of Learning Disabilities, 50(2), 213–224. Leh, J. M., Jitendra, A. K., Caskie, G. I., & Griffin, C. C. (2007). An evaluation of curriculum-based measurement of mathematics word problem-solving measures for monitoring third-grade students’ mathematics competence. Assessment for Effective Intervention, 32(2), 90–99. Lein, A. E., Jitendra, A. K., & Harwell, M. R. (2020). Effectiveness of mathematical word problem solving interventions for students with learning disabilities and/or mathematics difficulties: A meta-analysis. Journal of Educational Psychology, 112(7), 1388–1408. Leinhardt, G., Zigmond, N., & Cooley, W. W. (1981). Reading instruction and its effects. American Educational Research Journal, 18, 343–361. Lembke, E. S., Allen, A., Cohen, D., Hubbuch, C., Landon, D., Bess, J., & Bruns, H. (2017). Progress monitoring in social studies using vocabulary matching curriculum-based measurement. Learning Disabilities Research and Practice, 32(2), 112–120. Lembke, E., Deno, S. L., & Hall, K. (2003). Identifying an indicator of growth in early
writing proficiency for elementary school students. Assessment for Effective Intervention, 28(3–4), 23–35. Lembke, E., & Foegen, A. (2009). Identifying early numeracy indicators for kindergarten and first-grade students. Learning Disabilities Research and Practice, 24, 12–20. Lembke, E. S., McMaster, K. L., Smith, R. A., Allen, A., Brandes, D., & Wagner, K. (2018). Professional development for data-based instruction in early writing: Tools, learning, and collaborative support. Teacher Education and Special Education, 41(2), 106–120. Lemons, C. J., Kearns, D. M., & Davidson, K. A. (2014). Data-based individualization in reading: Intensifying interventions for students with significant reading disabilities. Teaching Exceptional Children, 46(4), 20–29. Lentz, F. E., Jr., & Shapiro, E. S. (1985). Behavioral school psychology: A conceptual model for the delivery of psychological services. In T. R. Kratochwill (Ed.), Advances in school psychology (Vol. 4, pp. 191–232). Hillsdale, NJ: Erlbaum. Lentz, F. E., Jr., & Shapiro, E. S. (1986). Functional assessment of the academic environment. School Psychology Review, 15, 346– 357. Lentz, F. E., Jr., & Wehmann, B. A. (1995). Interviewing. In A. Thomas & J. Grimes (Eds.), Best practices in school psychology (Vol. 3, pp. 637–650). Washington, DC: National Association of School Psychologists. Lenz, B. K., Schumaker, J. B., Deshler, D. D., & Beals, V. L. (1984). Learning strategies curriculum: The word identification strategy. Lawrence: University of Kansas. Leong, H. M., Carter, M., & Stephenson, J. R. (2015). Meta- analysis of research on sensory integration therapy for individuals with developmental and learning disabilities. Journal of Developmental and Physical Disabilities, 27(2), 183–206. Levendowski, L. S., & Cartledge, G. (2000). Self-monitoring for elementary school children with serious emotional disturbances: Classroom applications for increased academic responding. Behavioral Disorders, 25, 211–224. Levinson, H. N. (1988). The cerebellar- vestibular basis of learning disabilities in
children, adolescents and adults: Hypothesis and study. Perceptual and Motor Skills, 67(3), 983–1006. Levy, B. A., Abello, B., & Lysynchuk, L. (1997). Transfer from word training to reading in context: Gains in reading fluency and comprehension. Learning Disability Quarterly, 20(3), 173–188. Lewis, M. F., Truscott, S. D., & Volker, M. A. (2008). Demographics and professional practices of school psychologists: A comparison of NASP members and non-NASP school psychologists by telephone survey. Psychology in the Schools, 45, 467–482. Liew, J., Chen, Q., & Hughes, J. N. (2010). Child effortful control, teacher–student relationships, and achievement in academically at-risk children: Additive and interactive effects. Early Childhood Research Quarterly, 25(1), 51–64. Limpo, T., Alves, R. A., & Connelly, V. (2017). Examining the transcription- writing link: Effects of handwriting fluency and spelling accuracy on writing performance via planning and translating in middle grades. Learning and Individual Differences, 53, 26–36. Lin, C. H. J., Knowlton, B. J., Chiang, M. C., Iacoboni, M., Udompholkul, P., & Wu, A. D. (2011). Brain–behavior correlates of optimizing learning through interleaved practice. Neuroimage, 56(3), 1758–1772. Lin, X. (2021). Investigating the unique predictors of word- problem solving using meta- analytic structural equation modeling. Educational Psychology Review, 33(3), 1097–1124. Lin, X., Peng, P., & Zeng, J. (2021). Understanding the relation between mathematics vocabulary and mathematics performance: A meta-analysis. Elementary School Journal, 121(3), 504–540. Little, S. G., Akin- Little, A., & O’Neill, K. (2015). Group contingency interventions with children—1980–2010: A meta-analysis. Behavior Modification, 39(2), 322–341. Liu, M., Bryant, D. P., Kiru, E., & Nozari, M. (2019). Geometry interventions for students with learning disabilities: A research synthesis. Learning Disability Quarterly, article 0731948719892021. Lloyd, J. W., Hallahan, D. P., Kosiewicz, M. M., & Kneedler, R. D. (1982). Reactive effects of self-assessment and self-recording
on attention to task and academic productivity. Learning Disability Quarterly, 5, 216–227. Lochman, J. E., & Curry, J. F. (1986). Effects of social problem-solving training and self- instruction training with aggressive boys. Journal of Clinical Child Psychology, 15, 159–164. Locuniak, M. N., & Jordan, N. C. (2008). Using kindergarten number sense to predict calculation fluency in second grade. Journal of Learning Disabilities, 41(5), 451–459. Lonigan, C. J. (2007). Vocabulary development and the development of phonological awareness skills in preschool children. In R. K. Wagner, A. E. Muse, & K. R. Tannenbaum (Eds.), Vocabulary acquisition: Implications for reading comprehension (pp. 15–31). New York: Guilford Press. Lonigan, C. J., Anthony, J. L., Phillips, B. M., Purpura, D. J., Wilson, S. B., & McQueen, J. D. (2009). The nature of preschool phonological processing abilities and their relations to vocabulary, general cognitive abilities, and print knowledge. Journal of Educational Psychology, 101(2), 345–365. Lonigan, C. J., Burgess, S. R., & Schatschneider, C. (2018). Examining the simple view of reading with elementary school children: Still simple after all these years. Remedial and Special Education, 39(5), 260–273. Lovett, M. W., Frijters, J. C., Wolf, M., Steinbach, K. A., Sevcik, R. A., & Morris, R. D. (2017). Early intervention for children at risk for reading disabilities: The impact of grade at intervention and individual differences on intervention outcomes. Journal of Educational Psychology, 109(7), 889. Lyon, A. R., Connors, E., Jensen- Doss, A., Landes, S. J., Lewis, C. C., McLeod, B. D., . . . Weiner, B. J. (2017). Intentional research design in implementation science: Implications for the use of nomothetic and idiographic assessment. Translational Behavioral Medicine, 7(3), 567–580. Lysynchuk, L. M., Pressley, M., & Vye, N. J. (1990). Reciprocal instruction improves standardized reading comprehension performance in poor grade-school comprehenders. Elementary School Journal, 90, 469–484. Maag, J. W. (1990). Social skills training in schools. Special Services in the Schools, 6, 1–19.
Maag, J. W., Reid, R., & DiGangi, S. A. (1993). Differential effects of self-monitoring attention, accuracy, and productivity. Journal of Applied Behavior Analysis, 26, 329–344. Mabbott, D. J., & Bisanz, J. (2008). Computational skills, working memory, and conceptual knowledge in older children with mathematics learning disabilities. Journal of Learning Disabilities, 41, 15–28. Mace, F. C., & West, B. J. (1986). Unresolved theoretical issues in self-management: Implications for research and practice. Professional School Psychology, 1, 149–163. MacGregor, M., & Price, E. (1999). An exploration of aspects of language proficiency and algebra learning. Journal for Research in Mathematics Education, 30(4), 449–467. MacQuarrie, L. L., Tucker, J. A., Burns, M. L., & Hartman, B. (2002). Comparison of retention rates using traditional, drill sandwich, and incremental rehearsal flash card methods. School Psychology Review, 31, 584–595. MacSuga-Gage, A. S., & Simonsen, B. (2015). Examining the effects of teacher-directed opportunities to respond on student outcomes: A systematic review of the literature. Education and Treatment of Children, 38(2), 211–239. Madrid, D., Terry, B., Greenwood, C., Whaley, M., & Webber, N. (1998). Active vs. passive peer tutoring: Teaching spelling to at-risk students. Journal of Research and Development in Education, 31, 236–244. Maggin, D. M., Chafouleas, S. M., Goddard, K. M., & Johnson, A. H. (2011). A systematic evaluation of token economies as a classroom management tool for students with challenging behavior. Journal of School Psychology, 49(5), 529–554. Maggin, D. M., Johnson, A. H., Chafouleas, S. M., Ruberto, L. M., & Berggren, M. (2012). A systematic evidence review of school-based group contingency interventions for students with challenging behavior. Journal of School Psychology, 50(5), 625–654. Maggin, D. M., Pustejovsky, J. E., & Johnson, A. H. (2017). A meta-analysis of school-based group contingency interventions for students with challenging behavior: An update. Remedial and Special Education, 38(6), 353–370. Maheady, L., Harper, G., Mallette, B., & Win-
stanley, N. (1991). Training and implementation requirements associated with the use of a classwide peer tutoring system. Education and Treatment of Children, 14, 177–198. Mahn, C., & Greenwood, G. E. (1990). Cognitive behavior modification: Use of self- instruction strategies by first graders on academic tasks. Journal of Educational Research, 83, 158–161. Maki, K. E., & Adams, S. R. (2020). Specific learning disabilities identification: Do the identification methods and data matter? Learning Disability Quarterly, 43(2), 63–74. Malone, A. S., Fuchs, L. S., Sterba, S. K., Fuchs, D., & Foreman-Murray, L. (2019). Does an integrated focus on fractions and decimals improve at-risk students’ rational number magnitude performance? Contemporary Educational Psychology, 59, 101782. Maloney, E. A., Ramirez, G., Gunderson, E. A., Levine, S. C., & Beilock, S. L. (2015). Intergenerational effects of parents’ math anxiety on children’s math achievement and anxiety. Psychological Science, 26(9), 1480–1488. Manning, B. H. (1990). Cognitive self- instruction for an off-task fourth grader during independent academic tasks: A case study. Contemporary Educational Psychology, 15, 36–46. Mano, Q. R., & Kloos, H. (2018). Sensitivity to the regularity of letter patterns within print among preschoolers: Implications for emerging literacy. Journal of Research in Childhood Education, 32(4), 379–391. Marcotte, A. M., Clemens, N. H., Parker, C., & Whitcomb, S. A. (2016). Examining the classification accuracy of a vocabulary screening measure with preschool children. Assessment for Effective Intervention, 41(4), 230–242. Marcotte, A. M., & Hintze, J. M. (2009). Incremental and predictive utility of formative assessment methods of reading comprehension. Journal of School Psychology, 47, 315–335. Marcotte, A. M., Parker, C., Furey, W., & Hands, J. L. (2014). An examination of the validity of the Dynamic Indicators of Vocabulary Skills (DIVS). Journal of Psychoeducational Assessment, 32(2), 133–145. Markwardt, F. C. (1997). Peabody Individual Achievement Test— Revised/Normative
Update. Circle Pines, MN: American Guidance Service. Marston, D., Lau, M., & Muyskens, P. (2007). Implementation of the problem- solving model in the Minneapolis public schools. In S. R. Jimerson, M. K. Burns, & A. M. VanDerHeyden (Eds.), Handbook of response to intervention: The science and practice of assessment and intervention (pp. 279–287). New York: Springer. Marston, D., & Magnusson, D. (1988). Curriculum- based measurement: District level implementation. In J. L. Graden, J. E. Zins, & M. J. Curtis (Eds.), Alternative educational delivery systems: Enhancing instructional options for all students (pp. 137–177). Washington, DC: National Association of School Psychologists. Marston, D., Muyskens, P., Lau, M. Y., & Canter, A. (2003). Problem- solving model for decision making with high- incidence disabilities: The Minneapolis experience. Learning Disabilities Research and Practice, 18, 187–200. Marston, D., & Tindal, G. (1995). Best practices in performance monitoring. In A. Thomas & J. Grimes (Eds.), Best practices in school psychology (Vol. 3, pp. 597–607). Washington, DC: National Association of School Psychologists. Martens, B. K., Steele, E. S., Massie, D. R., & Diskin, M. T. (1995). Curriculum bias in standardized tests of reading decoding. Journal of School Psychology, 33, 287–296. Martin, A. J., & Elliot, A. J. (2016). The role of personal best (PB) goal setting in students’ academic achievement gains. Learning and Individual Differences, 45, 222–227. Martin-Chang, S. L., & Levy, B. A. (2005). Fluency transfer: Differential gains in reading speed and accuracy following isolated word and context training. Reading and Writing, 18(4), 343–376. Martinez, R. S., Missall, K. N., Graney, S. B., Aricak, O. T., & Clarke, B. (2009). Technical adequacy of early numeracy curriculum-based measurement in kindergarten. Assessment for Effective Intervention, 34(2), 116–125. Marulis, L. M., & Neuman, S. B. (2010). The effects of vocabulary intervention on young children’s word learning: A meta- analysis. Review of Educational Research, 80(3), 300–335.
Maslen, J. R., & Maslen, B. L. (2006). Bob books. New York: Scholastic. Mason, L. H., Harris, K. R., & Graham, S. (2002). Every child has a story to tell: Self- regulated strategy development for story writing. Education and Treatment of Children, 25, 496–506. Mathes, P. G., Fuchs, D., & Fuchs, L. S. (1997). Cooperative story mapping. Remedial and Special Education, 18, 20–27. Mayer, R. (2002). Learning and instruction. Upper Saddle River, NJ: Prentice-Hall. McAuley, S. M., & McLaughlin, T. F. (1992). Comparison of Add-a-Word and Compu Spell programs with low-achieving students. Journal of Educational Research, 85, 362–369. McBride-Chang, C. (1999). The ABCs of the ABCs: The development of letter-name and letter-sound knowledge. Merrill–Palmer Quarterly, 45(2), 285–308. McCandliss, B., Beck, I. L., Sandak, R., & Perfetti, C. (2003). Focusing attention on decoding for children with poor reading skills: Design and preliminary tests of the word building intervention. Scientific Studies of Reading, 7, 75–104. McCarthy, P. A. (2008). Using sound boxes systematically to develop phonemic awareness. The Reading Teacher, 62(4), 346–349. McClelland, M. M., Acock, A. C., & Morrison, F. J. (2006). The impact of kindergarten learning-related skills on academic trajectories at the end of elementary school. Early Childhood Research Quarterly, 21(4), 471– 490. McConaughy, S. H., & Achenbach, T. M. (1989). Empirically- based assessment of serious emotional disturbances. Journal of School Psychology, 27, 91–117. McConaughy, S. H., & Achenbach, T. M. (1996). Contributions of a child interview to multimethod assessment of children with EBD and LD. School Psychology Review, 25, 24–39. McCurdy, B. L., Lannie, A. L., & Barnabas, E. (2009). Reducing disruptive behavior in an urban school cafeteria: An extension of the Good Behavior Game. Journal of School Psychology, 47(1), 39–54. McGill, R. J., & Busse, R. T. (2017). When theory trumps science: A critique of the PSW model for SLD identification. Contemporary School Psychology, 21(1), 10–18.
McGill, R. J., Conoyer, S. J., & Fefer, S. (2018, December). Elaborating on the linkage between cognitive and academic weaknesses: Using diagnostic efficiency statistics to inform PSW assessment. School Psychology Forum, 12(4), 118–132. McIntosh, K., & Goodman, S. (2016). Integrated multi-tiered systems of support: Blending RTI and PBIS. New York: Guilford Press. McKenna, J. W., & Ciullo, S. (2016). Typical reading instructional practices provided to students with emotional and behavioral disorders in a residential and day treatment setting: A mixed methods study. Residential Treatment for Children and Youth, 33(3–4), 225–246. McKenna, M. C., Walpole, S., & Jang, B. G. (2017). Validation of the informal decoding inventory. Assessment for Effective Intervention, 42(2), 110–118. McKenzie, M. L., & Budd, K. S. (1981). A peer tutoring package to increase mathematics performance: Examination of generalized changes in classroom behavior. Education and Treatment of Children, 4, 1–15. McKevett, N. M., & Codding, R. S. (2021). Brief experimental analysis of math interventions: A synthesis of evidence. Assessment for Effective Intervention, 46, 217–427. McLaughlin, K. A., & King, K. (2015). Developmental trajectories of anxiety and depression in early adolescence. Journal of Abnormal Child Psychology, 43(2), 311–323. McLaughlin, T. F., Burgess, N., & Sackville-West, L. (1982). Effects of self-recording and self-recording + matching on academic performance. Child Behavior Therapy, 3(2/3), 17–27. McLaughlin, T. F., Reiter, S. M., Mabee, W. S., & Byram, B. J. (1991). An analysis of the Add-a-Word spelling program with mildly handicapped middle school students. Journal of Behavioral Education, 1, 413–426. McMaster, K. L., & Campbell, H. (2008). New and existing curriculum-based writing measures: Technical features within and across grades. School Psychology Review, 37, 550–556. McMaster, K. L., Du, X., & Pétursdóttir, A. (2009). Technical features of curriculum-based measures for beginning writers. Journal of Learning Disabilities, 42, 41–60.
McMaster, K. L., Du, X., Yeo, S., Deno, S. L., Parker, D., & Ellis, T. (2011). Curriculum- based measures of beginning writing: Technical features of the slope. Exceptional Children, 77(2), 185–206. McMaster, K. L., & Espin, C. (2007). Technical features of curriculum- based measurement in writing. Journal of Special Education, 41, 68–84. McMaster, K. L., Fuchs, D., & Fuchs, L. S. (2006). Research on peer- assisted learning strategies: The promise and limitations of peer-mediated instruction. Reading and Writing Quarterly: Overcoming Learning Difficulties, 22, 5–25. McMaster, K. L., Kunkel, A., Shin, J., Jung, P. G., & Lembke, E. (2018). Early writing intervention: A best evidence synthesis. Journal of Learning Disabilities, 51(4), 363–380. McMaster, K. L., Parker, D., & Jung, P. G. (2012). Using curriculum- based measurement for beginning writers within a response to intervention framework. Reading Psychology, 33(1–2), 190–216. McMaster, K. L., Shin, J., Espin, C. A., Jung, P. G., Wayman, M. M., & Deno, S. L. (2017). Monitoring elementary students’ writing progress using curriculum- based measures: Grade and gender differences. Reading and Writing, 30(9), 2069–2091. McMaster, K. L., & Wagner, D. (2007). Monitoring response to general education instruction. In S. R. Jimerson, M. K. Burns, & A. M. VanDerHeyden (Eds.), Handbook of response to intervention: The science and practice of assessment and intervention (pp. 223–233). New York: Springer. McVancel, S. M., Missall, K. N., & Bruhn, A. L. (2018). Examining incremental rehearsal: Multiplication fluency with fifth-grade students with math IEP goals. Contemporary School Psychology, 22(3), 220–232. Meichenbaum, D. H., & Goodman, J. (1971). Training impulsive children to talk to themselves: A means of developing self-control. Journal of Abnormal Psychology, 77, 117– 126. Meisinger, E. B., Bradley, B. A., Schwanenflugel, P. J., Kuhn, M. R., & Morris, R. D. (2009). Myth and reality of the word caller: The relation between teacher nominations and prevalence among elementary school
children. School Psychology Quarterly, 24, 147–159. Melby-Lervåg, M., & Hulme, C. (2013). Is working memory training effective? A meta- analytic review. Developmental Psychology, 49(2), 270. Melby-Lervåg, M., Lyster, S. A. H., & Hulme, C. (2012). Phonological skills and their role in learning to read: A meta-analytic review. Psychological Bulletin, 138(2), 322–340. Melby-Lervåg, M., Redick, T. S., & Hulme, C. (2016). Working memory training does not improve performance on measures of intelligence or other measures of “far transfer” evidence from a meta-analytic review. Perspectives on Psychological Science, 11(4), 512–534. Mercer, S. H., Harpole, L. L., Mitchell, R. R., McLemore, C., & Hardy, C. (2012). The impact of probe variability on brief experimental analysis of reading skills. School Psychology Quarterly, 27(4), 223–235. Mercer, S. H., Keller-Margulis, M. A., Faith, E. L., Reid, E. K., & Ochs, S. (2019). The potential for automated text evaluation to improve the technical adequacy of written expression curriculum- based measurement. Learning Disability Quarterly, 42(2), 117– 128. Mercer, S. H., Martínez, R. S., Faust, D., & Mitchell, R. R. (2012). Criterion- related validity of curriculum- based measurement in writing with narrative and expository prompts relative to passage copying speed in 10th grade students. School Psychology Quarterly, 27(2), 85–99. Miciak, J., Cirino, P. T., Ahmed, Y., Reid, E., & Vaughn, S. (2019). Executive functions and response to intervention: Identification of students struggling with reading comprehension. Learning Disability Quarterly, 42(1), 17–31. Miciak, J., & Fletcher, J. M. (2020). The critical role of instructional response for identifying dyslexia and other learning disabilities. Journal of Learning Disabilities, 53(5), 343–353. Miciak, J., Fletcher, J. M., & Stuebing, K. K. (2016). Accuracy and validity of methods for identifying learning disabilities in a response- to-intervention service delivery framework. In Handbook of response to intervention (pp. 421–440). Boston: Springer.
Miciak, J., Fletcher, J. M., Stuebing, K. K., Vaughn, S., & Tolar, T. D. (2014). Patterns of cognitive strengths and weaknesses: Identification rates, agreement, and validity for learning disabilities identification. School Psychology Quarterly, 29(1), 21–40. Miciak, J., Taylor, W. P., Denton, C. A., & Fletcher, J. M. (2015). The effect of achievement test selection on identification of learning disabilities within a pattern of strengths and weaknesses framework. School Psychology Quarterly, 30(3), 321. Miller, G., Giovenco, A., & Rentiers, K. A. (1987). Fostering comprehension monitoring in below average readers through self- instruction training. Journal of Reading Behavior, 19, 379–394. Miltenberger, R. G. (1990). Assessment of treatment acceptability: A review of the literature. Topics in Early Childhood Special Education, 10(3), 24–38. Mirkin, P., Deno, S., Tindal, G., & Kuehnle, K. (1982). Frequency of measurement and data utilization as factors in standardized behavioral assessment of academic skill. Journal of Behavioral Assessment, 4(4), 361–370. Moeller, A. J., Theiler, J. M., & Wu, C. (2012). Goal setting and student achievement: A longitudinal study. Modern Language Journal, 96(2), 153–169. Montague, M. (1989). Strategy instruction and mathematical problem solving. Journal of Reading, Writing, and Learning Disabilities International, 4, 275–290. Montague, M. (2007). Self- regulation and mathematics instruction. Learning Disabilities Research and Practice, 22(1), 75–83. Montague, M. (2008). Self-regulation strategies to improve mathematical problem solving for students with learning disabilities. Learning Disability Quarterly, 31(1), 37–44. Montague, M., & Bos, C. S. (1986). Verbal mathematical problem solving and learning disabilities: A review. Focus on Learning Problems in Mathematics, 8(2), 7–21. Mooney, P., McCarter, K. S., Schraven, J., & Callicoatte, S. (2013). Additional performance and progress validity findings targeting the content-focused vocabulary matching. Exceptional Children, 80(1), 85–100. Mooney, P., Ryan, J. P., Uhling, B. M., Reid, R., & Epstein, M. (2005). A review of self- management interventions targeting aca-
demic outcomes for students with emotional and behavior disorders. Journal of Behavioral Education, 14, 203–221. Morgan, P. L., Farkas, G., Wang, Y., Hillemeier, M. M., Oh, Y., & Maczuga, S. (2019). Executive function deficits in kindergarten predict repeated academic difficulties across elementary school. Early Childhood Research Quarterly, 46, 20–32. Morgan, P. L., & Sideridis, G. D. (2006). Contrasting the effectiveness of fluency interventions for students with or at risk for learning disabilities: A multilevel random coefficient modeling meta-analysis. Learning Disabilities Research and Practice, 21(4), 191–210. Morris, D., Trathen, W., Frye, E. M., Kucan, L., Ward, D., Schlagal, R., & Hendrix, M. (2013). The role of reading rate in the informal assessment of reading ability. Literacy Research and Instruction, 52(1), 52–64. Morrison, F. J., Ponitz, C. C., & McClelland, M. M. (2010). Self-regulation and academic achievement in the transition to school. In S. D. Calkins & M. A. Bell (Eds.), Child development at the intersection of emotion and cognition (pp. 203–224). Human Brain Development. Washington, DC: American Psychological Association. Mortweet, S. L., Utley, C. A., Walker, D., Dawson, H. L., Delquadri, J. C., Reddy, S. B., . . . Ledford, D. (1999). Classwide peer tutoring: Teaching students with mild mental retardation in inclusive classrooms. Exceptional Children, 65, 524–536. Muijselaar, M. M., Kendeou, P., de Jong, P. F., & van den Broek, P. W. (2017). What does the CBM-maze test measure? Scientific Studies of Reading, 21(2), 120–132. Muter, V., Hulme, C., Snowling, M., & Taylor, S. (1998). Segmentation, not rhyming, predicts early progress in learning to read. Journal of Experimental Child Psychology, 71(1), 3–27. Muyskens, P., Marston, D., & Reschly, A. L. (2007). The use of response to intervention practices for behavior: An examination of the validity of a screening instrument. TheCalifornia School Psychologist, 12(1), 31–45. Myers, D., Freeman, J., Simonsen, B., & Sugai, G. (2017). Classroom management with exceptional learners. Teaching Exceptional Children, 49(4), 223–230. Myers, S. S. (1990). The management of cur-
riculum time as it relates to student engaged time. Educational Review, 42, 13–23. Naglieri, J. A., & Das, J. P. (1997). Das– Naglieri Cognitive Assessment System (CAS). Itasca, IL: Riverside. Namkung, J. M., & Bricko, N. (2020). The effects of algebraic equation solving intervention for students with mathematics learning difficulties. Journal of Learning Disabilities, article 0022219420930814. Namkung, J. M., & Fuchs, L. S. (2012). Early numerical competencies of students with different forms of mathematics difficulty. Learning Disabilities Research and Practice, 27(1), 2–11. Namkung, J. M., Fuchs, L. S., & Koziol, N. (2018). Does initial learning about the meaning of fractions present similar challenges for students with and without adequate whole- number skill? Learning and Individual Differences, 61, 151–157. Nastasi, B. K., & Clements, D. H. (1991). Research on cooperative learning: Implications for practice. School Psychology Review, 20, 110–131. Nation, K., & Hulme, C. (1997). Phonemic segmentation, not onset-rime segmentation, predicts early reading and spelling skills. Reading Research Quarterly, 32(2), 154–167. Nation, K., & Snowling, M. (1997). Assessing reading difficulties: The validity and utility of current measures of reading skill. British Journal of Educational Psychology, 67(3), 359–370. National Association of State Directors of Special Education. (2006). Response to intervention: Policy considerations and implementation. Washington, DC: Author. National Center for Education Statistics. (2022). Students with disabilities. Retrieved from https://nces.ed.gov/programs/coe/ indicator_cgg.asp. National Center on Intensive Intervention. (2013). Data-based individualization: A framework for intensive intervention. Retrieved from https:// intensiveintervention.org/sites/default/files/ DBI_Framework.pdf. National Early Literacy Panel. (2008). Developing early literacy: Report of the National Early Literacy Panel. Retrieved from https://lincs.ed.gov/publications/pdf/ NELPReport09.pdf.
National Governors Association Center for Best Practices & Council of Chief State School Officers. (2010). Common Core State Standards. Washington, DC: Authors. National Mathematics Advisory Panel. (2008). Foundations for success: The final report of the National Mathematics Advisory Panel. Washington, DC: U.S. Department of Education. National Reading Panel. (2000). Teaching children to read: An evidence-based assessment of the scientific research literature on reading and its implications for reading. Retrieved from www.nichd.nih.gov/publications/nrp/ smallbook.htm. National Research Council. (2001). Adding it up: Helping children learn mathematics. Washington, DC: National Academies Press. Nelson, J., Benner, G. J., & Gonzalez, J. (2003). Learner characteristics that influence the treatment effectiveness of early literacy interventions: A meta-analytic review. Learning Disabilities Research and Practice, 18(4), 255–267. Nelson, J. M., & Manset- Williamson, G. (2006). The impact of explicit, self-regulatory reading comprehension strategy instruction on the reading-specific self-efficacy, attributions, and affect of students with reading disabilities. Learning Disability Quarterly, 29, 213–230. Nelson, R. O. (1977). Methodological issues in assessment via self-monitoring. In J. D. Cone & R. P. Hawkins (Eds.), Behavioral assessment: New directions in clinical psychology (pp. 217–240). New York: Brunner/Mazel. Nelson, R. O. (1985). Behavioral assessment in the school setting. In T. R. Kratochwill (Ed.), Advances in school psychology (Vol. 4, pp. 45–88). Hillsdale, NJ: Erlbaum. Nelson, R. O., & Hayes, S. C. (1981). Theoretical explanations for reactivity in self- monitoring. Behavior Modification, 5, 3–14. Nelson, R. O., & Hayes, S. C. (1986). Conceptual foundations of behavioral assessment. New York: Guilford Press. Nelson-Walker, N. J., Fien, H., Kosty, D. B., Smolkowski, K., Smith, J. L. M., & Baker, S. K. (2013). Evaluating the effects of a systemic intervention on first-grade teachers’ explicit reading instruction. Learning Disability Quarterly, 36(4), 215–230.
Neural Assembly. (2021). Cogmed. Retrieved from www.cogmed.com. Newman, B., Reinecke, D. R., & Meinberg, D. L. (2000). Self- management of varied responding in three students with autism. Behavioral Interventions, 15, 145–151. Noll, M. B., Kamps, D., & Seaborn, C. F. (1993). Prereferral intervention for students with emotional or behavioral risks: Use of a behavioral consultation model. Journal of Emotional and Behavioral Disorders, 1, 203–214. Norton, E. S., & Wolf, M. (2012). Rapid automatized naming (RAN) and reading fluency: Implications for understanding and treatment of reading disabilities. Annual Review of Psychology, 63, 427–452. Nugiel, T., Roe, M. A., Taylor, W. P., Cirino, P. T., Vaughn, S. R., Fletcher, J. M., . . . Church, J. A. (2019). Brain activity in struggling readers before intervention relates to future reading gains. Cortex, 111, 286–302. Oakes, W. P., Lane, K. L., Menzies, H. M., & Buckman, M. M. (2018). Instructional feedback: An effective, efficient, low- intensity strategy to support student success. Beyond Behavior, 27(3), 168–174. O’Bryon, E. C., & Rogers, M. R. (2010). Bilingual school psychologists’ assessment practices with English language learners. Psychology in the Schools, 47(10), 1018–1034. O’Connor, R. E. (2011). Phoneme awareness and the alphabetic principle. In R. E. O’Connor & P. F. Vadasy (Eds.), Handbook of reading interventions (pp. 9–26). New York: Guilford Press. O’Connor, R. E. (2014). Reading multisyllabic words. In R. E. O’Connor (Ed.), Teaching word recognition: Strategies for students with learning difficulties (2nd ed., pp. 96–114). New York: Guilford Press. O’Connor, R. E. (2018). Reading fluency and students with reading disabilities: How fast is fast enough to promote reading comprehension? Journal of Learning Disabilities, 51(2), 124–136. O’Connor, R. E., Beach, K. D., Sanchez, V. M., Bocian, K. M., & Flynn, L. J. (2015). Building BRIDGES: A design experiment to improve reading and United States history knowledge of poor readers in eighth grade. Exceptional Children, 81(4), 399–425.
O’Connor, R. E., Bell, K. M., Harty, K. R., Larkin, L. K., Sackor, S. M., & Zigmond, N. (2002). Teaching reading to poor readers in the intermediate grades: A comparison of text difficulty. Journal of Educational Psychology, 94(3), 474–494. O’Connor, R. E., Bocian, K. M., Beach, K. D., Sanchez, V., & Flynn, L. J. (2013). Special education in a 4-year Response to Intervention (RtI) environment: Characteristics of students with learning disability and grade of identification. Learning Disabilities Research and Practice, 28(3), 98–112. O’Connor, R. E., & McCartney, K. (2007). Examining teacher–child relationships and achievement as part of an ecological model of development. American Educational Research Journal, 44(2), 340–369. O’Connor, R. E., Notari-Syverson, A., & Vadasy, P. F. (1996). Ladders to literacy: An activity book for kindergarten children. Seattle: Washington Research Institute. O’Connor, R. E., Sanchez, V., & Kim, J. J. (2017). Responsiveness to intervention and multi-tiered systems of support for reducing reading difficulties and identifying learning disability. In Handbook of special education (2nd ed., pp. 189–202). New York: Routledge. O’Connor, R. E., Swanson, H. L., & Geraghty, C. (2010). Improvement in reading rate under independent and difficult text levels: Influences on word and comprehension skills. Journal of Educational Psychology, 102(1), 2–18. Oliver, R. M., Wehby, J. H., & Reschly, D. J. (2011). Teacher classroom management practices: Effects on disruptive or aggressive student behavior. Campbell Systematic Reviews, 7(1), 1–55. Ollendick, T. H., & Hersen, M. (1984). Child behavior assessment: Principles and procedures. New York: Pergamon Press. Olulade, O. A., Napoliello, E. M., & Eden, G. F. (2013). Abnormal visual motion processing is not a cause of dyslexia. Neuron, 79(1), 180–190. Ouellette, G., & Fraser, J. R. (2009). What exactly is a yait anyway: The role of semantics in orthographic learning. Journal of Experimental Child Psychology, 104(2), 239–251.
Ouellette, G. P., & Sénéchal, M. (2008). A window into early literacy: Exploring the cognitive and linguistic underpinnings of invented spelling. Scientific Studies of Reading, 12(2), 195–219. Ownby, R. L., Wallbrown, F., D’Atri, A., & Armstrong, B. (1985). Patterns of referrals for school psychological services: Replication of the referral problems category system. Special Services in the School, 1(4), 53–66. Paly, B. J., Klingbeil, D. A., Clemens, N. H., & Osman, D. J. (2021). A cost-effectiveness analysis of four approaches to universal screening for academic risk in reading in upper elementary and middle school. Manuscript submitted for publication. Paquette, K. R. (2009). Integrating the 6 + 1 writing traits model with cross-age tutoring: An investigation of elementary students’ writing development. Literacy Research and Instruction, 48, 28–38. Parker, C. (2000). Identifying technically adequate measures of vocabulary for young children at risk for reading disabilities. Unpublished PhD dissertation, University of Oregon, Eugene. Parker, D. C., Dickey, B. N., Burns, M. K., & McMaster, K. L. (2012). An application of brief experimental analysis with early writing. Journal of Behavioral Education, 21(4), 329–349. Parker, D. C., Van Norman, E., & Nelson, P. M. (2018). Decision rules for progress monitoring in reading: Accuracy during a largescale Tier II intervention. Learning Disabilities Research and Practice, 33(4), 219–228. Partanen, M., Siegel, L. S., & Giaschi, D. E. (2019). Effect of reading intervention and task difficulty on orthographic and phonological reading systems in the brain. Neuropsychologia, 130, 13–25. Pashler, H., McDaniel, M., Rohrer, D., & Bjork, R. (2009). Learning styles: Concepts and evidence. Psychological Science in the Public Interest, 9(3), 105–119. Paulesu, E., Danelli, L., & Berlingeri, M. (2014). Reading the dyslexic brain: Multiple dysfunctional routes revealed by a new meta- analysis of PET and fMRI activation studies. Frontiers in Human Neuroscience, 8, 830–842. Pearson, P. D., & Johnson, D. D. (1978). Teach-
ing reading comprehension. New York: Holt, Rinehart and Winston. Peltier, C., Morin, K. L., Bouck, E. C., Lingo, M. E., Pulos, J. M., Scheffler, F. A., . . . Deardorff, M. E. (2020). A meta- analysis of single- case research using mathematics manipulatives with students at risk or identified with a disability. Journal of Special Education, 54(1), 3–15. Peltier, C. J., Vannest, K. J., & Marbach, J. J. (2018). A meta-analysis of schema instruction implemented in single-case experimental designs. Journal of Special Education, 52(2), 89–100. Peng, P., Fuchs, D., Fuchs, L. S., Elleman, A. M., Kearns, D. M., Gilbert, J. K., . . . Patton, S., III. (2019). A longitudinal analysis of the trajectories and predictors of word reading and reading comprehension development among at-risk readers. Journal of Learning Disabilities, 52(3), 195–208. Peng, P., Lee, K., Luo, J., Li, S., Joshi, R. M., & Tao, S. (2020). Simple view of reading in Chinese: A one-stage meta- analytic structural equation modeling. Review of Educational Research, 91(1). Peng, P., & Lin, X. (2019). The relation between mathematics vocabulary and mathematics performance among fourth graders. Learning and Individual Differences, 69, 11–21. Peng, P., Lin, X., Ünal, Z. E., Lee, K., Namkung, J., Chow, J., & Sales, A. (2020). Examining the mutual relations between language and mathematics: A meta-analysis. Psychological Bulletin, 146(7), 595–634. Perfetti, C. A. (1985). Reading Ability. Oxford, UK: Oxford University Press. Perfetti, C. (2007). Reading ability: Lexical quality to comprehension. Scientific Studies of Reading, 11(4), 357–383. Perfetti, C. A., Beck, I., Bell, L. C., & Hughes, C. (1987). Phonemic knowledge and learning to read are reciprocal: A longitudinal study of first grade children. Merrill–Palmer Quarterly 33(3), 283–319. Perfetti, C., & Stafura, J. (2014). Word knowledge in a theory of reading comprehension. Scientific Studies of Reading, 18(1), 22–37. Petersen- Brown, S., & Burns, M. K. (2011). Adding a vocabulary component to incremental rehearsal to enhance retention and generalization. School Psychology Quarterly, 26(3), 245.
Peterson, D. W., Prasse, D. P., Shinn, M. R., & Swerdlick, M. E. (2007). In S. R. Jimerson, M. K. Burns, & A. M. VanDerHeyden (Eds.), Handbook of response to intervention: The science and practice of assessment and intervention (pp. 300–318). New York: Springer. Peterson, K. M. H., & Shinn, M. R. (2002). Severe discrepancy models: Which best explains school identification practices for learning disabilities? School Psychology Review, 31, 459–476. Phillips, B. M., & Torgesen, J. K. (2007). Phonemic awareness and reading: Beyond the growth of initial reading accuracy. In D. K. Dickinson & S. B. Neuman (Eds.), Handbook of early literacy research (Vol. 2, pp. 101–112). New York: Guilford Press. Pianta, R. C., Belsky, J., Vandergrift, N., Houts, R., & Morrison, F. J. (2008). Classroom effects on children’s achievement trajectories in elementary school. American Educational Research Journal, 45, 365–397. Piasta, S. B., Purpura, D. J., & Wagner, R. K. (2010). Fostering alphabet knowledge development: A comparison of two instructional approaches. Reading and Writing, 23(6), 607–626. Piasta, S. B., & Wagner, R. K. (2010). Learning letter names and sounds: Effects of instruction, letter type, and phonological processing skill. Journal of Experimental Child Psychology, 105(4), 324–344. Pokorski, E. A. (2019). Group contingencies to improve classwide behavior of young children. Teaching Exceptional Children, 51(5), 340–349. Powell, S. R., Berry, K. A., & Barnes, M. A. (2020). The role of pre- algebraic reasoning within a word-problem intervention for third-grade students with mathematics difficulty. ZDM, 52(1), 151–163. Powell, S. R., Berry, K. A., Fall, A.-M., Roberts, G., Fuchs, L. S., & Barnes, M. A. (2021). Alternative paths to improved word-problem performance: An advantage for embedding prealgebraic reasoning instruction within word-problem intervention. Journal of Educational Psychology, 113(5), 898–910. Powell, S. R., & Fuchs, L. S. (2010). Contribution of equal-sign instruction beyond word- problem tutoring for third-grade students with mathematics difficulty. Journal of Educational Psychology, 102(2), 381.
Powell, S. R., & Fuchs, L. S. (2012). Early numerical competencies and students with mathematics difficulty. Focus on Exceptional Children, 44(5), 1–12. Powell, S. R., & Fuchs, L. S. (2018). Effective word-problem instruction: Using schemas to facilitate mathematical reasoning. Teaching Exceptional Children, 51(1), 31–42. Powell, S. R., & Nelson, G. (2017). An investigation of the mathematics-vocabulary knowledge of first-grade students. Elementary School Journal, 117, 664–686. Powell, S. R., Stevens, E. A., & Hughes, E. M. (2019). Math language in middle school: Be more specific. Teaching Exceptional Children, 51(4), 286–295. Powell-Smith, K. A., & Bradley-Klug, K. L. (2001). Another look at the “C” in CBM: Does it really matter if curriculum-based measurement reading probes are curriculum-based? Psychology in the Schools, 38, 299–312. Prater, M. A., Hogan, S., & Miller, S. R. (1992). Using self-monitoring to improve on-task behavior and academic skills of an adolescent with mild handicaps across special and regular education settings. Education and Treatment of Children, 15, 43–55. Pratt-Struthers, J., Struthers, B., & Williams, R. L. (1983). The effects of the Add-a-Word spelling program on spelling accuracy during creative writing. Education and Treatment of Children, 6, 277–283. Protopapas, A., Katopodi, K., Altani, A., & Georgiou, G. K. (2018). Word reading fluency as a serial naming task. Scientific Studies of Reading, 22(3), 248–263. Purpura, D. J., & Logan, J. A. (2015). The nonlinear relations of the approximate number system and mathematical language to early mathematics development. Developmental Psychology, 51(12), 1717. Rakes, C. R., Valentine, J. C., McGatha, M. B., & Ronau, R. N. (2010). Methods of instructional improvement in algebra: A systematic review and meta-analysis. Review of Educational Research, 80(3), 372–400. Ramirez, G., Fries, L., Gunderson, E., Schaeffer, M. W., Maloney, E. A., Beilock, S. L., & Levine, S. C. (2019). Reading anxiety: An early affective impediment to children’s success in reading. Journal of Cognition and Development, 20(1), 15–34.
Rapport, M. D., Orban, S. A., Kofler, M. J., & Friedman, L. M. (2013). Do programs designed to train working memory, other executive functions, and attention benefit children with ADHD? A meta-analytic review of cognitive, academic, and behavioral outcomes. Clinical Psychology Review, 33(8), 1237–1252. Rashotte, C. A., & Torgesen, J. K. (1985). Repeated reading and reading fluency in learning disabled children. Reading Research Quarterly, 20, 180–188. Rathvon, N. (2008). Effective school interventions: Strategies for enhancing academic achievement and social competence (2nd ed.). New York: Guilford Press. Ray, A. B., Graham, S., & Liu, X. (2019). Effects of SRSD college entrance essay exam instruction for high school students with disabilities or at-risk for writing difficulties. Reading and Writing, 32(6), 1507–1529. Redick, T. S. (2019). The hype cycle of working memory training. Current Directions in Psychological Science, 28(5), 423–429. Redick, T. S., Shipstead, Z., Harrison, T. L., Hicks, K. L., Fried, D. E., Hambrick, D. Z., . . . Engle, R. W. (2013). No evidence of intelligence improvement after working memory training: A randomized, placebo-controlled study. Journal of Experimental Psychology: General, 142(2), 359–373. Reed, D. K., & Sturges, K. M. (2013). An examination of assessment fidelity in the administration and interpretation of reading tests. Remedial and Special Education, 34(5), 259–268. Reed, D. K., Zimmermann, L. M., Reeger, A. J., & Aloe, A. M. (2019). The effects of varied practice on the oral reading fluency of fourth-grade students. Journal of School Psychology, 77, 24–35. Rehfeld, D. M., Kirkpatrick, M., O’Guinn, N., & Renbarger, R. (2022). A meta-analysis of phonemic awareness instruction provided to children suspected of having a reading disability. Language, Speech, and Hearing Services in Schools, 53(4), 1177–1201. Reid, R. (1996). Research in self- monitoring with students with learning disabilities: The present, the prospects, the pitfalls. Journal of Learning Disabilities, 29, 317–331. Reid, R., & Harris, K. R. (1993). Self-monitoring of attention versus self-monitoring of perfor-
mance: Effects on attention and academic performance. Exceptional Children, 60, 29–40. Reid, R., Trout, A. L., & Schartz, M. (2005). Self- regulation interventions for children with attention deficit/hyperactivity disorder. Exceptional Children, 71(4), 361–380. Reimers, T. M., Wacker, D. P., Cooper, L. J., & deRaad, A. O. (1992). Acceptability of behavioral treatments for children: Analog and naturalistic evaluations by parents. School Psychology Review, 21, 628–643. Reimers, T. M., Wacker, D. P., Derby, K. M., & Cooper, L. J. (1995). Relation between parental attributions and the acceptability of behavioral treatments for their child’s behavior problems. Behavioral Disorders, 20, 171–178. Reitsma, P. (1983). Printed word learning in beginning readers. Journal of Experimental Child Psychology, 36(2), 321–339. Renick, M. J., & Harter, S. (1989). Impact of social comparisons on the developing self- perceptions of learning disabled students. Journal of Educational Psychology, 81(4), 631–638. Renshaw, T. L., & Cook, C. R. (2018). Initial development and validation of the youth internalizing problems screener. Journal of Psychoeducational Assessment, 36(4), 366– 378. Reschly, A. L., Busch, T. W., Betts, J., Deno, S. L., & Long, J. D. (2009). Curriculum-based measurement oral reading as an indicator of reading achievement: A meta-analysis of the correlational evidence. Journal of School Psychology, 47(6), 427–469. Reynolds, C. R., & Shaywitz, S. E. (2009). Response to intervention: Ready or not? Or, wait-to-fail to watch-them-fail. School Psychology Quarterly, 42, 130–145. Rhode, G., Morgan, D. P., & Young, K. R. (1983). Generalization and maintenance of treatment gains of behaviorally handicapped students from resource rooms to regular classrooms using self-evaluation procedures. Journal of Applied Behavior Analysis, 16, 171–188. Riccomini, P. J., Smith, G. W., Hughes, E. M., & Fries, K. M. (2015). The language of mathematics: The importance of teaching and learning mathematical vocabulary. Reading and Writing Quarterly, 31(3), 235–252.
Richards, T. L., Berninger, V. W., Stock, P., Altemeier, L., Trivedi, P., & Maravilla, K. R. (2011). Differences between good and poor child writers on fMRI contrasts for writing newly taught and highly practiced letter forms. Reading and Writing, 24(5), 493–516. Richlan, F. (2019). The functional neuroanatomy of letter-speech sound integration and its relation to brain abnormalities in developmental dyslexia. Frontiers in Human Neuroscience, 13, 21–33. Richlan, F. (2020). The functional neuroanatomy of developmental dyslexia across languages and writing systems. Frontiers in Psychology, 11, 155–165. Richlan, F., Kronbichler, M., & Wimmer, H. (2009). Functional abnormalities in the dyslexic brain: A quantitative meta-analysis of neuroimaging studies. Human Brain Mapping, 30(10), 3299–3308. Riedel, B. W. (2007). The relation between DIBELS, reading comprehension, and vocabulary in urban first-grade students. Reading Research Quarterly, 42, 546–567. Rief, S. (2007). Strategies to improve self- regulation. In S. Goldstein, & R. B. Brooks (Eds.), Understanding and managing children’s classroom behavior: Creating sustainable, resilient classrooms (2nd ed., pp. 322– 360). Hoboken, NJ: Wiley. Riley-Tillman, T. C., Chafouleas, S. M., Sassu, K. A., Chanese, J. A., & Glazer, A. D. (2008). Examining the agreement of direct behavior ratings and systematic direct observation data for on-task and disruptive behavior. Journal of Positive Behavior Interventions, 10(2), 136–143. Ritchey, K. D. (2006). Learning to write: Progress-monitoring tools for beginning and at-risk writers. Teaching Exceptional Children, 39(2), 22–27. Ritchey, K. D. (2008). The building blocks of writing: Learning to write letters and spell words. Reading and Writing, 21(1), 27–47. Ritchey, K. D., Coker Jr., D. L., & McCraw, S. B. (2010). A comparison of metrics for scoring beginning spelling. Assessment for Effective Intervention, 35(2), 78–88. Ritchey, K. D., Silverman, R. D., Schatschneider, C., & Speece, D. L. (2015). Prediction and stability of reading problems in middle childhood. Journal of Learning Disabilities, 48(3), 298–309.
Ritchey, K. D., & Speece, D. L. (2006). From letter names to word reading: The nascent role of sublexical fluency. Contemporary Educational Psychology, 31(3), 301–327. Robacker, C. M., Rivera, C. J., & Warren, S. H. (2016). A token economy made easy through ClassDojo. Intervention in School and Clinic, 52(1), 39–43. Roberts, R. N., Nelson, R. O., & Olson, T. W. (1987). Self-instruction: An analysis of the differential effects of instruction and reinforcement. Journal of Applied Behavior Analysis, 20, 235–242. Robertson, S. J., Simon, S. J., Pachman, J. S., & Drabman, R. S. (1980). Self-control and generalization procedures in a classroom of disruptive retarded children. Child Behavior Therapy, 1, 347–362. Rock, M. L. (2005). Use of strategic self-monitoring to enhance academic engagement, productivity, and accuracy of students with and without disabilities. Journal of Positive Behavioral Interventions, 7, 3–17. Rock, M. L., & Thead, B. K. (2007). The effects of fading a strategic self-monitoring intervention on students’ academic engagement, accuracy, and productivity. Journal of Behavioral Education, 16, 389–412. Rodgers, E., D’Agostino, J. V., Kelly, R. H., & Mikita, C. (2018). Oral reading accuracy: Findings and implications from recent research. The Reading Teacher, 72(2), 149–157. Roehling, J. V., Hebert, M., Nelson, J. R., & Bohaty, J. J. (2017). Text structure strategies for improving expository reading comprehension. The Reading Teacher, 71(1), 71–82. Rohrbeck, C. A., Ginsburg-Block, M. D., Fantuzzo, J. W., & Miller, T. R. (2003). Peer-assisted learning interventions with elementary school students: A meta-analytic review. Journal of Educational Psychology, 95, 240–257. Rohrer, D., Dedrick, R. F., Hartwig, M. K., & Cheung, C.-N. (2020). A randomized controlled trial of interleaved mathematics practice. Journal of Educational Psychology, 112(1), 40–52. Rohrer, D., Dedrick, R. F., & Stershic, S. (2015). Interleaved practice improves mathematics learning. Journal of Educational Psychology, 107(3), 900–908.
Romig, J. E., Miller, A. A., Therrien, W. J., & Lloyd, J. W. (2021). Meta-analysis of prompt and duration for curriculum-based measurement of written language. Exceptionality, 29(2), 133–149. Romig, J. E., Therrien, W. J., & Lloyd, J. W. (2017). Meta- analysis of criterion validity for curriculum-based measurement in written language. Journal of Special Education, 51(2), 72–82. Rosenfield, S. A. (1987). Instructional consultation. Hillsdale, NJ: Erlbaum. Rosenfield, S. A., & Gravois, T. (1995). Organizational consultation. New York: Guilford Press. Rosenshine, B. V. (1979). Content, time, and direct instruction. In P. L. Peterson & H. J. Walberg (Eds.), Research on teaching (pp. 28–56). Berkeley, CA: McCutchan. Rosenshine, B. V. (1981). Academic engaged time, content covered, and direct instruction. Journal of Education, 3, 38–66. Rosenshine, B. (1987). Explicit teaching and teacher training. Journal of Teacher Education, 38(3), 34–36. Rosenshine, B. V. (2009). The empirical support for direct instruction. Constructivist instruction: Success or failure? In S. Tobias & T. M. Duffy (Eds.), Constructivist instruction: Success or failure? (pp. 201–220). New York: Routledge/Taylor & Francis. Rosenshine, B., Meister, C., & Chapman, S. (1996). Teaching students to generate questions: A review of the intervention studies. Review of Educational Research, 66, 181– 221. Roswell, F. G., Chall, J. S., Curtis, M. E., & Kearns, G. (2006). Diagnostic Assessments of Reading— Second Edition. Austin, TX: PRO-ED. Royer, D. J., Lane, K. L., Dunlap, K. D., & Ennis, R. P. (2019). A systematic review of teacher-delivered behavior-specific praise on K–12 student performance. Remedial and Special Education, 40(2), 112–128. Ruffini, S. J., Miskell, R., Lindsay, J., McInerney, M., & Waite, W. (2016). Measuring the implementation fidelity of the response to intervention framework in Milwaukee public schools (REL 2017-192). Naperville, IL: Regional Educational Laboratory Midwest. Runge, T. J., & Watkins, M. W. (2006). The structure of phonological awareness among
kindergarten students. School Psychology Review, 35(3), 370–386. Rupley, W. H., Blair, T. R., & Nichols, W. D. (2009). Effective reading instruction for struggling readers: The role of direct/explicit teaching. Reading and Writing Quarterly, 25(2–3), 125–138. Sabatini, J., O’Reilly, T., Halderman, L. K., & Bruce, K. (2014). Integrating scenario-based and component reading skill measures to understand the reading behavior of struggling readers. Learning Disabilities Research and Practice, 29, 36–43. Sabatini, J., Wang, Z., & O’Reilly, T. (2019). Relating reading comprehension to oral reading performance in the NAEP fourth- grade special study of oral reading. Reading Research Quarterly, 54(2), 253–271. Sáez, L., Nese, J. F., Alonzo, J., & Tindal, G. (2016). Individual differences in kindergarten through grade 2 fluency relations. Learning and Individual Differences, 49, 100–109. Sala, G., & Gobet, F. (2017). Working memory training in typically developing children: A meta- analysis of the available evidence. Developmental Psychology, 53(4), 671. Salvia, J. A., & Hughes, C. (1990). Curriculum- based assessment: Testing what is taught. New York: Macmillan. Salvia, J. A., & Ysseldyke, J. E. (2001). Assessment in special and remedial education (8th ed.). Boston: Houghton Mifflin. Salvia, J. A., Ysseldyke, J. E., & Bolt, S. (2007). Assessment in special and inclusive education (10th ed.). Boston: Houghton Mifflin. Salvia, J. A., Ysseldyke, J. E., & Witmer, S. (2016). Assessment in special and inclusive education (13th ed.). Boston: Cengage Learning. Samuels, S. J. (1979). The method of repeated readings. The Reading Teacher, 32(4), 403– 408. Sanders, S. (2020). Using the self- regulated strategy development framework to teach reading comprehension strategies to elementary students with disabilities. Education and Treatment of Children, 43, 57–70. Santangelo, T. (2014). Why is writing so difficult for students with learning disabilities? Learning Disabilities: A Contemporary Journal, 12(1), 5–20. Santangelo, T., Harris, K. R., & Graham, S. (2008). Using self- regulated strategy
development to support students who have “trubol giting thangs into werds.” Remedial and Special Education, 29, 78–89. Santogrossi, D. A., O’Leary, K. D., Romanczyk, R. G., & Kaufman, K. F. (1973). Self- evaluation by adolescents in a psychiatric hospital school token program. Journal of Applied Behavior Analysis, 6, 277–287. Satsangi, R., Billman, R. H., Raines, A. R., & Macedonia, A. M. (2020). Studying the impact of video modeling for algebra instruction for students with learning disabilities. Journal of Special Education, article 0022466920937467. Satsangi, R., & Bouck, E. C. (2015). Using virtual manipulative instruction to teach the concepts of area and perimeter to secondary students with learning disabilities. Learning Disability Quarterly, 38(3), 174–186. Satsangi, R., Bouck, E. C., Taber- Doughty, T., Bofferding, L., & Roberts, C. A. (2016). Comparing the effectiveness of virtual and concrete manipulatives to teach algebra to secondary students with learning disabilities. Learning Disability Quarterly, 39, 240–253. Satsangi, R., Hammer, R., & Evmenova, A. S. (2018). Teaching multistep equations with virtual manipulatives to secondary students with learning disabilities. Learning Disabilities Research and Practice, 33, 99–111. Satsangi, R., Hammer, R., & Hogan, C. D. (2018). Studying virtual manipulatives paired with explicit instruction to teach algebraic equations to students with learning disabilities. Learning Disability Quarterly, 41, 227–242. Satsangi, R., Hammer, R., & Hogan, C. D. (2019). Video modeling and explicit instruction: A comparison of strategies for teaching mathematics to students with learning disabilities. Learning Disabilities Research and Practice, 34(1), 35–46. Saudargas, R. A. (1992). State–Event Classroom Observation System (SECOS). Knoxville: Department of Psychology, University of Tennessee. Saudargas, R. A., & Creed, V. (1980). State– Event Classroom Observation System. Knoxville: University of Tennessee, Department of Psychology. Saudargas, R. A., & Lentz, F. E. (1986). Estimating percent of time and rate via direct
observation: A suggested observational procedure and format. School Psychology Review, 15, 36–48. Savage, R., Georgiou, G., Parrila, R., & Maiorino, K. (2018). Preventative reading interventions teaching direct mapping of graphemes in texts and set-for-variability aid at-risk learners. Scientific Studies of Reading, 22(3), 225–247. Scammacca, N., Roberts, G., Vaughn, S., Edmonds, M., Wexler, J., Reutebuch, C. K., & Torgesen, J. K. (2007). Interventions for adolescent struggling readers: A meta- analysis with implications for practice. Portsmouth NH: RMC Research Corporation, Center on Instruction. Scammacca, N. K., Roberts, G., Vaughn, S., & Stuebing, K. K. (2015). A meta-analysis of interventions for struggling readers in grades 4–12: 1980–2011. Journal of Learning Disabilities, 48(4), 369–390. Schatschneider, C., Francis, D. J., Foorman, B. R., Fletcher, J. M., & Mehta, P. (1999). The dimensionality of phonological awareness: An application of item response theory. Journal of Educational Psychology, 91(3), 439. Schatschneider, C., & Torgesen, J. K. (2004). Using our current understanding of dyslexia to support early identification and intervention. Journal of Child Neurology, 19(10), 759–765. Schermerhorn, P. K., & McLaughlin, T. F. (1997). Effects of the Add-a-Word spelling program on test accuracy, grades, and retention of spelling words with fifth and sixth grade regular education students. Child and Family Behavior Therapy, 19, 23–35. Schmitt, N., Jiang, X., & Grabe, W. (2011). The percentage of words known in a text and reading comprehension. Modern Language Journal, 95(1), 26–43. Schneider, M., Beeres, K., Coban, L., Merz, S., Susan Schmidt, S., Stricker, J., & De Smedt, B. (2017). Associations of non-symbolic and symbolic numerical magnitude processing with mathematical competence: A meta- analysis. Developmental Science, 20(3), e12372. Schneider, W. J., & Kaufman, A. S. (2017). Let’s not do away with comprehensive cognitive assessments just yet. Archives of Clinical Neuropsychology, 32(1), 8–20. Schneider, W., Roth, E., & Ennemoser, M.
(2000). Training phonological skills and letter knowledge in children at risk for dyslexia: A comparison of three kindergarten intervention programs. Journal of Educational Psychology, 92(2), 284. Schniedewind, N., & Salend, S. J. (1987). Cooperative learning works. Teaching Exceptional Children, 19(2), 22–25. Schrank, F., Mather, N., McGrew, K., Wendling, B., & Woodcock, R. W. (2014). Woodcock Johnson IV Test of Achievement. Boston: Houghton Mifflin Harcourt. Schumaker, J. B., Denton, P. H., & Deshler, D. D. (1984). The paraphrasing strategy. Lawrence: University of Kansas Press. Schumaker, J. B., Deshler, D. D., Alley, G. R., & Denton, P. H. (1982). Multipass: A learning strategy for improving reading comprehension. Learning Disabilities Quarterly, 5, 295–304. Schumaker, J. B., Deshler, D. D., Alley, G. R., & Warner, M. M. (1983). Toward the development of an intervention model for learning disabled adolescents. Exceptional Education Quarterly, 3(4), 45–50. Schunk, D. H. (2003). Self-efficacy for reading and writing: Influence of modeling, goal setting, and self-evaluation. Reading and Writing Quarterly, 19(2), 159–172. Schunk, D. H., & Rice, J. M. (1992). Influence of reading-comprehension strategy information on children’s achievement outcomes. Learning Disability Quarterly, 15, 51–64. Schunk, D. H., & Zimmerman, B. J. (1997). Social origins of self-regulatory competence. Educational Psychologist, 32(4), 195–208. Scruggs, T. E., & Mastropieri, M. A. (2002). On babies and bathwater: Addressing the problems of identification of learning disabilities. Learning Disability Quarterly, 25, 155–168. Scruggs, T. E., Mastropieri, M., Veit, D. T., & Osguthorpe, R. T. (1986). Behaviorally disordered students as tutors: Effects on social behavior. Behavioral Disorders, 11(4), 36–43. Seidenberg, M. S. (2005). Connectionist models of word reading. Current Directions in Psychological Science, 14(5), 238–242. Seidenberg, M. (2017). Language at the speed of sight: How we read, why so many can’t, and what can be done about it. New York: Basic Books.
Seidenberg, M. S., & McClelland, J. L. (1989). A distributed, developmental model of word recognition and naming. Psychological Review, 96(4), 523. Sexton, M., Harris, K. R., & Graham, S. (1998). Self-regulated strategy development and the writing process: Effects on essay writing and attributions. Exceptional Children, 64, 295–311. Shanahan, T. (2021). 3P versus 3-cueing: Why recommend one and shun the other? Retrieved from https://shanahanonliteracy. com/blog/3p-versus-3-cueing-why- recommend-one-and-shun-the-other. Shankweiler, D., Lundquist, E., Katz, L., Stuebing, K. K., Fletcher, J. M., Brady, S., . . . Shaywitz, B. A. (1999). Comprehension and decoding: Patterns of association in children with reading difficulties. Scientific Studies of Reading, 3(1), 69–94. Shapiro, E. S. (1981). Self-control procedures with the mentally retarded. In M. Hersen, R. M. Eisler, & P. M. Miller (Eds.), Progress in behavior modification (Vol. 12, pp. 265– 297). New York: Academic Press. Shapiro, E. S. (1984). Self-monitoring. In T. H. Ollendick & M. Hersen (Eds.), Child behavior assessment: Principles and procedures (pp. 148–165). New York: Pergamon Press. Shapiro, E. S. (1987). Behavioral assessment in school psychology. Hillsdale, NJ: Erlbaum. Shapiro, E. S. (1989). Academic skills problems: Direct assessment and intervention. New York: Guilford Press. Shapiro, E. S. (1990). An integrated model for curriculum-based assessment. School Psychology Review, 19, 331–349. Shapiro, E. S. (1992). Gickling’s model of curriculum- based assessment to improve reading in elementary age students. School Psychology Review, 21, 168–176. Shapiro, E. S. (1996a). Academic skills problems: Direct assessment and intervention (2nd ed.). New York: Guilford Press. Shapiro, E. S. (1996b). Academic skills problems workbook. New York: Guilford Press. Shapiro, E. S. (2003a). Behavioral Observation of Students in Schools— BOSS [Computer software]. San Antonio, TX: Pearson. Shapiro, E. S. (2004). Academic skills problems: Direct assessment and intervention (3rd ed.). New York: Guilford Press. Shapiro, E. S. (2011). Academic skills prob-
lems: Direct assessment and intervention (4th ed.). New York: Guilford Press. Shapiro, E. S., Angello, L. M., & Eckert, T. L. (2004). Has curriculum- based assessment become a staple of school psychology practice?: An update and extension of knowledge, use, and attitudes from 1990 to 2000. School Psychology Review, 33, 243–252. Shapiro, E. S., & Bradley, K. L. (1995). Treatment of academic problems. In M. A. Reinecke, F. M. Datillio, & A. Freeman (Eds.), Cognitive therapy with children and adolescents (pp. 344–366). New York: Guilford Press. Shapiro, E. S., Browder, D. M., & D’Huyvetters, K. K. (1984). Increasing academic productivity of severely multi- handicapped children with self-management: Idiosyncratic effects. Analysis and Intervention in Developmental Disabilities, 4, 171–188. Shapiro, E. S., & Clemens, N. H. (2005). Conducting systematic direct classroom observations to define school- related problems. In R. Brown-Chidsey (Ed.), Assessment for intervention: A problem- solving approach (pp. 175–199). New York: Guilford Press. Shapiro, E. S., & Clemens, N. H. (2009). A conceptual model for evaluating systems effects of RTI. Assessment for Effective Intervention, 35, 3–16. Shapiro, E. S., & Cole, C. L. (1994). Behavior change in the classroom: Self-m anagement interventions. New York: Guilford Press. Shapiro, E. S., & Cole, C. L. (1999). Self- monitoring in assessing children’s problems. Psychological Assessment, 11, 448–457. Shapiro, E. S., Dennis, M. S., & Fu, Q. (2015). Comparing computer adaptive and curriculum-based measures of math in progress monitoring. School Psychology Quarterly, 30(4), 470. Shapiro, E. S., & Derr, T. F. (1987). An examination of overlap between reading curricula and standardized achievement tests. Journal of Special Education, 21, 59–67. Shapiro, E. S., DuPaul, G. J., & Bradley-K lug, K. L. (1998). Self-management as a strategy to improve the classroom behavior of adolescents with ADHD. Journal of Learning Disabilities, 31, 545–555. Shapiro, E. S., Durnan, S. L., Post, E. E., & Levinson, T. S. (2002). Self-monitoring procedures for children and adolescents. In A.
References 525 Thomas & J. Grimes (Eds.), Best practices in school psychology IV (Vol. 1, pp. 433–454). Bethesda, MD: National Association of School Psychologists. Shapiro, E. S., & Eckert, T. L. (1993). Curriculum-based assessment among school psychologists: Knowledge, attitudes, and use. Journal of School Psychology, 31, 375–384. Shapiro, E. S., & Eckert, T. L. (1994). Acceptability of curriculum- based assessment by school psychologists. Journal of School Psychology, 32, 167–184. Shapiro, E. S., & Gebhardt, S. N. (2012). Comparing computer- adaptive and curriculum- based measurement methods of assessment. School Psychology Review, 41(3), 295–305. Shapiro, E. S., & Heick, P. (2004). School psychologist assessment practices in the evaluation of students referred for social/behavioral/emotional problems. Psychology in the Schools, 41, 551–561. Shapiro, E. S., Hilt-Panahon, A., Clemens, N., Gischlar, K., Devlin, K., Leichman, E., & Bowles, S. (2009, February). Outcomes of team decision making within an RTI model. Paper presented at the Pacific Coast Research Conference, Coronado, CA. Shapiro, E. S., Hilt-Panahon, A., & Gischlar, K. L. (2010). Implementing proven research in school-based practices: Progress monitoring within a response-to-intervention model. In M. R. Shinn & H. M. Walker (Eds.), Interventions for achievement and behavior problems in a three-tier model including RTI (pp. 175–192). Washington, DC: National Association of School Psychologists. Shapiro, E. S., & Kratochwill, T. R. (Eds.). (2000). Behavioral assessment in schools: Theory, research, and clinical foundations (2nd ed.). New York: Guilford Press. Shapiro, E. S., & Lentz, F. E. (1985). Assessing academic behavior: A behavioral approach. School Psychology Review, 14, 325–338. Shapiro, E. S., & Lentz, F. E. (1986). Behavioral assessment of academic behavior. In T. R. Kratochwill (Ed.), Advances in school psychology (Vol. 5, pp. 87–139). Hillsdale, NJ: Erlbaum. Share, D. L. (1995). Phonological recoding and self-teaching: Sine qua non of reading acquisition. Cognition, 55(2), 151–218. Share, D. L. (2008). Orthographic learning, phonological recoding, and self- teaching.
In Advances in child development and behavior (Vol. 36, pp. 31–82). Chennai, India: JAI. Share, D. L., & Stanovich, K. E. (1995). Cognitive processes in early reading development: Accommodating individual differences into a model of acquisition. Educational Psychology, 1, 1–57. Sheffield, K. I. M., & Walter, R. J. (2010). A review of single- case studies utilizing self- monitoring interventions to reduce problem classroom behaviors. Beyond Behavior, 19(2), 7–13. Shelton, A., Lemons, C. J., & Wexler, J. (2020). Supporting main idea identification and text summarization in middle school co- taught classes. Intervention in School and Clinic, article 1053451220944380. Shimabukuro, S. M., Prater, M. A., Jenkins, A., & Edelen-Smith, P. (1999). The effects of self-monitoring of academic performance on students with learning disabilities and ADD/ ADHD. Education and Treatment of Children, 22, 397–414. Shin, J., Deno, S. L., & Espin, C. (2000). Technical adequacy of the maze task for curriculum- based measurement of reading growth. Journal of Special Education, 34(3), 164–172. Shinn, M. R. (1988). Development of curriculum-based local norms for use in special education decision-making. School Psychology Review, 17, 61–80. Shinn, M. R. (Ed.). (1989a). Curriculum-based measurement: Assessing special children. New York: Guilford Press. Shinn, M. R. (Ed.). (1998). Advanced applications of curriculum-based measurement. New York: Guilford Press. Shinn, M. R., Good, R. H., III, & Stein, S. (1989). Summarizing trend in student achievement: A comparison of models. School Psychology Review, 18, 356–370. Shinn, M. R., Habedank, L., Rodden- Nord, L., & Knutson, N. (1993). Using curriculum- based measurement to identify potential candidates for reintegration into general education. Journal of Special Education, 27, 202–221. Shinn, M. R., Knutson, N., Good, R. H., III, Tilly, W. D., III, & Collins, V. L. (1992). Curriculum- based measurement of oral reading fluency: A confirmatory analysis of
its relation to reading. School Psychology Review, 21(3), 459–479. Shinn, M. R., & Walker, H. M. (Eds.). (2010). Interventions for achievement and behavior problems in a three-tier model including RTI. Washington, DC: National Association of School Psychologists. Shinn, M. R., Walker, H. M., & Stoner, G. (Eds.). (2002). Interventions for academic and behavior problems: II. Preventive and remedial approaches. Washington, DC: National Association of School Psychologists. Shipstead, Z., Hicks, K. L., & Engle, R. W. (2012). Cogmed working memory training: Does the evidence support the claims? Journal of Applied Research in Memory and Cognition, 1(3), 185–193. Shipstead, Z., Redick, T. S., & Engle, R. W. (2012). Is working memory training effective? Psychological Bulletin, 138(4), 628. Shriner, J., & Salvia, J. (1988). Chronic noncorrespondence between elementary math curricula and arithmetic tests. Exceptional Children, 55, 240–248. Siegler, R. S., & Lortie- Forgues, H. (2017). Hard lessons: Why rational number arithmetic is so difficult for so many people. Current Directions in Psychological Science, 26(4), 346–351. Siegler, R. S., Thompson, C. A., & Schneider, M. (2011). An integrated theory of whole number and fractions development. Cognitive Psychology, 62(4), 273–296. Silver, R. B., Measelle, J. R., Armstrong, J. M., & Essex, M. J. (2005). Trajectories of classroom externalizing behavior: Contributions of child characteristics, family characteristics, and the teacher–child relationship during the school transition. Journal of School Psychology, 43(1), 39–60. Simmons, D. C., Fuchs, L. S., Fuchs, D., Mathes, P., & Hodge, J. P. (1995). Effects of explicit teaching and peer tutoring on the reading achievement of learning- disabled and low- performing students in regular classrooms. Elementary School Journal, 95(5), 387–408. Simmons, D., & Kame’enui, E. (1999). Optimize. Eugene: College of Education, Institute for Development of Educational Achievement, University of Oregon. Simonsen, B., Myers, D., & Briere III, D. E. (2011). Comparing a behavioral Check-In/
Check-Out (CICO) intervention to standard practice in an urban middle school setting using an experimental group design. Journal of Positive Behavior Interventions, 13(1), 31–48. Skinner, C. H., Dittmer, K. I., & Howell, L. A. (2000). Direct observation in school settings: Theoretical issues. In E. S. Shapiro & T. R. Kratochwill (Eds.), Behavioral assessment in schools: Theory, research, and clinical foundations (2nd ed., pp. 19–45). New York: Guilford Press. Skinner, C. H., Fletcher, P. A., & Henington, C. (1996). Increasing learning rates by increasing student response rates: A summary of research. School Psychology Quarterly, 11(4), 313. Skinner, C. H., McLaughlin, T. F., & Logan, P. (1997). Cover, copy, and compare: A self- managed academic intervention effective across skills, students, and settings. Journal of Behavioral Education, 7, 295–306. Slate, J. R., & Saudargas, R. A. (1986). Differences in learning disabled and average students’ classroom behaviors. Learning Disability Quarterly, 9, 61–67. Slavin, R. E. (1977). Classroom reward structure: An analytic and practical review. Review of Educational Research, 47, 633– 650. Slavin, R. E. (1980). Cooperative learning. Review of Educational Research, 50, 315– 342. Slavin, R. E. (1983a). Cooperative learning. New York: Longman. Slavin, R. E. (1983b). Team assisted individualization: A cooperative learning solution for adaptive instruction in mathematics (Center for Organization of Schools Report No. 340). Baltimore, MD: Johns Hopkins University. Slavin, R. E., Madden, N. A., & Leavey, M. (1984). Effects of team- assisted individuation on the mathematics achievement of academically handicapped and nonhandicapped students. Journal of Educational Psychology, 76, 813–819. Smith, C., & Arnold, V. (1986). Macmillan– R series. New York: Macmillan. Smith, D. J., Young, K. R., Nelson, J. R., & West, R. P. (1992). The effect of a self- management procedure on the classroom academic behavior of students with mild
References 527 handicaps. School Psychology Review, 21, 59–72. Smith, D. J., Young, K. R., West, R. P., Morgan, D. P., & Rhode, G. (1988). Reducing the disruptive behavior of junior high school students: A classroom self-management procedure. Behavioral Disorders, 13, 231–239. Smith, J. L. M., Cummings, K. D., Nese, J. F., Alonzo, J., Fien, H., & Baker, S. K. (2014). The relation of word reading fluency initial level and gains with reading outcomes. School Psychology Review, 43(1), 30–40. Smith, J. L. M., Nelson, N. J., Fien, H., Smolkowski, K., Kosty, D., & Baker, S. K. (2016). Examining the efficacy of a multitiered intervention for at-risk readers in grade 1. Elementary School Journal, 116(4), 549–573. Smith, S., Barajas, K., Ellis, B., Moore, C., McCauley, S., & Reichow, B. (2019). A meta- analytic review of randomized controlled trials of the good behavior game. Behavior Modification, article 0145445519878670 Smolkowski, K., & Gunn, B. (2012). Reliability and validity of the Classroom Observations of Student– Teacher Interactions (COSTI) for kindergarten reading instruction. Early Childhood Research Quarterly, 27(2), 316– 328. Snow, C. (2002). Reading for understanding: Toward an R&D program in reading comprehension. Santa Monica, CA: RAND. Snow, C. E., Burns, M. S., & Griffin, P. (1998). Preventing reading difficulties in young children. Washington, DC: National Academy Press. Snowling, M. J. (2013). Early identification and interventions for dyslexia: A contemporary view. Journal of Research in Special Educational Needs, 13(1), 7–14. Solis, M., Ciullo, S., Vaughn, S., Pyle, N., Hassaram, B., & Leroux, A. (2012). Reading comprehension interventions for middle school students with learning disabilities: A synthesis of 30 years of research. Journal of Learning Disabilities, 45(4), 327–340. Solomon, B. G., VanDerHeyden, A. M., Solomon, E. C., Korzeniewski, E. R., Payne, L. L., Campaña, K. V., & Dillon, C. R. (2022). Mastery measurement in mathematics and the Goldilocks Effect. School Psychology, 37(3), 213–224. Soveri, A., Antfolk, J., Karlsson, L., Salo, B.,
& Laine, M. (2017). Working memory training revisited: A multi-level meta-analysis of n-back training studies. Psychonomic Bulletin and Review, 24(4), 1077–1096. Spaulding, L. S., Mostert, M. P., & Beam, A. P. (2010). Is Brain Gym® an effective educational intervention? Exceptionality, 18(1), 18–30. Spear-Swerling, L. (2004). Fourth graders’ performance on a state- mandated assessment involving two different measures of reading comprehension. Reading Psychology, 25(2), 121–148. Speece, D. L., Mills, C., Ritchey, K. D., & Hillman, E. (2003). Initial evidence that letter fluency tasks are valid indicators of early reading skill. Journal of Special Education, 36(4), 223–233. Speece, D. L., Ritchey, K. D., Silverman, R., Schatschneider, C., Walker, C. Y., & Andrusik, K. N. (2010). Identifying children in middle childhood who are at risk for reading problems. School Psychology Review, 39(2), 258–276. Spencer, M., Fuchs, L. S., & Fuchs, D. (2020). Language-related longitudinal predictors of arithmetic word problem solving: A structural equation modeling approach. Contemporary Educational Psychology, 60, 101825. Spencer, M., Quinn, J. M., & Wagner, R. K. (2014). Specific reading comprehension disability: Major problem, myth, or misnomer? Learning Disabilities Research and Practice, 29(1), 3–9. Stage, S. A., Sheppard, J., Davidson, M. M., & Browning, M. M. (2001). Prediction of first-graders’ growth in oral reading fluency using kindergarten letter fluency. Journal of School Psychology, 39(3), 225–237. Stanley, S. D., & Greenwood, C. R. (1983). Assessing opportunity to respond in classroom environments through direct observation: How much opportunity to respond does the minority, disadvantaged student receive in school? Exceptional Children, 49, 370–373. Stanovich, K. E., Cunningham, A. E., & Cramer, B. B. (1984). Assessing phonological awareness in kindergarten children: Issues of task comparability. Journal of Experimental Child Psychology, 38(2), 175–190. Steacy, L. M., Compton, D. L., Petscher, Y., Elliott, J. D., Smith, K., Rueckl, J. G., . . .
Pugh, K. R. (2019). Development and prediction of context-dependent vowel pronunciation in elementary readers. Scientific Studies of Reading, 23(1), 49–63. Steacy, L. M., Elleman, A. M., Lovett, M. W., & Compton, D. L. (2016). Exploring differential effects across two decoding treatments on item-level transfer in children with significant word reading difficulties: A new approach for testing intervention elements. Scientific Studies of Reading, 20(4), 283– 295. Steacy, L. M., Fuchs, D., Gilbert, J. K., Kearns, D. M., Elleman, A. M., & Edwards, A. A. (2020). Sight word acquisition in first grade students at risk for reading disabilities: An item-level exploration of the number of exposures required for mastery. Annals of Dyslexia, 70(2), 259–274. Steacy, L. M., Wade-Woolley, L., Rueckl, J. G., Pugh, K. R., Elliott, J. D., & Compton, D. L. (2019). The role of set for variability in irregular word reading: Word and child predictors in typically developing readers and students at-risk for reading disabilities. Scientific Studies of Reading, 23(6), 523–532. Stecker, P. M., & Fuchs, L. S. (2000). Effecting superior achievement using curriculum-based measurement: The importance of individual progress monitoring. Learning Disabilities Research and Practice, 15, 128–134. Stecker, P. M., Fuchs, L. S., & Fuchs, D. (2005). Using curriculum- based measurement to improve student achievement: Review of research. Psychology in the Schools, 42(8), 795–819. Stecker, P. M., & Lembke, E. S. (2011). Advanced applications of CBM in reading (K-6): Instructional decision-m aking strategies manual. Washington, DC: National Center on Student Progress Monitoring. Stein, J. (2001). The magnocellular theory of developmental dyslexia. Dyslexia, 7(1), 12–36. Stein, M., Kinder, D., Silbert, J., Carnine, D., & Rolf, K. (2018). Direct instruction mathematics (5th ed.). Boston: Pearson. Steiner, N. J., Frenette, E. C., Rene, K. M., Brennan, R. T., & Perrin, E. C. (2014). In- school neurofeedback training for ADHD: Sustained improvements from a randomized control trial. Pediatrics, 133(3), 483–492. Steiner, N. J., Sheldrick, R. C., Frenette, E. C., Rene, K. M., & Perrin, E. C. (2014). Class-
room behavior of participants with ADHD compared with peers: Influence of teaching format and grade level. Journal of Applied School Psychology, 30(3), 209–222. Sternberg, R. J., & Grigorenko, E. L. (2002). Difference scores in the identification of children with learning disabilities: It’s time to use a different method. Journal of School Psychology, 40, 65–83. Stevens, E. A., Murray, C. S., Fishstrom, S., & Vaughn, S. (2020). Using question generation to improve reading comprehension for middle-grade students. Journal of Adolescent and Adult Literacy, 64(3), 311–322. Stevens, E. A., Park, S., & Vaughn, S. (2019). A review of summarizing and main idea interventions for struggling readers in grades 3 through 12: 1978–2016. Remedial and Special Education, 40(3), 131–149. Stevens, E. A., Walker, M. A., & Vaughn, S. (2017). The effects of reading fluency interventions on the reading fluency and reading comprehension performance of elementary students with learning disabilities: A synthesis of the research from 2001 to 2014. Journal of Learning Disabilities, 50(5), 576–590. Stinnett, T. A., Havey, J. M., & Oehler-Stinnett, J. (1994). Current test usage by practicing school psychologists: A national survey. Journal of Psychoeducational Assessment, 12, 351–350. Stockard, J., Wood, T. W., Coughlin, C., & Rasplica Khoury, C. (2018). The effectiveness of direct instruction curricula: A meta- analysis of a half century of research. Review of Educational Research, 88(4), 479–507. Stocker, J. D., Jr., & Kubin, R. M., Jr. (2017). Impact of cover, copy, and compare on fluency outcomes for students with disabilities and math deficits: A review of the literature. Preventing School Failure: Alternative Education for Children and Youth, 61(1), 56–68. Storch, S. A., & Whitehurst, G. J. (2002). Oral language and code- related precursors to reading: Evidence from a longitudinal structural model. Developmental Psychology, 38(6), 934–947. Stowitschek, C. E., Hecimovic, A., Stowitschek, J. J., & Shores, R. E. (1982). Behaviorally disordered adolescents as peer tutors: Immediate and generative effects on instructional performance and spelling achievement. Behavior Disorders, 7, 136–147. Strickland, T. K., & Maccini, P. (2013a). Explo-
References 529 ration of quadratic expressions through multiple representations for students with mathematics difficulties. Learning Disabilities: A Multidisciplinary Journal, 19, 61– 71. Strickland, T. K., & Maccini, P. (2013b). The effects of the concrete– representational– abstract integration strategy on the ability of students with learning disabilities to multiply linear expressions within area problems. Remedial and Special Education, 34, 142– 153. Strickland, W. D., Boon, R. T., & Spencer, V. G. (2013). The effects of repeated reading on the fluency and comprehension skills of elementary- age students with learning disabilities (LD), 2001–2011: A review of research and practice. Learning Disabilities: A Contemporary Journal, 11(1), 1–33. Strong, G. K., Torgerson, C. J., Torgerson, D., & Hulme, C. (2011). A systematic meta- analytic review of evidence for the effectiveness of the “Fast ForWord” language intervention program. Journal of Child Psychology and Psychiatry, 52(3), 224–235. Struthers, J. P., Bartlamay, H., Bell, S., & McLaughlin, T. F. (1994). An analysis of the Add-a-Word spelling program and public posting across three categories of children with special needs. Reading Improvement, 31(1), 28–36. Struthers, J. P., Bartlamay, H., Williams, R. L. O., & McLaughlin, T. F. (1989). Effects of the Add-a-Word spelling program on spelling accuracy during creative writing: A replication across two classrooms. British Columbia Journal of Special Education, 13(2), 151–158. Stuebing, K. K., Barth, A. E., Trahan, L. H., Reddy, R. R., Miciak, J., & Fletcher, J. M. (2015). Are child cognitive characteristics strong predictors of responses to intervention? A meta- analysis. Review of Educational Research, 85(3), 395–429. Stuebing, K. K., Fletcher, J. M., Branum- Martin, L., Francis, D. J., & VanDerHeyden, A. (2012). Evaluation of the technical adequacy of three methods for identifying specific learning disabilities based on cognitive discrepancies. School Psychology Review, 41(1), 3–22. Stuebing, K. K., Fletcher, J. M., LeDoux, J. M., Lyon, G. R., Shaywitz, S. E., & Shaywitz, B. A. (2002). Validity of IQ- discrepancy classification of reading disabilities: A meta-
analysis. American Educational Research Journal, 39, 469–518. Suggate, S. P. (2016). A meta-analysis of the long-term effects of phonemic awareness, phonics, fluency, and reading comprehension interventions. Journal of Learning Disabilities, 49(1), 77–96. Sulzer- A zaroff, B., & Mayer, G. R. (1986). Achieving educational excellence: Using behavioral strategies. New York: Holt, Rinehart & Winston. Sutherland, K. S., & Wehby, J. H. (2001). Exploring the relationship between increased opportunities to respond to academic requests and the academic and behavioral outcomes of students with EBD: A review. Remedial and Special Education, 22(2), 113–121. Swanson, E. A., & Vaughn, S. (2010). An observation study of reading instruction provided to elementary students with learning disabilities in the resource room. Psychology in the Schools, 47(5), 481–492. Swanson, E., Vaughn, S., Fall, A. M., Stevens, E. A., Stewart, A. A., Capin, P., & Roberts, G. (2021). The differential efficacy of a professional development model on reading outcomes for students with and without disabilities. Exceptional Children, article 00144029211007149. Swanson, E., Wanzek, J., McCulley, L., Stillman- Spisak, S., Vaughn, S., Simmons, D., . . . Hairrell, A. (2016). Literacy and text reading in middle and high school social studies and English language arts classrooms. Reading and Writing Quarterly, 32(3), 199–222. Swanson, H. L. (1999). Reading research for students with LD: A meta-analysis of intervention outcomes. Journal of Learning Disabilities, 32(6), 504–532. Swanson, H. L. (2000). What instruction works for students with learning disabilities? Summarizing the results from a meta-analysis of intervention studies. In R. M. Gersten, E. P. Schiller, & S. Vaughn (Eds.), Contemporary special education research: Syntheses of the knowledge base on critical instructional issues (pp. 1–30). LEA Series on Special Education and Disability. Hiilsdale, NJ: Erlbaum. Swanson, H. L., Harris, K. R., & Graham, S. (Eds.). (2014). Handbook of learning disabilities (2nd ed.). New York: Guilford Press.
Swanson, H. L., & Hoskyn, M. (1998). Experimental intervention research on students with learning disabilities: A meta-analysis of treatment outcomes. Review of Educational Research, 68(3), 277–321. Swanson, H. L., & Scarpati, S. (1984). Self- instruction training to increase academic performance of educationally handicapped children. Child and Family Behavior Therapy, 6(4), 23–39. Swoszowski, N. C. (2014). Adapting a Tier 2 behavioral intervention, Check-In/CheckOut, to meet students’ needs. Intervention in School and Clinic, 49(4), 211–218. Tabacek, D. A., McLaughlin, T. F., & Howard, V. F. (1994). Teaching preschool children with disabilities tutoring skills: Effects on preacademic behaviors. Child and Family Behavior Therapy, 16(2), 43–63. Tafti, M. A., Boyle, J. R., & Crawford, C. M. (2014). Meta- analysis of visual- spatial deficits in dyslexia. International Journal of Brain and Cognitive Sciences, 3(1), 25–34. Tallal, P. (1980). Auditory temporal perception, phonics, and reading disabilities in children. Brain and Language, 9(2), 182–198. Tallal, P. (2000). The science of literacy: From the laboratory to the classroom. Proceedings of the National Academy of Sciences, 97(6), 2402–2404. Taylor, L. K., Alber, S. R., & Walker, D. W. (2002). The comparative effects of a modified self-questioning strategy and story mapping on the reading comprehension of elementary students with learning disabilities. Journal of Behavioral Education, 11, 69–87. Therrien, W. J. (2004). Fluency and comprehension gains as a result of repeated reading: A meta-analysis. Remedial and Special Education, 25(4), 252–261. Therrien, W. J., & Kubina, R. M., Jr. (2006). Developing reading fluency with repeated reading. Intervention in School and Clinic, 41(3), 156–160. Thurlow, M. L., Ysseldyke, J. E., Graden, J. L., & Algozzine, B. (1983). What’s “special” about the special education resource room for learning disabled students? Learning Disability Quarterly, 6, 283–288. Thurlow, M. L., Ysseldyke, J. E., Graden, J., & Algozzine, B. (1984). Opportunity to learn for LD students receiving different levels of special education services. Learning Disabilities Quarterly, 7, 55–67.
Thurlow, M. L., Ysseldyke, J. E., Wotruba, J. W., & Algozzine, B. (1993). Instruction in special education classrooms under varying student–teacher ratios. Elementary School Journal, 93, 305–320. Tichá, R., Espin, C. A., & Wayman, M. M. (2009). Reading progress monitoring for secondary-school students: Reliability, validity, and sensitivity to growth of reading- aloud and maze-selection measures. Learning Disabilities Research and Practice, 24(3), 132–142. Tighe, E. L., & Schatschneider, C. (2014). A dominance analysis approach to determining predictor importance in third, seventh, and tenth grade reading comprehension skills. Reading and Writing, 27(1), 101–127. Toll, S. W., & Van Luit, J. E. (2014). The developmental relationship between language and low early numeracy skills throughout kindergarten. Exceptional Children, 81(1), 64–78. Topping, K. J., & Bryce, A. (2004). Cross-age peer tutoring of reading and thinking: Influence on thinking skills. Educational Psychology, 24, 595–621. Topping, K. J., & Whiteley, M. (1993). Sex differences in the effectiveness of peer tutoring. School Psychology International, 14, 57–67. Torgesen, J. K., Alexander, A. W., Wagner, R. K., Rashotte, C. A., Voeller, K. K. S., & Conway, T. (2001). Intensive remedial instruction for children with severe reading disabilities: Immediate and long-term outcomes from two instructional approaches. Journal of Learning Disabilities, 34, 33–58. Torgesen, J. K., & Bryant, B. T. (1994). Phonological awareness training for reading. Austin, TX: PRO-ED. Torgesen, J. K., Wagner, R. K., & Rashotte, C. A. (2012). Test of Word Reading Efficiency—Second Edition. Austin, TX: PROED. Torgesen, J. K., Wagner, R. K., Rashotte, C. A., Rose, E., Lindamood, P., Conway, T., & Garvan, C. (1999). Preventing reading failure in young children with phonological processing disabilities: Group and individual responses to instruction. Journal of Educational Psychology, 91(4), 579–593. Tralli, R., Colombo, B., Deshler, D. D., & Schumaker, J. B. (1996). The strategies intervention model: A model for supported inclusion
References 531 at the secondary level. Remedial and Special Education, 17, 204–216. Trammel, D. L., Schloss, P. J., & Alper, S. (1994). Using self-recording, evaluation, and graphing to increase completion of homework assignments. Journal of Learning Disabilities, 27, 75–81. Treiman, R. (1998). Why spelling? The benefits of incorporating spelling into beginning reading instruction. In J. L. Metsala & L. C. Ehri (Eds.), Word recognition in beginning literacy (pp. 289–313). Hillsdale, NJ: Erlbaum. Treiman, R. (2018). Statistical learning and spelling. Language, Speech, and Hearing Services in Schools, 49(3S), 644–652. Treiman, R., & Kessler, B. (2003). The role of letter names in the acquisition of literacy. Advances in Child Development and Behavior, 31, 105–138. Treiman, R., Mullennix, J., Bijeljac-Babic, R., & Richmond-Welty, E. D. (1995). The special role of rimes in the description, use, and acquisition of English orthography. Journal of Experimental Psychology: General, 124(2), 107–136. Treiman, R., & Rodriguez, K. (1999). Young children use letter names in learning to read words. Psychological Science, 10(4), 334– 338. Treptow, M. A., Burns, M. K., & McComas, J. J. (2007). Reading at the frustration, instructional, and independent levels: The effects on students’ reading comprehension and time on task. School Psychology Review, 36(1), 159–166. Troia, G. A. (2018, August). Relations between teacher pedagogical content knowledge and student writing outcomes. Paper presented at the Biennial Meeting of the Special Interest Group on Writing of the European Association for Research on Learning and Instruction, Antwerp, Belgium. Troia, G. A., Harbaugh, A. G., Shankland, R. K., Wolbers, K. A., & Lawrence, A. M. (2013). Relationships between writing motivation, writing activity, and writing performance: Effects of grade, sex, and ability. Reading and Writing, 26(1), 17–44. Trovato, J., & Bucher, B. (1980). Peer tutoring with or without home-based reinforcement for reading remediation. Journal of Applied Behavior Analysis, 13, 129–141. Truckenmiller, A. J., Eckert, T. L., Codding,
R. S., & Petscher, Y. (2014). Evaluating the impact of feedback on elementary aged students’ fluency growth in written expression: A randomized controlled trial. Journal of School Psychology, 52(6), 531–548. Truckenmiller, A. J., McKindles, J. V., Petscher, Y., Eckert, T. L., & Tock, J. (2020). Expanding curriculum-based measurement in written expression for middle school. Journal of Special Education, 54(3), 133–145. Tucker, J. A. (1985). Curriculum-based assessment: An introduction. Exceptional Children, 52, 199–204. Tucker, J. A. (1989). Basic flashcard technique when vocabulary is the goal. Unpublished teaching materials, Berrien Springs, MI. Tunmer, W. E., & Chapman, J. W. (2012). Does set for variability mediate the influence of vocabulary knowledge on the development of word recognition skills? Scientific Studies of Reading, 16(2), 122–140. Ukrainetz, T. A., Nuspl, J. J., Wilkerson, K., & Beddes, S. R. (2011). The effects of syllable instruction on phonemic awareness in preschoolers. Early Childhood Research Quarterly, 26(1), 50–60. Ursache, A., Blair, C., & Raver, C. C. (2012). The promotion of self-regulation as a means of enhancing school readiness and early achievement in children at risk for school failure. Child Development Perspectives, 6(2), 122–128. Vaac, N. N., & Cannon, S. J. (1991). Crossage tutoring in mathematics: Sixth graders helping students who are moderately handicapped. Education and Training in Mental Retardation, 26, 89–97. Vadasy, P., Wayne, S., O’Connor, R. E., Jenkins, J., Paul, L., Fitzbough, M., & Peyon, J. (2005). Sound partners: A tutoring program in evidence-based early reading. Dallas, TX: Voyager/Sopris. Vallecorsa, A. L., & deBettencourt, L. U. (1997). Using a mapping procedure to teach reading and writing skills to middle grade students with learning disabilities. Education and Treatment of Children, 20, 173– 188. VanDerHeyden, A. M. (2013). Universal screening may not be for everyone: Using a threshold model as a smarter way to determine risk. School Psychology Review, 42(4), 402–414. VanDerHeyden, A. M., Broussard, C., &
Burns, M. K. (2019). Classification agreement for gated screening in mathematics: Subskill mastery measurement and classwide intervention. Assessment for Effective Intervention, article 1534508419882484. VanDerHeyden, A. M., & Burns, M. K. (2010). Essentials of response to intervention. Hoboken, NJ: Wiley. VanDerHeyden, A. M., & Burns, M. K. (2018). Improving decision making in school psychology: Making a difference in the lives of students, not just a prediction about their lives. School Psychology Review, 47(4), 385–395. VanDerHeyden, A. M., Burns, M. K., & Bonifay, W. (2018). Is more screening better? The relationship between frequent screening, accurate decisions, and reading proficiency. School Psychology Review, 47(1), 62–82. VanDerHeyden, A. M., & Codding, R. S. (2015). Practical effects of classwide mathematics intervention. School Psychology Review, 44(2), 169–190. VanDerHeyden, A. M., Codding, R. S., & Martin, R. (2017). Relative value of common screening measures in mathematics. School Psychology Review, 46(1), 65–87. VanDerHeyden, A., McLaughlin, T., Algina, J., & Snyder, P. (2012). Randomized evaluation of a supplemental grade-wide mathematics intervention. American Educational Research Journal, 49(6), 1251–1284. VanDerHeyden, A. M., & Witt, J. C. (2008). Best practices in can’t do/won’t do assessment. In A. Thomas & J. Grimes (Eds.), Best practices in school psychology (Vol. 5, pp. 195–208). Washington, DC: National Association of School Psychologists. VanDerHeyden, A. M., Witt, J. C., & Gilbertson, D. (2007). A multi-year evaluation of the effects of a response-to-intervention (RTI) model on the identification of children for special education. Journal of School Psychology, 45, 225–256. Van Norman, E. R., & Christ, T. J. (2016a). Curriculum-based measurement of reading: Accuracy of recommendations from threepoint decision rules. School Psychology Review, 45(3), 296–309. Van Norman, E. R., & Christ, T. J. (2016b). How accurate are interpretations of curriculum- based measurement progress monitoring data? Visual analysis versus decision rules. Journal of School Psychology, 58, 41–55.
Van Norman, E. R., Klingbeil, D. A., & Nelson, P. M. (2017). Posttest probabilities: An empirical demonstration of their use in evaluating the performance of universal screening measures across settings. School Psychology Review, 46(4), 349–362. Van Norman, E. R., Nelson, P. M., & Parker, D. C. (2017). Technical adequacy of growth estimates from a computer adaptive test: Implications for progress monitoring. School Psychology Quarterly, 32(3), 379. Van Norman, E. R., Nelson, P. M., Shin, J. E., & Christ, T. J. (2013). An evaluation of the effects of graphic aids in improving decision accuracy in a continuous treatment design. Journal of Behavioral Education, 22(4), 283–301. Van Norman, E. R., & Parker, D. C. (2016). An evaluation of the linearity of curriculum- based measurement of oral reading (CBM-R) progress monitoring data: Idiographic considerations. Learning Disabilities Research and Practice, 31(4), 199–207. Varma, S., & Schleisman, K. B. (2014). The cognitive underpinnings of incremental rehearsal. School Psychology Review, 43(2), 222–228. Vaughn, S., Capin, P., Scammacca, N., Roberts, G., Cirino, P., & Fletcher, J. M. (2020). The critical role of word reading as a predictor of response to intervention. Journal of Learning Disabilities, 53(6), 415–427. Vaughn, S., & Fletcher, J. M. (2012). Response to intervention with secondary school students with reading difficulties. Journal of Learning Disabilities, 45(3), 244–256. Vaughn, S., Gersten, R., & Chard, D. J. (2000). The underlying message in LD intervention research: Findings from research syntheses. Exceptional Children, 67(1), 99–114. Vaughn, S., Klingner, J. K., & Bryant, D. P. (2001). Collaborative strategic reading as a means to enhance peer-mediated instruction for reading comprehension and content-area learning. Remedial and Special Education, 22(2), 66–74. Vaughn, S., Klingner, J. K., Swanson, E. A., Boardman, A. G., Roberts, G., Mohammed, S. S., & Stillman-Spisak, S. J. (2011). Efficacy of collaborative strategic reading with middle school students. American Educational Research Journal, 48(4), 938–964. Vaughn, S., Levy, S., Coleman, M., & Bos, C. S. (2002). Reading instruction for students
References 533 with LD and EBD: A synthesis of observation studies. Journal of Special Education, 36(1), 2–13. Vaughn, S., Linan-T hompson, S., & Hickman, P. (2003). Response to instruction as a means of identifying students with reading/learning disabilities. Exceptional Children, 69, 391–409. Vaughn, S., Wanzek, J., Wexler, J., Barth, A., Cirino, P. T., Fletcher, J., . . . Francis, D. (2010). The relative effects of group size on reading progress of older students with reading difficulties. Reading and Writing, 23(8), 931–956. Vaughn, S., Wanzek, J., Woodruff, A. L., & Linan-T hompson, S. (2007). Prevention and early identification of students with reading disabilities. In D. Haager, J. Klingner, & S. Vaughn (Eds.), Evidence-based reading practices for response to intervention (pp. 11–27). Baltimore, MD: Brookes. Vellutino, F. R., Scanlon, D. M., Sipay, E. R., Small, S. G., Pratt, A., Chen, R., & Denckla, M. B. (1996). Cognitive profiles of difficult- to- remediate and readily remediated poor readers: Early intervention as a vehicle for distinguishing between cognitive and experiential deficits as basic causes of specific reading disability. Journal of Educational Psychology, 88, 601–638. Verhoeven, L., & Van Leeuwe, J. (2008). Prediction of the development of reading comprehension: A longitudinal study. Applied Cognitive Psychology: The Official Journal of the Society for Applied Research in Memory and Cognition, 22(3), 407–423. Volpe, R. J., Burns, M. K., DuBois, M., & Zaslofsky, A. F. (2011). Computer- assisted tutoring: Teaching letter sounds to kindergarten students using incremental rehearsal. Psychology in the Schools, 48(4), 332–342. Volpe, R. J., DiPerna, J. C., Hintze, J. M., & Shapiro, E. S. (2005). Observing students in classroom settings: A review of seven coding schemes. School Psychology Review, 34, 454–474. Volpe, R. J., Mulé, C. M., Briesch, A. M., Joseph, L. M., & Burns, M. K. (2011). A comparison of two flashcard drill methods targeting word recognition. Journal of Behavioral Education, 20(2), 117–137. Wagner, D. L., Coolong-Chaffin, M., & Deris, A. R. (2017). Comparing brief experimental analysis and teacher judgment for select-
ing early reading interventions. Journal of Behavioral Education, 26(4), 348–370. Wagner, D. L., & Espin, C. A. (2015). The reading fluency and comprehension of fifth- and sixth-grade struggling readers across brief tests of various intervention approaches. Reading Psychology, 36(7), 545–578. Wagner, R. K. (1988). Causal relations between the development of phonological processing abilities and the acquisition of reading skills: A meta-analysis. Merrill–Palmer Quarterly, 34(3), 261–279. Wagner, R. K., & Torgesen, J. K. (1987). The nature of phonological processing and its causal role in the acquisition of reading skills. Psychological Bulletin, 101(2), 192. Wagner, R. K., Torgesen, J. K., & Rashotte, C. A. (1994). Development of reading- related phonological processing abilities: New evidence of bidirectional causality from a latent variable longitudinal study. Developmental Psychology, 30(1), 73. Wagner, R., Torgesen, J., Rashotte, C., & Pearson, N. (2013). The Comprehensive Test of Phonological Processing—Second Edition. Austin, TX: PRO-ED. Walczyk, J. J., Wei, M., Griffeth- Ross, D. A., Goubert, S. E., Cooper, A. L., & Zha, P. (2007). Development of the interplay between automatic processes and cognitive resources in reading. Journal of Educational Psychology, 99, 867–887. Wallot, S., O’Brien, B. A., Haussmann, A., Kloos, H., & Lyby, M. S. (2014). The role of reading time complexity and reading speed in text comprehension. Journal of Experimental Psychology: Learning, Memory, and Cognition, 40(6), 1745–1765. Walpole, S., McKenna, M. C., & Philippakos, Z. (2011). Differentiated reading instruction in grades 4 and 5: Strategies and resources. New York: Guilford Press. Wang, M. T., & Eccles, J. S. (2013). School context, achievement motivation, and academic engagement: A longitudinal study of school engagement using a multidimensional perspective. Learning and Instruction, 28, 12–23. Wanzek, J., Roberts, G., Al Otaiba, S., & Kent, S. C. (2014). The relationship of print reading in Tier I instruction and reading achievement for kindergarten students at risk of reading difficulties. Learning Disability Quarterly, 37(3), 148–160.
Wanzek, J., Stevens, E. A., Williams, K. J., Scammacca, N., Vaughn, S., & Sargent, K. (2018). Current evidence on the effects of intensive early reading interventions. Journal of Learning Disabilities, 51(6), 612–624. Wanzek, J., Vaughn, S., Scammacca, N. K., Metz, K., Murray, C. S., Roberts, G., & Danielson, L. (2013). Extensive reading interventions for students with reading difficulties after grade 3. Review of Educational Research, 83(2), 163–195. Wayman, M. M., Wallace, T., Wiley, H. I., Espin, C. A., & Tichá, R. (2007). Literature synthesis on curriculum-based measurement in reading. Journal of Special Education, 41, 85–120. Webber, J., Scheuermann, B., McCall, C., & Coleman, M. (1993). Research on self-monitoring as a behavior management technique in special education classrooms: A descriptive review. Remedial and Special Education, 14(2), 38–56. Wechsler, D. (2001). Wechsler Individual Achievement Test—II. San Antonio, TX: Psychological Corporation/Harcourt, Brace Jovanovich. Wechsler, D. (2003). Wechsler Intelligence Scale for Children—IV. San Antonio, TX: Psychological Corporation/Harcourt, Brace Jovanovich. Wechsler, D. (2009). Wechsler Individual Achievement Test—III. Boston: Pearson. Weiser, B., & Mathes, P. (2011). Using encoding instruction to improve the reading and spelling performances of elementary students at risk for literacy difficulties: A best-evidence synthesis. Review of Educational Research, 81(2), 170–200. Weissenburger, J. W., & Espin, C. A. (2005). Curriculum- based measures of writing across grade levels. Journal of School Psychology, 43, 153–169. Weist, M. D., Ollendick, T. H., & Finney, J. W. (1991). Toward the empirical validation of treatment targets in children. Clinical Psychology Review, 11, 515–538. Wery, J. J., & Diliberto, J. A. (2017). The effect of a specialized dyslexia font, OpenDyslexic, on reading rate and accuracy. Annals of Dyslexia, 67(2), 114–127. Westendorp, M., Hartman, E., Houwen, S., Smith, J., & Visscher, C. (2011). The relationship between gross motor skills and aca-
demic achievement in children with learning disabilities. Research in Developmental Disabilities, 32(6), 2773–2779. Wexler, J., Vaughn, S., Edmonds, M., & Reutebuch, C. K. (2008). A synthesis of fluency interventions for secondary struggling readers. Reading and Writing, 21, 317–347. White, W. A. T. (1988). A meta-analysis of the effects of direct instruction in special education. Education and Treatment of Children, 11, 364–374. Whitman, T., & Johnston, M. B. (1983). Teaching addition and subtraction with regrouping to educable mentally retarded children: A group self-instructional training program. Behavior Therapy, 14, 127–143. Wiggins, G. (1989). A true test: Toward a more authentic and equitable assessment. Phi Delta Kappan, 70, 703–713. Willcutt, E. G., McGrath, L. M., Pennington, B. F., Keenan, J. M., DeFries, J. C., Olson, R. K., & Wadsworth, S. J. (2019). Understanding comorbidity between specific learning disabilities. New Directions for Child and Adolescent Development, 2019(165), 91–109. Wilson, M. S., & Reschly, D. J. (1996). Assessment in school psychology training and practice. School Psychology Review, 25, 9–23. Windsor Learning Systems. (2009). The Sondy System. St. Paul, MN: Author. Witt, J. C. (1990). Complaining, precopernican thought, and the univariate linear mind: Questions for school-based behavioral consultation research. School Psychology Review, 19, 367–377. Witt, J. C., & Elliott, S. N. (1983). Assessment in behavioral consultation: The initial interview. School Psychology Review, 12, 42–49. Witt, J. C., & Elliott, S. N. (1985). Acceptability of classroom intervention strategies. In T. R. Kratochwill (Ed.), Advances in school psychology (Vol. IV, pp. 251–288). Hillsdale, NJ: Erlbaum. Witt, J. C., Erchul, W. P., McKee, W. T., Pardue, M., & Wickstrom, K. F. (1991). Conversational control in school-based consultation: The relationship between consultant and consultee topic determination and consultation outcome. Journal of Educational and Psychological Consultation, 2, 101–116. Witt, J. C., & Martens, B. K. (1983). Assess-
References 535 ing the acceptability of behavioral interventions used in classrooms. Psychology in the Schools, 20, 510–517. Witt, J. C., & & VanDerHeyden, A. M. (2007). The System to Enhance Educational Performance (STEEP): Using science to improve achievement. In S. R. Jimerson, M. K. Burns, & A. M. VanDerHeyden (Eds.), Handbook of response to intervention (pp. 343–353). New York: Springer. Witzel, B. S., & Little, M. E. (2016). Teaching elementary mathematics to struggling learners. New York: Guilford Press. Witzel, B. S., Riccomini, P. J., & Schneider, E. (2008). Implementing CRA with secondary students with learning disabilities in mathematics. Intervention in School and Clinic, 43(5), 270–276. Wixson, K. K. (2017). An interactive view of reading comprehension: Implications for assessment. Language, Speech, and Hearing Services in Schools, 48(2), 77–83. Wood, B. K., Hojnoski, R. L., Laracy, S. D., & Olson, C. L. (2016). Comparison of observational methods and their relation to ratings of engagement in young children. Topics in Early Childhood Special Education, 35, 211–222. Wood, S. J., Murdock, J. Y., & Cronin, M. E. (2002). Self- monitoring and at-risk middle school students: Academic performance improves, maintains, and generalizes. Behavior Modification, 26, 605–626. Wood, S. J., Murdock, J. Y., Cronin, M. E., Dawson, N. M., & Kirby, P. C. (1998). Effects of self-monitoring on on-task behaviors of at-risk middle school students. Journal of Behavioral Education, 8(2), 263–279. Woodcock, R. E., McGrew, K., & Mather, N. (2001). Woodcock–Johnson— III (WJ-III) Tests of Achievement. Rolling Meadows, IL: Riverside. Woodcock, R. W. (1987). Woodcock Reading Mastery Tests— Revised. Circle Pines, MN: American Guidance Service. Woodcock, R. W. (2011). Woodcock Reading Mastery Tests—Third Edition. Rolling Meadows, IL: Riverside. Woodcock, R. W., Alvarado, C. G., Ruef, M. L., & Schrank, F. A. (2017). The Woodcock– Muñoz Language Survey– Third Edition. Rolling Meadows, IL: Riverside. Woodcock, R. W., Mather, N., & Schrank, F. A. (1997). Woodcock– Johnson– III Diag-
nostic Reading Battery (WJ-III DRB). Rolling Meadows, IL: Riverside. Wray, K. A., Alonzo, J., & Tindal, G. (2014). Internal consistency of the easyCBM vocabulary measures: Grades 2–8 (Technical Report No. 1406). Eugene: University of Oregon, Behavioral Research & Teaching. Xin, Y. P., Jitendra, A., Deatline- Buchman, A., Hickman, W., & Bertram, D. (2002). A comparison of two instructional approaches on mathematical word problem solving by students with learning problems (Eric Document Reproduction Service No. ED473061). Lafayette, IN: Purdue University. Xue, G., & Nation, I. S. P. (1984). A university word list. Language Learning and Communication, 3(2), 215–229. Yeo, S. (2010). Predicting performance on state achievement tests using curriculum- based measurement in reading: A multilevel meta- analysis. Remedial and Special Education, 31(6), 412–422. Young, C., Hecimovic, A., & Salzberg, C. L. (1983). Tutor–tutee behavior of disadvantaged kindergarten children during peer tutoring. Education and Treatment of Children, 6, 123–135. Young, K. R., West, R. P., Smith, D. J., & Morgan, D. P. (1991). Teaching self-m anagement strategies to adolescents. Longmont, CO: Sopris West. Ysseldyke, J. E., & Christenson, S. (1987). The Instructional Environment Scale. Austin, TX: PRO-ED. Ysseldyke, J. E., & Christenson, S. (1993). TIES-II/The Instructional Environment System– II. Longmont, CO: Sopris West. Ysseldyke, J. E., Spicuzza, R., Kosciolek, S., & Boys, C. (2003). Effects of a learning information system on mathematics achievement and classroom structure. Journal of Educational Research, 96, 163–173. Ysseldyke, J. E., Thurlow, M. L., Christenson, S. L., & McVicar, R. (1988). Instructional grouping arrangements used with mentally retarded, learning disabled, emotionally disturbed, and nonhandicapped elementary students. Journal of Educational Research, 81, 305–311. Ysseldyke, J. E., Thurlow, M. L., Mecklenberg, C., Graden, J., & Algozzine, B. (1984). Changes in academic engaged time as a function of assessment and special education
intervention. Special Services in the Schools, 1(2), 31–44. Zakszeski, B. N., Hojnoski, R. L., & Wood, B. K. (2017). Considerations for time sampling interval durations in the measurement of young children’s classroom engagement. Topics in Early Childhood Special Education, 37(1), 42–53. Zeno, S. M., Ivens, S. H., Millard, R. T., & Duvvuri, R. (1995). The educator’s word frequency guide. Brewster, NY: Touchstone Applied Science Associates. Zheng, X., Flynn, L. J., & Swanson, H. L. (2013). Experimental intervention studies on word problem solving and math disabilities: A selective analysis of the literature. Learning Disability Quarterly, 36(2), 97–111. Ziegler, J. C., Perry, C., Ma-Wyatt, A., Ladner, D., & Schulte-Körne, G. (2003). Developmental dyslexia in different languages: Language-
specific or universal? Journal of Experimental Child Psychology, 86(3), 169–193. Zimmerman, B. J. (2008). Goal setting: A key proactive source of academic self-regulation. In D. H. Schunk & B. J. Zimmerman (Eds.), Motivation and self- regulated learning: Theory, research, and applications (pp. 267– 295). Hillsdale, NJ: Erlbaum. Zimmermann, L. M., Reed, D. K., & Aloe, A. M. (2021). A meta-analysis of non-repetitive reading fluency interventions for students with reading difficulties. Remedial and Special Education, 42(2), 78–93. Zipprich, M. A. (1995). Teaching web making as a guided planning tool to improve student narrative writing. Remedial and Special Education, 16, 3–15. Zumeta Edmonds, R., Gruner Ganhdi, A., & Danielson, L. (2019). Essentials of intensive intervention. New York: Guilford Press.
Index
Note. f or t following a page number indicates a figure or a table. Abstraction, 71, 297 Academic Competence Evaluation Scale (ACES), 108 Academic Engaged Time Code of the SSBD (AET), 116t Academic engagement. See also Engagement assessing the academic environment and, 42–43 Behavioral Observation of Students in Schools (BOSS) and, 43, 118–125, 121t classroom contingencies and, 46–47 keystone behavior perspective and, 41 mathematics skills and, 81–82 overview, 40 pace of instruction and, 45–46 progress monitoring and, 361 Academic environment. See also Assessment of the academic environment; Classroom environment data-based instructional decisions and, 383, 385 identifying targets for assessment and intervention and, 41–48, 88 integrated model of curriculum-based assessment and, 21–22, 22f less intensive intervention strategies for academic problems, 223–235, 227f, 228f, 229f, 231f Academic Performance Rating Scale (APRS), 108, 422 Academic strategy training, 226–227 Academic vocabulary, 166–167, 289. See also Vocabulary Acadience, 335t, 341t, 352t, 354t, 398 Accuracy in mathematics, 191, 326 Accuracy in reading aligning assessment with interventions and, 251f assessing, 142f, 143 determining instructional level and, 174, 178
overview, 269, 326 reading practice in connected text and, 276 Accurate production scoring methods, 198–199, 356–357 Active engagement, 31–32, 42, 43, 118–125, 121t. See also Academic engagement; Engagement Add-a-Word program, 321–322 Addition computation skills mastery curriculum, 209–210 early numerical competencies and, 183 examiner-created mathematics computation probes and, 207–208 fraction operations and, 77 mathematics instruction and intervention for, 302–307, 304f mathematics skills assessment and, 183, 184–185 multidigit computation and, 76 number combinations and, 73–75 ADHD School Observation Code (SOC), 116t Adjustments, instructional. See Instructional modification Affirmative feedback, 44, 45. See also Feedback AIMSweb and AIMSweb Plus overview, 335t, 341t, 352t, 354t progress monitoring and, 343–344 universal screening and, 398, 402, 403f Algebra. See also Mathematics instruction and intervention and, 298f, 315–316 keystone model of mathematics and, 69f, 70–72 mathematics skills assessment and, 189 overview, 68–69, 81–82 rational numbers and, 77–78 skill domains that support, 75–76 word problem solving and, 78–80
Alliteration, 160–161. See also Phonological awareness Alphabetic knowledge. See also Letter–sound correspondence assessing, 52, 155–158, 156t, 158t overview, 52–56, 54t phonemic awareness and, 56 progress monitoring and, 335t, 336 reading interventions for, 52, 252 universal screening and, 394 Antecedent–behavior–consequence sequence, 44, 46–47 Applied behavioral analysis, 38–39 Aptitude by treatment interactions (ATI) concept, 28–31 ASPENS (Assessing Proficiency in Early Number Sense), 351, 352t, 354t Assessment. See also Assessment instruments; Assessment of the academic environment; Curriculum-based measurement (CBM); Mathematics skills assessment; Multi-tiered system of supports (MTSS); Progress monitoring; Reading skills assessment; Targets for assessment; Universal screening; Writing skills assessment; individual types of assessment for academic problems, 7–8 aligning instruction and interventions with, 250, 251f, 296–297, 298f, 317–319, 318f assumptions in, 34–37 keystone model of reading and, 52 overview, 1–7, 393 simple view of reading (SVR) and, 50–51 student interviews and, 127f–128f types of, 8–18, 10t–11t, 24 working memory and, 25 Assessment instruments. See also Assessment; Assessment of the academic environment; Progress monitoring comprehension difficulties, 168, 169t, 170 early literacy skills, 156t early numerical competencies, 183, 184t early reading skills progress monitoring, 334–341, 335t fractions and other rational numbers, 188 graphing data from, 375–377, 376f language and, 166t mathematics progress monitoring, 350–355, 352t, 354t number combinations, 181, 182t oral reading, 143–145, 144t overview, 1–2, 333–334, 365 phonological and phonemic awareness, 159t procedural computation, 185 reading progress monitoring, 341–347, 341t spelling progress monitoring, 349–350 spelling skills, 163t types of, 8–18, 10t–11t for universal screenings, 396–397 vocabulary progress monitoring, 347–349
word reading, 150–153, 154–155, 154t word problem solving and pre-algebraic reasoning, 187–188 writing skills, 194t–195t writing skills progress monitoring, 355–361, 358t, 359f Assessment of the academic environment. See also Academic environment; Assessment Behavioral Observation of Students in Schools (BOSS) and, 137–138 case examples, 454–456, 455t, 461–462 Data Summary Form for Academic Assessment, 213–214, 215–216, 217–218 direct observation and, 109–125, 116t–117t, 118t, 121t hypothesis formation and refinement, 130 overview, 90–92, 93f permanent product review and, 126, 129–130, 129f, 130f student interviews and, 125–126 teacher interviews and, 94–109, 99f, 101f–106f Attack strategy technique, 229f Attention, 24–25, 65, 80, 88 Authentic texts, 277–278 Automaticity algebra and, 189 assessing, 143 early literacy skills, 156–157 inference making and, 292 mathematics skills and, 73–75, 76, 182 orthographic mapping and, 259 overview, 326 reading instruction and interventions for, 269–282 word reading and, 63–64
B Background knowledge and experiences aligning assessment with interventions and, 251f primed background knowledge, 245, 247 reading comprehension and, 65, 67 reading instruction and interventions and, 288–293 vocabulary instruction and, 291 BASC-2 Portable Observation Program (POP), 117t Base-10 blocks, 299f, 300 Base-10 system, 72, 184–185 Beginning Teacher Evaluation Study (BTES), 42 Behavior. See also Disruptive behavior behavior management, 385 case examples, 432 data-based instructional decisions and, 382 direct behavior ratings (DBRs) and, 125 direct observation and, 43, 112, 118–125, 121t, 137–138 observational codes and, 113–118, 116t–117t, 118t progress monitoring and, 361
teacher interviews and, 96, 102f–103f, 132–133 universal screening and, 394, 408–409 working memory and, 25 Behavior rating scales, 8, 211. See also Rating scales Behavioral assessment, 22, 34–37. See also Assessment Behavioral Observation of Students in Schools (BOSS) adaptations to, 123–124 assessing the academic environment and, 43 case examples, 422–423, 423t, 426t, 436–437, 436t, 439–440, 439t, 455, 455t complete, 137–138 overview, 116t, 118–124, 121t progress monitoring and, 361 Behavioral self-regulation, 26, 40. See also Selfregulation Benchmark assessment instruments, 143–145, 144t Benchmark goal-setting methods, 368–369, 405, 406f BEST strategy, 268, 269 Big ideas, 245, 246 Blending, 56, 160, 161, 252–253, 257–259, 258f, 335t Borrowing, 76, 307, 308. See also Regrouping skills Brief experimental analysis (BEA), 147–149, 203 Brief informal assessments, 383–384 Brisk instructional pace, 45–46, 239–240
C Calculation skills. See also Mathematics; Procedural computation assessment and, 180, 181f instruction and intervention and, 302–307, 304f progress monitoring and, 353–355, 354t word problem solving and, 79–80, 187 Capitalization, 196–199, 197t Cardinality, 71, 297 Carrying, 76, 307, 308 Cause and effect text structure, 284–285, 286f CBM oral reading (CBM-R), 331–333, 341t, 342 CBMath Automaticity, 353 CBMcomprehension, 341t, 344 Challenging text, 64, 173, 174, 276–277 Choral responding, 240 Classification accuracy, 396 Classroom contingencies, 46–47, 224 Classroom environment. See Academic environment; Assessment of the academic environment Classroom Observation of Student–Teacher Interactions (COSTI), 115, 118 Classwide peer tutoring (CWPT), 241–243, 244, 322. See also Peer tutoring Click or clunk strategy, 287–288 Code for Instructional Structure and Student Academic Response (CISSAR & MS-CISSAR), 115, 118t
Codes, observational. See Direct observation; Observational codes Cogmed program, 24–25 Cognition, 25, 26, 30, 87 Collaborative Strategic Reading (CSR), 287–288 Common Core State Standards (CCSS) hierarchical orderings and, 192 mathematics skills and, 72, 75, 76, 180, 185, 192, 351 progress monitoring and, 341t Compare and contrast text structure, 284–285, 286f COMPefficiency, 341t, 344 Composition skills assessment and, 193f instruction and intervention and, 317, 323–325 keystone model of writing and, 83f overview, 84–85 progress monitoring and, 355–361, 358t, 359f Comprehension. See also Language comprehension; Reading comprehension aligning assessment with interventions and, 251f assessment and, 13, 49, 142–143, 142f, 168–171, 169t, 172–173 comprehension monitoring, 287–288 developing oral reading fluency probes, 205–206 instruction and intervention and, 282–294, 286f keystone model of reading and, 51f linguistic comprehension, 165–168, 166t mathematics language and vocabulary, 190 progress monitoring and, 341t, 343–344 simple view of reading (SVR) and, 49–51, 50f word problem solving in mathematics and, 187 Computation. See also Mathematics; Multidigit computation computation skills mastery curriculum, 209–210 examiner-created mathematics computation probes and, 207–208 mathematics instruction and intervention and, 298f mathematics skills assessment and, 184–186 multidigit computation and, 76 progress monitoring and, 353–355, 354t teacher interviews and, 99, 99f universal screening and, 394 word problem solving and, 79–80 Computer-adaptive tests (CATs), 361–364, 399–400 Computer-based observational measures, 114–115 Computerized scoring of students’ writing, 360 Conceptual understanding, 69–70, 72, 81, 295 Concrete–representational– abstract (CRA) framework, 296 Connectionist models of reading, 51, 59–61 Consolidated phase, 58 Conspicuous strategies, 245, 246 Construction-integration model, 67 Content validity, 12, 36–37 Content-area reading, 282–294, 286f Contingencies, group, 46–47, 224 Contingency-based self-management procedures, 232–233
Continuous reading, 279–280 Cooperative learning, 244–245 Core instruction, 391–392, 392f. See also Instruction; Multi-tiered system of supports (MTSS) Corrective feedback, 44, 45, 270. See also Feedback Counting. See also Mathematics keystone model of mathematics and, 71–72 mathematics instruction and intervention and, 302–306, 304f mathematics skills assessment and, 183 number combinations and, 73–75 universal screening and, 394 Cover–copy–compare (CCC) strategy, 261–262, 295, 306–307, 308, 321 Creativity, 99–100 Criterion-referenced tests, 7, 15–18. See also Assessment Cueing techniques. See also Self-management interventions mathematics instruction and intervention and, 308 overview, 225–230, 227f, 228f, 229f self-monitoring strategies and, 232 writing skills instruction and intervention and, 325 Curriculum. See also Instruction assessing the academic environment and, 91 content of assessments and, 35 survey-level assessment of reading skills and, 175–179, 178f teacher interviews and, 94, 96 Curriculum-based assessment (CBA). See also Assessment; Curriculum-based measurement (CBM) case examples, 420–443, 423t, 425f, 426t, 428f–429f, 430f, 431t, 436t, 438f, 439t identifying targets for assessment and intervention and, 88 overview, 8, 18–24, 22f, 37–38, 92 Curriculum-based evaluation (CBE), 19–20 Curriculum-based measurement (CBM). See also Assessment; Curriculum-based assessment (CBA); Progress monitoring academic skills assessment and, 140 for assessing comprehension difficulties, 169t for assessing early literacy skills, 156t for assessing fluency with number combinations, 182t assessing phonological and phonemic awareness and, 159t, 160–161 assessing vocabulary knowledge, 166t, 167 for assessing writing and writing skills, 194t–195t case examples, 460–469, 463t, 466f, 467f, 468f characteristics of, 330–333 developing oral reading fluency probes, 205–206 for measuring language in academic assessment, 166t for measuring oral reading, 143–146, 144t, 146f for measuring word reading, 150–153
monitoring progress in early reading skills and, 334–341, 335t overview, 7, 8, 20–21, 140–141, 328–329, 330, 393 spelling skills and, 163–164 survey-level assessment of reading skills and, 175, 176–177 universal screening and, 397–398 writing assessment and, 193–201, 193f, 194t–195t, 197t, 198f
D Data collection, 113–118, 116t–117t, 118t, 397, 399, 409–410. See also Assessment; Curriculum-based measurement (CBM); Progress monitoring; Universal screening Data-based individualization, 330, 374, 382–387 Data-based program modification. See Curriculum-based measurement (CBM) Data-driven decision making. See also Decision making case examples, 469–473, 470f, 471f, 472f overview, 391–393, 392f progress monitoring and, 377–389, 378f, 380f, 388f universal screening and, 400–407, 401f, 402f, 403f, 404f, 405f, 406f Decision making. See also Data-driven decision making assessment and, 7–8, 24, 37, 203–204 curriculum-based measurement (CBM) and, 38, 328 data-based instructional decisions and, 377–389, 378f, 380f, 388f progress monitoring within an MTSS framework and, 410–412, 410t Decodable texts, 277–278 Decoding. See also Word reading aligning assessment with interventions and, 251f assessing word reading and, 154–155, 154t building accuracy and, 270 flexible use of decoding strategies and, 264–266, 268–269 morpheme-based strategies and, 267 overview, 56 progress monitoring and, 335t reading comprehension and, 64–65 reading instruction and interventions for, 257–259, 258f, 261–262, 264–266, 267–268 reading practice in connected text and, 277–278 self-teaching hypothesis and, 59 simple view of reading (SVR) and, 49–51, 50f teacher interviews and, 103f word meaning and, 262 Decontextualized word reading, 150–151, 270–275 Description text structure, 284–285, 286f Development of academic skills identifying targets for assessment and intervention and, 49, 89 oral reading assessment and, 145–146, 146f
reading development, 52–56, 54t simple view of reading (SVR) and, 50–51 Diagnostic assessment, 1–5, 154, 157, 158t, 382–387, 396, 412 DIBELS. See also DIBELS Oral Reading Fluency (D-ORF) case examples, 470, 470f, 472 goal setting for progress monitoring and, 368–369, 371, 374 overview, 335t, 337, 340, 341, 341t universal screening and, 398, 400, 401, 407 DIBELS Oral Reading Fluency (D-ORF). See also DIBELS case examples, 470, 470f, 471f, 472 universal screening and, 401–402, 401f, 402f, 404f, 406f Difficulties in academic skills, 249, 250, 251f. See also Learning disabilities; Students with difficulties in academic skills Direct assessment. See also Assessment; Curriculum-based assessment (CBA); Mathematics skills assessment; Reading skills assessment; Writing skills assessment case examples, 424, 427, 429, 430f, 431, 431t, 437, 440, 453–473, 455t, 456t, 458f, 459f, 463t, 466f, 467f, 468f, 470f, 471f, 472f computer-adaptive tests and, 364 Data Summary Form for Academic Assessment, 211–219 objectives in, 141 overview, 18–24, 22f, 139–141 Direct behavior ratings (DBRs), 40, 125, 361 Direct Instruction, 43, 257–259, 258f. See also Direct instruction Direct instruction, 31, 58–60, 235, 292, 326. See also Direct Instruction; Explicit instruction Direct interventions, 31–32. See also Intervention Direct observation. See also Assessment assessing the academic environment and, 92, 109–125, 116t–117t, 118t, 121t Behavioral Observation of Students in Schools (BOSS) and, 43, 118–125, 121t, 137–138 case examples, 422–423, 423t, 425f, 426, 426t, 431, 436–437, 436t, 439–440, 439t, 455, 455t, 462 comparison children and, 112–113 direct behavior ratings (DBRs) and, 125 frequency of, 112 hypothesis formation and refinement, 130 integrated model of curriculum-based assessment and, 22–23, 22f preparing for, 110–111 progress monitoring and, 361 using existing observational codes and, 113–118, 116t–117t, 118t when to conduct, 111–112 Disability, learning. See Learning disabilities Discrepancy/consistency method (D/CM), 29
Disruptive behavior, 41, 47, 361, 394. See also Behavior Division, 76, 77, 207–208, 209–210 Drafting, 83f, 84–85, 193f, 318f, 324 Drill approaches, 295, 306–307 Dynamic Indicators of Vocabulary Skills (DIVS), 347 Dyscalculia, 249 Dyslexia, 28, 55, 150, 152, 172, 249
E Early education, 124. See also Early literacy; Early numerical competencies (ENCs) Early literacy assessment and, 13–14, 142f, 143, 155–165, 156t, 158t, 159t, 163t, 164t keystone model of reading and, 51–52, 51f progress monitoring and, 334–341, 335t reading development, 52–56, 54t reading interventions for, 252–264, 254f, 258f spelling skills and, 152, 162–165, 163t, 164t universal screening and, 394 writing skills instruction and intervention and, 319 Early numeracy intervention (ENI), 301 Early numerical competencies (ENCs). See also Mathematics; Numerical competencies intervention research in, 300–302 keystone model of mathematics and, 71–72 mathematics instruction and intervention and, 297, 299–302, 299f mathematics skills assessment and, 182–183, 184t progress monitoring and, 351–353, 352t EasyCBM goal setting for progress monitoring and, 371 overview, 335t, 338, 340, 341t, 348, 352t, 354t universal screening and, 398, 400 Ecobehavioral Assessment Systems Software (E-BASS), 115 Ecobehavioral System for Complex Assessments of Preschool Environments (ESCAPE), 115 Editing step in writing, 84–85, 324 Educational placement. See Placement Efficiency in reading. See also Fluency aligning assessment with interventions and, 251f assessing, 142f, 143, 147 fluency and, 62–64 need for intensive interventions and, 326 Effort, 296, 361 Elkonin boxes, 253–254, 254f Engagement. See also Academic engagement; Active engagement assessing the academic environment and, 42–43 Behavioral Observation of Students in Schools (BOSS) and, 43, 118–125, 121t data-based instructional decisions and, 382 goal-setting and self-graphing techniques and, 234
mathematics instruction and intervention and, 81–82, 296 overview, 31–32, 40 pace of instruction and, 45–46 point systems and incentives and, 234–235 progress monitoring and, 361 self-monitoring strategies and, 230–231 universal screening and, 394 word problem solving in mathematics and, 80 Enhanced Core Reading Instruction (ECRI), 238, 418 Environment, academic. See Academic environment Error analysis, 154, 276 ESHALOV (Every Syllable Has At Least One Vowel) strategy, 266–267, 269 Every Syllable Has At Least One Vowel (ESHALOV) strategy, 266–267, 269 Evidence-based approaches, 221, 293–296, 327 Executive functioning (EF), 25–27, 87, 326. See also Working memory Expectations assessment and, 47, 91, 202–203 establishing and reinforcing, 223 progress monitoring and, 361 student interviews and, 125–126 Explicit instruction. See also Direct instruction; Instruction assessing the academic environment and, 43–44 data-based instructional decisions and, 384–385 inference making and, 292–293 mathematics instruction and intervention and, 294–295, 307–308 opportunities to respond (OTR) and, 240 orthographic mapping and, 259 overview, 31, 235–238, 326 procedural computation and, 307–308 word reading and, 58–60 writing skills and, 86 Expository texts, 284–285, 286f Expressive responses, 256–257
F FastBridge, 335t, 336, 338, 341t, 344, 352t, 354t, 398 Feedback assessing the academic environment and, 44–45 classroom contingencies and, 46 data-based instructional decisions and, 385 explicit instruction and, 44, 235, 237 instructional feedback, 238–239 word reading and, 270 First/last sound identification, 160–161, 335t. See also Phonological awareness Flashcard practice, 272–275, 383–384 Fluency. See also Efficiency in reading; Procedural fluency; Word reading aligning assessment with interventions and, 251f assessing, 142f, 143, 147
comprehension difficulties and, 169 curriculum-based measurement (CBM) and, 331 developing oral reading fluency probes, 205–206 mathematics skills and, 72–75 orthographic mapping and, 259 progress monitoring and, 335t, 336, 341t, 342 reading comprehension and, 64–66 teacher interviews and, 103f word reading efficiency and, 62–64 writing skills and, 84, 86 Formative assessment, 328, 330. See also Assessment; Curriculum-based measurement (CBM); Progress monitoring Fraction Face-Off! (FFO), 310–311 Fractions, 77–78, 180, 181f, 188, 298f, 309–311 Frustrational level, 174, 178, 178f Fuchs Research Group, 335t, 341t Full alphabetic phase, 58 Functional analysis, 38–39 Functional assessment, 8, 38–39 Functional behavioral assessment (FBA), 8 Fusion intervention, 301–302, 307
G Games, 256–257, 325 General outcomes measurement (GOM) perspective. See also Curriculum-based measurement (CBM) case examples, 453–469, 455t, 456t, 458f, 459f, 463t, 466f, 467f, 468f mathematics instruction and intervention and, 351 overview, 21, 140–141, 330–333, 364–365 Geometry, 81, 180, 181f, 188–189, 314–315. See also Mathematics Goal-level material, 345–346 Goals and goal setting. See also Self-management interventions academic skills assessment and, 203–204 case examples, 444, 448 modifying, 387–389, 388f overview, 225, 233–234 progress monitoring and, 366–375, 366f, 370t, 372t universal screening and, 395 Good Behavior Game, 224 Grade level, 174–179, 178f, 345–346, 395. See also Goals and goal setting Grammar instruction and intervention and, 318f keystone model of writing and, 83f overview, 84 writing assessment and, 193f, 196–199, 197t Grapheme–phoneme correspondence. See Letter– sound correspondence Graphic organizers, 316, 325 Gross motor skills, 27–28
Group contingencies, 224 Group size, 386
H Handwriting instruction and intervention and, 317, 318f, 321, 322–323 keystone model of writing and, 83f overview, 83–84 permanent product review and, 129–130 teacher interviews and, 99–100 universal screening and, 394 writing assessment and, 193f, 198 Here, Hidden, and In My Head (3H) strategy, 287 Hypothesis formation assessment and, 130, 203–204 Data Summary Form for Academic Assessment, 218–219 determining instructional level in mathematics and, 191 teacher interviews and, 100, 106f, 136 Hypothesis-driven approach to assessment, 38–39, 148–149
I I do, we do, you do, 44, 235, 236–237, 240. See also Explicit instruction Idiographic perspective, 34–35, 88–89 Incentive programs, 234–235 Incremental rehearsal case examples, 454–460, 455t, 456t, 458f, 459f mathematics instruction and intervention and, 295 for number combinations, 306 overview, 275 word reading and, 272–274 Independent reading level, 174, 178, 178f Indirect interventions, 24–31. See also Intervention Individuals with Disabilities Education Act (IDEA), 390 Inference making, 65, 251f, 292–293 Informal assessments, 383–384. See also Assessment Informal Decoding Inventory, 383 Informational texts, 284–285, 286f Institute for Education Sciences of the U.S. Department of Education, 326 Instruction. See also Academic environment; Curriculum; Explicit instruction; Instructional decision making; Intervention academic environment and, 91 assessment and, 13, 18, 35 computer-adaptive tests and, 364 intensity of interventions and, 221–222, 221f moderately intensive strategies to make instruction more effective, 235–247
observational codes and, 115, 118 pace of, 45–46, 239–240 progress monitoring and, 377–389, 378f, 380f, 388f teacher interviews and, 94, 96, 100, 107 word reading and, 57, 58–60 Instructional Content Emphasis—Revised (ICE-R), 115, 118 Instructional decision making. See also Instruction; Instructional modification instructional design and, 245–247 progress monitoring and, 377–389, 378f, 380f, 388f universal screening and, 400–407, 401f, 402f, 403f, 404f, 405f, 406f Instructional environment. See Academic environment Instructional feedback, 238–239. See also Feedback Instructional level, 174–179, 178f, 191–192, 276–277, 345–346 Instructional modification. See also Instructional decision making; Intervention case examples, 456–457, 456t, 458f data-based instructional decisions and, 382–387 instructional design and, 245–247 integrated model of curriculum-based assessment and, 22f, 23 moderately intensive strategies to make instruction more effective, 235–247 overview, 220–223, 221f Instructional placement. See Placement Integrated model of curriculum-based assessment, 21–24, 22f. See also Curriculum-based assessment (CBA) Intellectual ability, 7–8, 87 Interdependent group contingencies, 224 Intervention. See also Instructional modification; Mathematics interventions; Reading interventions; Targets for intervention; Writing skills interventions; individual intervention strategies academic skills and, 24–32, 203–204, 247–248 assessment and, 2–5, 91, 182, 203–204 behavior-specific praise, 224 case examples, 443–453, 446f, 447f, 450f, 451f, 452f, 453f, 464–465, 466–467, 467f establishing and reinforcing rules, expectations, and routines, 223 explicit instruction, 235–238 group contingencies, 224 increasing student opportunities to respond and practice, 240–245 instructional design and, 245–247 instructional feedback, 238–239 intensity of, 221–223, 221f intensive interventions, 249–250, 325–327 intervention fidelity and, 382–383 less intensive interventions, 223–235, 227f, 228f, 229f, 231f
moderately intensive interventions, 235–247 overview, 5–7, 220–223, 221f pace of instruction and, 239–240 planning, 36 point systems and incentives, 234–235 reducing intensity of interventions and, 388–389 self-management interventions, 225–234, 225f, 227f, 229f, 231f specific and intensive interventions, 249–250, 325–327 teacher interviews and, 95–96, 100, 102f writing skills and, 84–85 Interviews, 22–23, 22f. See also Student interviews; Teacher interviews Inventories, 383–384 IRIS Center, 294, 326 Irregular words, 61–62, 262–266. See also Word reading Isolated word reading practice, 150, 270–275
J Judicious review, 245, 247
K Keyboarding skills. See Typing Keystone model academic skills and, 49 algebra and, 69f, 70–72, 81–82 geometry and measurement, 81 mathematics skills assessment and, 87–88, 179–180, 181f number combinations and, 72–75 overview, 40–41, 51–52, 51f, 69, 69f rational numbers and, 77–78 reading skills assessment and, 87–88, 142, 142f skill domains that support algebra and, 75–76 word problem solving, 78–80 writing skills assessment and, 82, 83f Knowledge building, 288–293, 324. See also Background knowledge and experiences
L Language. See also Language comprehension; Oral language assessment and, 142f, 201, 251f identifying targets for assessment and intervention and, 52, 89 instruction and intervention and, 432 keystone model and, 51–52, 51f, 70 mathematics language and vocabulary, 190 teacher interviews and, 97–98 word problem solving in mathematics and, 78
working memory and, 25 writing skills and, 85–86 Language comprehension. See also Comprehension; Language; Oral comprehension aligning assessment with interventions and, 251f assessing, 142f, 165–168, 166t reading comprehension and, 65, 66–67 simple view of reading (SVR) and, 49–51, 50f word problem solving in mathematics and, 80 Language disability, 98. See also Learning disabilities Learning, 28–31, 88, 326 Learning difficulties, 89. See also Students with difficulties in academic skills Learning disabilities. See also Difficulties in academic skills; Dyscalculia; Dyslexia; Reading disability; Students with difficulties in academic skills assessment and, 8, 13, 89 overview, 5–6, 249, 390 progress monitoring and, 412 word reading and, 149 writing skills and, 82, 84, 320 Learning environment. See Academic environment Letter–name knowledge assessing, 155–157, 156t, 158t progress monitoring and, 335t, 336, 337–341 universal screening and, 394 Letter-naming fluency (LNF) measures, 335t, 336, 337–341 Letter–sound correspondence assessment and, 13, 142f, 155–157, 156t, 158t, 251f keystone model of reading and, 51–52, 51f phonemic awareness and, 56 progress monitoring and, 335t, 336 reading development and, 55–56 reading interventions for, 255–257 sight words and, 61–62 universal screening and, 394 word reading and, 56, 57 word-building interventions and, 260 Letter–sound fluency (LSF) measures, 335t, 336, 337–341 Letter–word sound fluency, 335t Lexical Quality Hypothesis, 51, 64 Linguistic comprehension, 142f, 165–168, 166t, 187, 288–293. See also Comprehension; Language comprehension Listening comprehension, 142f, 165, 167–168, 172–173
M Main idea strategies, 283–284 Manipulatives, 297, 299–300, 299f, 316 Mastery, 15–18, 91, 143, 209–210 Mastery level of learning, 174, 178, 178f
Math facts, 353. See also Number combinations Mathematics. See also Mathematics interventions; Mathematics skills assessment algebra and, 68–69, 81–82 computation skills mastery curriculum, 209–210 conceptual understanding and procedural fluency and, 69–70 geometry and measurement, 81 interpreting assessment results and, 13–14 keystone model of mathematics and, 69, 69f number combinations and, 72–75 permanent product review and, 126 progress monitoring and, 350–351 rational numbers and, 77–78 skill domains that support algebra and, 75–76 teacher interviews and, 98–99, 99f, 100, 102f, 104f, 134 universal screening and, 394 word problem solving and, 78–80 Mathematics interventions. See also Intervention; Mathematics aligning assessment with, 296–297, 298f for calculations and number combinations, 302–307 case examples, 442, 449–450, 450f, 451, 452f, 453f for early numerical competencies (ENCs), 297, 299–302, 299f evidence-based principles of, 294–296 for fractions and other rational numbers, 309–311 for geometry and measurement, 314–315 identifying targets for assessment and intervention and, 68–82, 69f, 86–88 overview, 294, 317 for pre-algebra and algebra, 315–316 for procedural computation skills, 307–309 for word problem solving, 311–314, 312f Mathematics skills assessment. See also Assessment; Direct assessment; Mathematics algebra, 189 aligning intervention with, 296–297, 298f case examples, 424–427, 425f, 426t, 428f–429f, 431, 437, 439–440, 439t computation skills mastery curriculum, 209–210 curriculum-based measurement (CBM) and, 331 Data Summary Form for Academic Assessment, 214–216 early numerical competencies, 182–183, 184t examiner-created mathematics computation probes and, 207–208 fluency with number combinations, 181–182, 182t fractions and other rational numbers, 188 geometry and measurement, 188–189 identifying targets for assessment and intervention and, 68–82, 69f, 86–88 interpreting assessment results and, 191–192 mathematics language and vocabulary, 190
overview, 179–180, 181f procedural computation, 184–186 word problem solving and pre-algebraic reasoning, 186–188 Maze, 341t, 343 mClass, 352t, 354t Measurement, 81, 180, 181f, 188–189, 314–315. See also Mathematics Mediated scaffolding, 245, 246 Metacognition, 40, 225 Missing Number (MN) measures, 351, 352–353, 352t Modeling, 44, 237, 325 Model–lead–test, 44, 235, 236–237, 255–256, 261. See also Explicit instruction Modification, instructional. See Instructional modification Monitoring progress. See Progress monitoring Motivation data-based instructional decisions and, 382 goal-setting and self-graphing techniques and, 234 keystone model of writing and, 83f mathematics instruction and intervention and, 81–82, 296 reading comprehension and, 65 universal screening and, 394 writing assessment and, 193f writing skills instruction and intervention for, 84, 85, 318f Multicomponent writing interventions, 323–325 Multidigit computation. See also Computation assessment and, 184–185, 207–208 computation skills mastery curriculum, 209–210 instruction and intervention and, 307, 308 overview, 76 Multiple-Choice Reading Comprehension measures, 341t, 343–344 Multiplication, 76, 77, 207–208, 209–210 Multisyllabic words, 266–268 Multi-tiered system of supports (MTSS). See also Assessment; Core instruction; Instruction; Intervention; Progress monitoring; Response to intervention (RTI); Universal screening case examples, 469–473, 470f, 471f, 472f comprehensive MTSS programs, 418 Data Summary Form for Academic Assessment, 218 overview, 4–5, 390–393, 392f, 418–419 teacher interviews and, 100, 105f–106f, 135–136
N Narrative texts, 284–285, 286f National Center on Intensive Intervention (NCII), 293, 308, 326, 383, 397 Neuropsychological assessment, 30, 87 Nomothetic perspective, 34–35, 88–89
Nonsense words, 152, 335t, 336–341. See also Pseudoword decoding Normative data, 395 Norm-referenced goal-setting methods, 369–371, 370t Norm-referenced instruments. See also Assessment; Standardized measures assumptions in, 35 comprehension difficulties, 169t early literacy skills, 156t integrated model of curriculum-based assessment and, 23 number combinations, 182t oral reading, 144t, 146f overview, 7, 8–15, 10t–11t, 17–18, 140, 166t phonological and phonemic awareness, 159t word reading, 150–153 Number combinations assessment and, 180, 181–182, 181f, 182t, 184–185 instruction and intervention and, 298f, 302–307, 304f, 309 keystone model of mathematics and, 72–75 multidigit computation and, 76 procedural computation and, 309 progress monitoring and, 353–355, 354t universal screening and, 394 word problem solving and, 80 Number identification, 72, 351, 352t Number lines fraction operations and, 310 keystone model of mathematics and, 72 overview, 75, 299–300, 299f, 305 rational numbers and, 77 Number Rockets program, 307 Number sense, 71–72 Numerical competencies, 71–72, 180, 181f, 182–183, 184t. See also Early numerical competencies (ENCs)
O Observation. See Direct observation Observational codes. See also Direct observation assessing the academic environment and, 43, 113–118, 116t–117t, 118t Behavioral Observation of Students in Schools (BOSS), 118–125, 121t Off-task behavior, 43, 118–125, 121t. See also Behavior; Disruptive behavior One-to-one correspondence, 71, 297 On-task behavior, 43, 118–125, 121t. See also Academic engagement; Behavior Opportunities to respond (OTR) assessing the academic environment and, 42–43 classroom contingencies and, 46–47 data-based instructional decisions and, 385 overview, 31–32, 240–245, 326 pace of instruction and, 45–46
Oral comprehension, 97–98. See also Comprehension; Language comprehension Oral Counting (OC) measures, 351, 352t Oral language keystone model of writing and, 83f mathematics language and vocabulary, 190 word problem solving in mathematics and, 187 writing skills and, 85–86, 193f, 201 Oral reading. See also Reading efficiency; Reading fluency assessing, 142–149, 142f, 144t, 146f, 150, 151t, 205–206 curriculum-based measurement (CBM) and, 331 developing oral reading fluency probes, 205–206 misinterpretations of, 147–149 progress monitoring and, 341t, 342 writing assessment and, 193f Oral reading fluency (ORF), 341t, 342 Order irrelevance, 71 Organization in writing, 99–100 Orthographic mapping, 51, 57, 259 Orthography, 59, 161–162
P Pace of instruction, 45–46, 239–240. See also Instruction Partial alphabetic phase, 58 Passage prompts for writing assessments, 199–200 Passage reading. See also Reading skills assessment; Survey-level assessments Data Summary Form for Academic Assessment, 213 oral reading, 205–206 progress monitoring and, 341t, 342, 345–346 reading practice in connected text and, 275–281 Passive engagement, 42, 43, 118–125, 121t. See also Academic engagement; Engagement Peer tutoring, 240–244, 276–277, 322 Peer-assisted learning strategies (PALS), 241, 243–244, 276–277, 307. See also Peer tutoring Peer-comparison data, 112–113 Perceptual processes, 7, 27–28, 326 Performance deficits, 91, 202–203 Permanent product review. See Product review Persistence, 318f, 356 Phase theory, 51, 57–58, 60–61 Phoneme blending. See Blending; Phonemic awareness Phoneme deletion, 161–162. See also Phonemic awareness Phoneme reversal, 161–162. See also Phonemic awareness Phoneme segmentation. See also Phonemic awareness overview, 161, 162, 252–253 phonemic awareness and, 56 progress monitoring and, 334, 335t, 336, 337–341
Phoneme substitution, 161–162. See also Phonemic awareness Phonemic awareness advanced phonemic awareness skills, 161–162 assessment and, 142f, 159t, 251f keystone model of reading and, 51–52, 51f letter–sound acquisition and, 56 overview, 53–55, 54t, 160 reading interventions for, 252–255, 254f sight words and, 60–61 universal screening and, 394 word reading and, 56 Phonics instruction, 57, 58–60, 252, 257–259, 258f Phonological awareness assessing, 158–162, 159t case examples, 463–464, 463t connectionist frameworks and, 59 overview, 52–56, 54t, 160 reading interventions for, 252 teacher interviews and, 103f Pirate Math program, 313–314, 316 Placement case examples, 456, 462–465, 463t determining instructional level in mathematics and, 191–192 determining instructional level in reading, 174–179, 178f integrated model of curriculum-based assessment and, 22f Planning in writing assessment and, 193f instruction and intervention and, 318f, 324 keystone model of writing and, 83f overview, 84–85 progress monitoring and, 356 teacher interviews and, 99–100 Point systems, 234–235 Position and reason text structure, 284–285, 286f Practice aligning assessment with interventions and, 251f data-based instructional decisions and, 385 decoding instruction and, 258–259 goal-setting and self-graphing techniques and, 234 increasing student opportunities for, 240–245 mathematics instruction and intervention and, 295, 306–307 orthographic mapping and, 259 overview, 31–32 reading practice in connected text, 275–281 self-teaching hypothesis and, 59 word reading and, 57, 63–64, 270–275 writing skills and, 86 Praise, 224 Pre-algebraic reasoning, 186–188, 315–316 Pre-alphabetic phase, 58. See also Algebra Predictable texts, 278 Prefixes, 267 Primed background knowledge, 245, 247. See also Background knowledge and experiences
Print concepts, 335t Prior knowledge. See Background knowledge and experiences Problem and solution text structure, 284–285, 286f Problem solving, 2–5, 8, 19–20, 226, 328 Procedural computation. See also Calculation skills assessment and, 180, 181f, 184–186, 192 examiner-created mathematics computation probes and, 207–208 instruction and intervention and, 298f, 307–309 progress monitoring and, 353–355, 354t Procedural fluency, 69–70, 72, 76, 81. See also Fluency Product review assessing the academic environment and, 92, 93t, 110, 126, 129–130, 129f, 130f Behavioral Observation of Students in Schools (BOSS) and, 123 case examples, 424, 427, 430, 431, 440, 455 hypothesis formation and refinement, 130 integrated model of curriculum-based assessment and, 22–23, 22f writing assessment and, 201–202 Production-dependent scoring methods, 197–199, 198f, 356–357 Production-independent scoring methods, 356–357 Productivity, 234–235 Proficiency, 15–18, 91, 143 Progress monitoring. See also Assessment; Curriculum-based measurement (CBM); Multitiered system of supports (MTSS) academic skills assessment and, 204 behavior, 361 case examples, 443–453, 446f, 447f, 450f, 451f, 452f, 453f, 457–458, 459f, 465–466, 466f, 467f, 469–473, 470f, 471f, 472f characteristics of, 330–333 computer-adaptive tests and, 361–364 criterion-referenced tests and, 17 developing measures for, 346–347 early literacy skills, 334–341, 335t frequency of, 375 general outcomes measurement (GOM) perspective and, 140–141 graphing data from, 375–377, 376f, 386–387 instructional decisions and, 220–221, 377–389, 378f, 380f, 388f integrated model of curriculum-based assessment and, 22f, 23 mathematics skills, 296, 350–355, 352t, 354t in an MTSS framework, 410–418, 410t, 413f, 414f, 415f oral reading assessment and, 144 overview, 4, 7, 36, 327, 328, 330, 389, 393, 409–410, 419 probes for, 333 reading skills, 175, 282, 341–347, 341t selecting a measure for, 334, 365 setting a goal for, 366–375, 366f, 370t, 372t setting an end date for, 367
spelling skills, 349–350 vocabulary skills, 347–349 writing skills, 355–361, 358t, 359f Prompts, 284, 308, 325, 360–361. See also Cueing techniques Pronunciations, 260, 262–264, 276 Pseudoword decoding, 148, 152, 335t, 336–337. See also Nonsense words Punctuation, 196–199, 197t
Q Quantity Discrimination (QD) measures, 351, 352–353, 352t Question–answer relationship (QAR) strategy, 287 Questioning strategies, 285, 287 Quizzes, 325
R Rapid automatized naming (RAN), 157–158 Rate of improvement (ROI) goal-setting method, 369, 371–374, 372t Rating scales, 22f, 93t, 108–109, 125. See also Behavior rating scales Rational numbers, 77–78, 188, 298f, 309–311. See also Fractions Reading accuracy. See Accuracy in reading Reading comprehension. See also Comprehension assessment and, 142f, 150, 168–171, 169t, 173, 251f case examples, 448–449, 450, 451, 451f fluency and, 64–66 instruction and intervention and, 251f, 282–294, 286f keystone model of reading and, 51f overview, 65–66 progress monitoring and, 341t, 343–344 reading practice in connected text and, 276 simple view of reading (SVR) and, 49–51, 50f student-level variables in, 66–67 teacher interviews and, 97 vocabulary and background knowledge and, 288–293 word problem solving in mathematics and, 78, 187 writing skills and, 86 Reading Curriculum-Based Measurement (R-CBM), 462–463, 463t Reading development, 13, 57–62 Reading disability, 13. See also Learning disabilities Reading efficiency, 234, 269–282. See also Efficiency in reading Reading fluency. See Reading efficiency; Reading rate Reading interventions. See also Intervention; Reading skills for advanced word reading skills, 264–269 aligning with assessment results, 250, 251f
for basic/beginning reading, 252–264, 254f, 258f case examples, 432–434, 441, 444–445, 446f, 448–449, 450, 451, 451f, 464–465, 466–467, 467f improving word and text reading efficiency and, 269–282 overview, 282 packaged evidence-based programs for, 293–294 for reading comprehension and content-area reading, 282–294, 286f Reading rate assessing, 143, 147 brief experimental analysis (BEA) and, 147–149 curriculum-based measurement (CBM) and, 331 overview, 64–65 reading practice in connected text and, 276 Reading skills. See also Comprehension; Reading interventions; Reading skills assessment; Skill deficits; Word reading identifying targets for assessment and intervention and, 48–68, 50f, 51f, 54t, 86–88 keystone model and, 51–52, 51f, 83f language and, 52 mathematics language and vocabulary, 190 phonological awareness and alphabetic knowledge and, 52–56, 54t progress monitoring and, 341–347, 341t reading comprehension and, 64–67 simple view of reading (SVR), 49–51, 50f teacher interviews and, 97–98, 100, 101f–106f, 107, 133 universal screening and, 394 word reading and, 56–65 writing skills and, 85–86 Reading skills assessment. See also Assessment; Direct assessment; Reading skills aligning intervention with, 250, 251f case examples, 422–424, 423t, 431, 436–437, 436t, 438f common and less common situations in, 172–174 comprehension difficulties, 168–171, 169t curriculum-based measurement (CBM) and, 331 Data Summary Form for Academic Assessment, 212–214 determining instructional level and, 174–179, 178f developing oral reading fluency probes, 205–206 early literacy skills, 155–165, 156t, 158t, 159t, 163t, 164t interpreting assessment results and, 171–172 linguistic comprehension, 165–168, 166t monitoring progress in early reading skills and, 334–341, 335t oral reading, 142–149, 144t, 146f, 151t overview, 141–142, 142f phonological and phonemic awareness, 158–162, 159t spelling skills as a part of, 162–165, 163t, 164t survey-level assessment, 174–179, 178f word reading, 149–155, 154t
Reading systems framework (RSF), 51, 67 Reading words. See Word reading Reasoning, 65, 69f, 298f Receptive responses, 256–257 Referral problems, 1–5, 81–82, 95, 131. See also Targets for assessment; Targets for intervention Regrouping skills, 76, 307, 308 Reinforcement, 224, 230–231 Reliability, 36–37 Reminders. See Cueing techniques Repeated readings, 278–280, 281 Response, providing opportunities for. See Opportunities to respond (OTR) Response to intervention (RTI). See also Multi-tiered system of supports (MTSS) case examples, 469–473, 470f, 471f, 472f Data Summary Form for Academic Assessment, 218 overview, 4–5, 6, 390–393, 392f teacher interviews and, 100, 105f–106f, 135–136 Reviewing in writing, 84–85 Revision in writing assessment and, 193f instruction and intervention and, 318f, 324 keystone model of writing and, 83f overview, 84–85 teacher interviews and, 99–100 Rhyming, 160, 335t. See also Phonological awareness Rime-based strategies for decoding and spelling, 260, 265 Robust vocabulary instruction, 289–293. See also Vocabulary Root words, 267 ROOTS intervention program, 301, 307 Routines, 47, 223 Rules, 47, 223, 361. See also Expectations
S Say-It-and-Move-It activity, 254–255, 254f Scaffolding decoding instruction and, 257–258, 258f mathematics instruction and intervention and, 308 mediated scaffolding, 246 overview, 326 writing skills instruction and intervention and, 325 Schema-based instruction, 78–79, 311–313, 312f Scoring writing assessments, 196–199, 197t. See also Writing skills assessment Screening, universal. See Universal screening Segmenting, 160, 335t. See also Phoneme segmentation Self-graphing techniques, 233–234, 278 Self-instruction techniques. See also Self-management interventions mathematics instruction and intervention and, 308 overview, 225–230, 227f, 228f, 229f reading instruction and interventions and, 287
Self-management interventions. See also Intervention cueing, self-instruction, and self-regulated strategy techniques, 225–230, 227f, 228f, 229f goal-setting and self-graphing techniques, 233–234 overview, 225–234, 227f, 229f, 231f self-monitoring strategies and, 230–233, 231f Self-management spelling (SMS) procedure, 322 Self-monitoring strategies, 230–233, 231f Self-questioning strategy, 230, 285, 287. See also Questioning strategies Self-regulated strategy techniques, 225–230, 227f, 228f, 229f, 319–321, 323–325. See also Selfmanagement interventions Self-regulation. See also Executive functioning (EF) identifying targets for assessment and intervention and, 88 intervention methods that target, 26 overview, 40 progress monitoring and, 361 word problem solving in mathematics and, 80 writing assessment and, 83f, 193f, 201 writing skills and, 85 Self-teaching hypothesis, 51, 58–59, 60–61 Sensory processing, 87, 326 Sequence text structure, 284–285, 286f Serial processing, 270 Sight words, 58, 60–62, 262–264, 335t, 336. See also Word reading Silent reading fluency, 341t, 343–344 Simple view of reading (SVR), 49–51, 50f, 78 Situation-level variables, 65–66 Six traits of writing model, 360 Skill deficits. See also Reading skills academic skills assessment and, 202–203 assessing the academic environment and, 91 goal-setting and self-graphing techniques and, 234 teacher interviews and, 95–96, 97–98, 100–109, 101f–106f Slope rules approach to instructional decision making, 377–378, 378f Social, Academic, and Emotional Behavior Risk Screener (SAE-BRS), 409 Social functioning, 102f–103f Social Skills Improvement System Rating Scales (SSIS), 108 Social–emotional problems, 22 Sound boxes, 253–254, 254f Sounding out words, 56–57, 257–259, 258f. See also Word reading Special education services, 5, 328, 391–393, 392f, 412 Specific learning disability (SLD), 150, 390. See also Dyslexia; Dyscalculia; Learning disabilities Specific subskill mastery measurement (SSMM) case examples, 453–469, 455t, 456t, 458f, 459f, 463t, 466f, 467f, 468f overview, 21, 331–333, 364–365 Speed in reading, 234, 331. See also Reading rate
Spelling advanced phonemic awareness skills and, 161–162 assessment and, 13, 152, 162–165, 163t, 164t, 193f, 251f case examples, 430–431, 431t, 434 flexible use of decoding strategies and, 265 instruction and intervention and, 251f, 261–262 keystone model of writing and, 83f overview, 83–84 permanent product review and, 126, 129–130 phonemic awareness and, 56 progress monitoring and, 349–350, 356 reading practice in connected text and, 276 scoring writing assessments and, 196–199, 197t teacher interviews and, 98, 99–100 universal screening and, 394 word-building interventions and, 259–260 writing skills instruction and intervention and, 317, 318f, 321–323 SpringMath program, 308, 310, 315, 316, 332, 352t, 354t, 418 Standardized measures. See also Assessment; Normreferenced instruments academic skills assessment and, 140, 141 case examples, 460–469, 463t, 466f, 467f, 468f comprehension difficulties, 169t early literacy skills, 156t integrated model of curriculum-based assessment and, 23 for measuring language in academic assessment, 166t number combinations, 182t oral reading, 144t, 146f overview, 8–15, 10t–11t phonological and phonemic awareness, 159t word reading and, 150–153 State–Event Classroom Observation System (SECOS), 43, 114, 117t Story mapping, 285, 286f Story starters, 196, 197t, 199–200. See also Writing skills assessment Strategic counting, 302–306, 304f Strategic incremental rehearsal (SIR), 274, 275 Strategic integration, 245, 246–247 Strategies intervention model, 227, 230 Strengths identifying targets for assessment and intervention and, 88–89 teacher interviews and, 94, 95, 98 word reading and, 154–155, 154t Struggling readers and writers. See Difficulties in academic skills; Learning disabilities; Students with difficulties in academic skills Student interviews. See also Interviews assessing the academic environment and, 92, 93t, 125–126 case examples, 423–424, 426–427, 430, 437, 455, 462
Data Summary Form for Academic Assessment, 214, 216 data-based instructional decisions and, 384 hypothesis formation and refinement, 130 integrated model of curriculum-based assessment and, 22–23 Student Interview Form, 126, 127f–128f writing assessment and, 201–202 Students with difficulties in academic skills. See also Difficulties in academic skills; Learning difficulties curriculum-based measurement (CBM) and, 331 data-based instructional decisions and, 382–387 overview, 249 universal screening and, 394 writing skills and, 82–83 Subtraction assessment and, 183, 184–185 computation skills mastery curriculum, 209–210 early numerical competencies and, 183 examiner-created mathematics computation probes and, 207–208 fraction operations and, 77 instruction and intervention and, 302–307, 304f multidigit computation and, 76 number combinations and, 73–75 Suffixes, 267 Summarizing strategies, 283–284 Survey, Question, Read, Recite, Review (SQ3R) method, 230 Survey-level assessments, 19, 174–179, 178f Syllable blending. See Blending; Phonological awareness Syllable segmentation, 160. See also Phoneme segmentation; Phonological awareness Syllable-based strategies, 266–268 Syntax instruction and intervention and, 318f overview, 84, 85–86 progress monitoring and, 83f writing assessment and, 193f, 196–199, 197t Systematic instruction. See also Explicit instruction data-based instructional decisions and, 384–385 decoding instruction and, 257–259, 258f mathematics instruction and intervention and, 295 overview, 235 Systematic review, 385
T Targets for assessment. See also Assessment assumptions in the selection of, 34–37 identifying, 41–49, 86–87 mathematics skills and, 68–82, 69f neuropsychological and cognitive processes as, 87 observational codes and, 113–114 overview, 33–34, 87–89
reading skills and, 48–68, 50f, 51f, 54t teacher interviews and, 95, 131 writing skills and, 82–86, 83f Targets for intervention. See also Intervention identifying, 41–49, 86–87 mathematics skills and, 68–82, 69f neuropsychological and cognitive processes as, 87 overview, 33–34, 87–89 reading skills and, 48–68, 50f, 51f, 54t writing skills and, 82–86, 83f Task-level variables, 65–66, 88, 202–203, 229f Teacher interviews. See also Interviews assessing the academic environment and, 92, 93t, 94–109, 99f, 101f–106f assessing word reading and, 150 Behavioral Observation of Students in Schools (BOSS) and, 131–136 case examples, 422, 424, 426, 427, 430, 436, 437, 439, 454, 461–462 Data Summary Form for Academic Assessment, 211 data-based instructional decisions and, 384 efficiency and, 107–108 hypothesis formation and refinement, 130 integrated model of curriculum-based assessment and, 22–23 Teacher Interview Form, 94–107, 99f, 101f–106f, 130, 131–136, 150 Teacher talk, 46, 235, 237 Teacher-completed rating scales, 108–109. See also Rating scales Team-assisted individualization (TAI), 245 Team-based decisions, 411 Ten frames, 71–72, 299f Test of Word Reading Efficiency (TOWRE-2), 151–152 Test–retest reliability, 36–37 Text difficulty/complexity, 64, 173, 174, 276–277 Text reading, 172–173, 251f, 269–282. See also Reading skills; Word reading Text structure, 251f, 284–285, 286f Text-level variables, 65–66 Think-aloud processes, 383–384 Tiered approaches. See Multi-tiered system of supports (MTSS); Response to intervention (RTI) Timed measures, 151–152, 156–157 Timed practice, 271–272, 275–276, 278, 306–307. See also Practice Token economies. See Incentive programs; Point systems Transcription skills instruction and intervention and, 317, 318f, 319, 320–323 keystone model of writing and, 83f overview, 83–84 permanent product review and, 129–130 progress monitoring and, 355–361, 358t, 359f teacher interviews and, 99–100 writing assessment and, 193f, 196–199, 197t
TREE acronym for essay writing, 324 Tutoring, peer. See Peer tutoring Typing, 83–84, 83f, 193f, 323
U Universal screening. See also Assessment; Multi-tiered system of supports (MTSS) case examples, 469–473, 470f, 471f, 472f computer-adaptive tests and, 364 implementing, 397–407, 401f, 402f, 403f, 404f, 405f, 406f overview, 4, 391, 393, 394–395, 407–409, 419 selecting measures for, 396–397 survey-level assessment of reading skills and, 175 Untimed measures, 151–152, 156–157
V Validity, 12, 36–37 Visual processing, 27–28, 87, 230 Visual scaffolds, 257–258, 258f. See also Scaffolding Vocabulary assessment and, 13–14, 142f, 165–167, 166t, 172–173, 201, 251f, 347–348 instruction and intervention and, 251f, 288–293, 298f, 318f, 324 keystone model and, 51f, 52, 69f, 70 mathematics language and vocabulary, 190, 298f phonemic awareness and, 53 progress monitoring and, 335t, 347–349 reading comprehension and, 65, 66–67 reading practice in connected text and, 276 teacher interviews and, 103f word reading and, 61–62 word problem solving in mathematics and, 78, 80, 187 writing skills and, 85–86, 201, 318f, 324
W Weaknesses, 88–89, 94, 98, 154–155, 154t What Works Clearinghouse (WWC), 293–294, 326 Whole numbers, 71–72, 298f. See also Mathematics Whole-language approaches, 86 Whole-number concepts, 309. See also Early numerical competencies (ENCs) Wide reading, 279–280, 281 Word Attack, 464 Word boxes, 253–254, 254f Word callers, 169 Word families, 260 Word identification, 265, 335t, 336, 337–341, 464 Word list reading practice, 271–272, 274–275 Word meanings, 262, 276. See also Comprehension; Vocabulary
Word problems in mathematics assessment and, 180, 181f, 186–188 early algebraic reasoning and, 78–80 fraction operations and, 77 geometry and measurement and, 189 instruction and intervention and, 298f, 311–314, 312f progress monitoring and, 354t, 355 Word reading. See also Decoding; Fluency; Sight words assessment and, 13, 142f, 149–165, 154t, 156t, 158t, 159t, 163t, 164t, 172–173, 251f comprehension difficulties and, 168–169 determining instructional level and, 174 early literacy skills, 155–165, 156t, 158t, 159t, 163t, 164t identifying strengths and weaknesses in, 154–155, 154t instruction and intervention and, 251f keystone model of reading and, 51–52, 51f overview, 56–62 progress monitoring and, 335t, 336–337 reading comprehension and, 64–66 reading instruction and interventions for, 257–259, 258f, 261–262, 269–282 reading practice in connected text, 275–281 simple view of reading (SVR) and, 49–51, 50f teacher interviews and, 98, 103f theories of word reading acquisition, 57–62 universal screening and, 394 word reading efficiency, 62–64 word-building interventions and, 259–260 word problem solving in mathematics and, 78 Word reading fluency, 151–152, 335t, 336. See also Fluency; Sight words Word recognition, 270, 335t Word-building interventions, 259–260 Word-processing programs, 323 Work samples. See Product review Working memory. See also Executive functioning (EF) advanced phonemic awareness skills and, 162 identifying targets for assessment and intervention and, 87
intervention methods that target, 24–25, 26 need for intensive interventions and, 326 reading comprehension and, 65 Worksheet methods, 306–307 Writing Architect, 195t, 199–200, 357–359 Writing skills. See also Writing skills assessment; Writing skills interventions identifying targets for assessment and intervention and, 82–88, 83f keystone model of writing and, 82, 83f motivation and self-regulation and, 85 oral language and reading and, 85–86 overview, 82 permanent product review and, 126, 129–130, 129f, 130f progress monitoring and, 355–361, 358t, 359f struggling writers, 82–83 teacher interviews and, 99–100, 102f, 104f–105f, 107, 134–135 transcription and composition skills, 83–85 universal screening and, 394 Writing skills assessment. See also Assessment; Direct assessment; Writing skills case examples, 427, 429–432, 430f, 431t CBM approach to, 193–201, 193f, 194t–195t, 197t, 198f Data Summary Form for Academic Assessment, 216–218 interpreting assessment results and, 202 overview, 192–201, 193f, 194t–195t, 197t, 198f scoring, 196–199, 197t self-regulation and, 201 using permanent product reviews and student interviews in, 201–202 Writing skills interventions. See also Intervention; Writing skills aligning assessment with, 317–319, 318f case examples, 434 evidence-based components of, 319–325 overview, 317 self-regulated strategy development (SRSD) techniques and, 319–321, 323–325 for transcription skills, 320–323