
Advance Praise for

Teaching Improvement Science in Educational Leadership: A Pedagogical Guide

“Teaching Improvement Science in Educational Leadership is an essential pedagogic resource for anyone involved in the preparation and continued professional education of teacher, school, or system leaders. The authors are themselves leaders in the teaching of Improvement Science and in mentoring the application of the improvement principles to redressing racial and class inequities. They share here valuable lessons from their own teaching and improvement efforts.”

Anthony S. Bryk, Immediate Past President, Carnegie Foundation for the Advancement of Teaching, and author of Learning to Improve: How America’s Schools Can Get Better at Getting Better

“Teaching Improvement Science in Educational Leadership: A Pedagogical Guide, edited by Spaulding, Crow and Hinnant-Crawford, explores ways to teach Improvement Science. It is the fourth book in Myers Education Press’s Improvement Science in Education Series, wherein—in useful detail—several experts in Improvement Science offer a plethora of strategies to teach Improvement Science in higher education classrooms, in school districts, and in professional organizations. For practitioners and scholars who are interested in transforming U.S. education for the better, this book is a critically important tool.”

Kofi Lomotey, Bardo Distinguished Professor in Educational Leadership, Western Carolina University

Teaching Improvement Science in Educational Leadership

The Improvement Science in Education Series

Improvement Science (IS) originated in fields such as engineering and health care, but it has been found to be an effective school improvement methodology in education. Although improvement science is quickly becoming a signature pedagogy and core subject area of inquiry in the field of educational leadership, the literature is still scant in its coverage of IS models. The Improvement Science in Education series is intended to be the most comprehensive collection of volumes to inform educators and researchers about problem analysis, utilization of research, development of solutions, and other practices that can be employed to enhance and strengthen efforts at organizational improvement. This series concentrates on the elements faculty, students, and administrators need to enhance the reliability and validity of improvement or quality enhancement efforts.

Books in the Series

The Educational Leader’s Guide to Improvement Science: Data, Design and Cases for Reflection by Robert Crow, Brandi Nicole Hinnant-Crawford, and Dean T. Spaulding (2019)

The Improvement Science Dissertation in Practice: A Guide for Faculty, Committee Members, and their Students by Jill Alexa Perry, Debby Zambo, and Robert Crow (2020)

Improvement Science in Education: A Primer by Brandi Nicole Hinnant-Crawford (2020)

Teaching Improvement Science in Educational Leadership: A Pedagogical Guide by Dean T. Spaulding, Robert Crow, and Brandi Nicole Hinnant-Crawford (2021)

Improvement Science: Methods for Researchers and Program Evaluators by Brandi Nicole Hinnant-Crawford, Dean T. Spaulding, and Robert Crow (2022)

Improvement Together: Case Studies of Networked Improvement Science Communities by Robert Crow, Brandi Nicole Hinnant-Crawford, and Dean T. Spaulding (2022)

Improvement Science: In the Multidisciplinary Arena by Robert Crow, Brandi Nicole Hinnant-Crawford, and Dean T. Spaulding (2022)

Editorial submissions Authors interested in having their manuscripts considered for publication in the Improvement Science in Education Series are encouraged to send a prospectus, sample chapter, and CV to any one of the series editors: Robert Crow ([email protected]), Brandi Nicole Hinnant-Crawford ([email protected]), or Dean T. Spaulding ([email protected]).

Teaching Improvement Science in Educational Leadership
A Pedagogical Guide

edited by Dean T. Spaulding, Robert Crow, and Brandi Nicole Hinnant-Crawford

Copyright © 2021 | Myers Education Press, LLC

Published by Myers Education Press, LLC
P.O. Box 424
Gorham, ME 04038

All rights reserved. No part of this book may be reprinted or reproduced in any form or by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying, recording, and information storage and retrieval, without permission in writing from the publisher.

Myers Education Press is an academic publisher specializing in books, e-books, and digital content in the field of education. All of our books are subjected to a rigorous peer review process and produced in compliance with the standards of the Council on Library and Information Resources.

Library of Congress Cataloging-in-Publication Data available from Library of Congress.

13-digit ISBN 978-1-9755-0375-8 (paperback)
13-digit ISBN 978-1-9755-0374-1 (hard cover)
13-digit ISBN 978-1-9755-0376-5 (library networkable e-edition)
13-digit ISBN 978-1-9755-0377-2 (consumer e-edition)

Printed in the United States of America.

All first editions printed on acid-free paper that meets the American National Standards Institute Z39-48 standard.

Books published by Myers Education Press may be purchased at special quantity discount rates for groups, workshops, training organizations, and classroom usage. Please call our customer service department at 1-800-232-0223 for details.

Cover and text design by Sophie Appel.

Visit us on the web at www.myersedpress.com to browse our complete list of titles.

contents

Dedication ix
Acknowledgments xi
Introduction. The Need for Curating a Repertoire of Improvement Science Pedagogies • Dean T. Spaulding, Brandi Nicole Hinnant-Crawford, & Robert Crow xiii
Chapter 1. A Pedagogy for Introducing the Improvement Science Method: The Personal Improvement Project • Robert Crow 1
Chapter 2. Who Is Involved? Who Is Impacted? Teaching Improvement Science for Educational Justice • Brandi Nicole Hinnant-Crawford, Ricardo Nazario y Colón, & Tacquice Wiggan Davis 17
Chapter 3. Finding Problems, Asking Questions, and Implementing Solutions: Improvement Science and the EdD • Jill Alexa Perry & Debby Zambo 43
Chapter 4. Teaching the Design of Plan-Do-Study-Act Cycles Using Improvement Cases • Chad R. Lochmiller 63
Chapter 5. Embedding Improvement Science in One Principal Licensure Course: Principal Leadership for Equity and Inclusion • Susan P. Carlile & Deborah S. Peterson 89
Chapter 6. Embedding Improvement Science in Principal Leadership Licensure Courses: Program Designs • Deborah S. Peterson, Susan P. Carlile, Maria Eugenia Olivar, & Cassandra Thonstad 103
Chapter 7. The Essential Role of Context in Learning to Launch an Improvement Network • Emma Parkerson, Kelly McMahon, & Barbara Shreve 119
Chapter 8. From Learning to Leading: Teaching Leaders to Apply Improvement Science Through a School–University Partnership • Segun C. Eubanks, Margaret McLaughlin, Jean L. Snell, & Charoscar Coleman 141
Chapter 9. Empowering Incremental Change Within a Complex System: How to Support Educators to Integrate Improvement Science Principles Across Organizational Levels • Jacqueline Hawkins & Monica Martens 163
Chapter 10. Aligning Values, Goals, and Processes to Achieve Results • Ryan Carpenter & Kathleen Oropallo 187
Chapter 11. Toward a Scholarship of Teaching Improvement: Five Considerations to Advance Pedagogy • LaRena Heath, Barbara Shreve, Louis M. Gomez, & Paul G. LeMahieu 207
About the Authors 215
Index 227

dedication

We dedicate this volume to educational leaders. Although it may be trite to call them unsung heroes, it is no less true. The individuals who go to work each day seeking to improve opportunities to learn, issues of access, and student experiences are such an essential part of our educational systems. It is leaders such as these who stay up at night brainstorming solutions to complex problems of practice. Therefore, we dedicate this text to the leaders—and the brave teachers, professors, coaches, and consultants who try to teach leaders the science of improvement.


acknowledgments

Acknowledgments could easily go on for pages, because books are always the culmination of hard work from the authors as well as those who support the authors. We would like to begin by thanking each contributor for sharing their knowledge about how to teach improvement science within this volume. We would also like to thank Chris Myers, Stephanie Gabaree, and the team at Myers Education Press for a smooth and seamless process. And we would like to thank those closest to us—Robert, Evan, Rose, Elizabeth, and Elijah—as we write on time borrowed from our families. We love you and thank you for your understanding.

introduction

The Need for Curating a Repertoire of Improvement Science Pedagogies

Dean T. Spaulding
Z Score Inc.

Brandi Nicole Hinnant-Crawford
Western Carolina University

Robert Crow

Western Carolina University

As faculty teaching in higher education or school leaders working to improve teacher practices in the K–12 arena, we find teaching the adult learner certainly rewarding, although at times a challenge. However, no matter what challenges we face as instructors, one thing remains true: We all want to deliver effective instruction to our students/staff, and we want our students/staff to integrate that instruction into their daily practices as school leaders and educators and to change the environments in which they themselves practice. This building of shared knowledge is the most fundamental element for which we exist as educators. However, for this goal to happen, the change has to start with us—the instructors. And this change can only take place if we examine, assess, and modify our own curriculum. Adopting and integrating improvement science practices into your curriculum is perhaps one of the most effective and time-proven things you can do to effect change. Initially, changing one’s curriculum to include improvement science practices may seem like the obvious choice. However, as the chapters in this book will tell you, it can be a complex task. You may be willing to modify your instruction to include the teachings of improvement science, but what about others around you? Members of your own department may not see the need for such practices; traditional textbooks may appear counterproductive by focusing on methodologies used to answer research hypotheses rather than solve deeply rooted problems in our educational systems; accreditation agencies may not be knowledgeable about improvement science and may refer to such practices as “not rigorous,” placing your program in jeopardy. And then there are the students/staff themselves, who may become frustrated at having to think about addressing real problems rather than collecting meaningless data to fulfill some benign assignment or professional development activity. And these challenges are just the beginning. No, changing your curriculum to include improvement science is the easy part; convincing others that it is for a greater good is the more difficult task at hand. But that is where this book comes in . . .

Approaches to Developing Capacity for Improvement Science

From early on, there have been disparate avenues through which education leaders could develop competency in the field of improvement science. Graduate schools of education do not hold a monopoly on where the principles and application of improvement science are taught. We are currently seeing a multipronged approach to the spread of improvement science that includes a focus on the improvement capacity of school districts. We are also seeing field-building from sources supporting the development of the science of improvement. Pathways for developing facility with improvement science, therefore, run through the curriculum of graduate schools of education, through professional training for district school leadership, and through academic support entities that conduct the research and development underlying these developmental approaches.

A major approach for creating capacity in the field of improvement science focuses on doctoral programs in educational leadership. Scholars in this arena began to rethink the purpose of the education doctorate, including the form and format of the dissertation in practice, over a decade ago. During the course of this work, programs in educational leadership began focusing efforts on developing the scholarly practitioner as a student learning outcome. As scholar-practitioners, recent graduates from redesigned programs were assumed to enter their professional contexts prepared to single-handedly change culture; the assumption was that by graduating a critical mass of reform-minded leaders, the field would develop over time. This perspective obscures the fact that the graduate simultaneously serves in a martyr’s role to the extent that one’s context is mired in maintaining the status quo. Doctoral faculty who took the “one-off” approach of sending graduates to do battle with districts in promoting this emerging field quickly realized it was insufficient for spreading the knowledge, skills, and dispositions comprising the field of improvement science.

Another approach building momentum for imparting the knowledge, skills, and dispositions comprising improvement science is field-building at the school district level. Effort to develop the entire organization is an integral element in a multipronged approach, as field-building at the local education agency level is taking root. Professional development opportunities, such as the Carnegie Foundation’s Summit for Improvement in Education as well as specialized district team training by the foundation and other improvement science experts, provide both foundational and applied learning opportunities to develop district personnel. Past events have been heavily attended by district improvement teams sent to professional gatherings to bring improvement science know-how back to the home organization.

The final approach for developing others’ understanding and use of the principles and frameworks of improvement science is through the professional organizations dedicated to providing the research, development, and underlying architecture supporting the science of improvement. Not only are professional organizations building foundational knowledge around what improvement science is, they are also formulating the frameworks, principles, and—most important—applications needed for those who wish to practice this craft. Working hand in hand to develop graduate faculty has been just one of the strategies undertaken by the Carnegie Foundation for the Advancement of Teaching. Through an agenda of professional development offerings, the foundation has set its sights on district personnel, and more specifically district teams, toward developing a shared understanding of the potential for improvement science to “move the needle” in organizational and institutional improvement.

Developing Instructional Expertise

Initially, those who find themselves in front of an eager audience of would-be improvement scientists might have trepidation about their current level of expertise with teaching the subject matter. The field of improvement science is new to education, and those charged with moving it forward by developing others’ facility with the subject may not have had the luxury of exposure to decades of developed learning, as is the case with more traditional academic subjects. Such knowledge and insight are needed in order to establish the curriculum and pedagogy of improvement science. Perhaps more important, the goal is to develop one’s specific pedagogical content knowledge (Shulman, 2005) relative to teaching the disparate components comprising the whole of improvement science. It is difficult to teach something that is new to you, or with which you feel you are not fully familiar. And although readers may be experts in leadership, research methods, assessment and evaluation, higher education, or curriculum and instruction, the task of teaching improvement science may seem a little daunting. The scholars here share assignments and activities that readers can use within their own teaching settings to illuminate the science of improvement for their students or colleagues. However, the book is not only a collection of pedagogical approaches; it also helps solidify the reader’s knowledge of the foundations of improvement science. As one reads this text to prepare for a course, unit, or in-service training on improvement science, one must also consider trying some of the activities—for yourself, initially. For example, the editors endorse every instructor going through the personal improvement project as described in chapter 1.

The pedagogy of improvement science is varied, and accordingly, the contexts of the following chapters give the reader wide experience with applied concepts. The content of this book is informative because the reader is presented with a range of pedagogies from a variety of viewpoints and approaches. The book gives the reader a holistic picture of how one might develop stakeholder competency and capacity with improvement science as a signature problem-solving methodology for educators. And although there are books that provide foundational knowledge on the methodology of improvement science (e.g., Bryk et al., 2015; Crow et al., 2019; Hinnant-Crawford, 2020; Langley et al., 2009), this book differs in that we present varying approaches for teaching others about improvement science. We have written the book because we know that those who desire to develop the methodology need resources. As such, the book provides the illustrations, examples, and other concrete applications so that those involved in teaching the subject matter may connect their foundational knowledge of improvement to the applied context.

What you will find in this book is a collection of example pedagogies—approaches that are used in different contexts for different purposes. In chapter 1 you will not only be introduced to the improvement science process but will see firsthand how one faculty member introduces the process through a personal improvement project. Introducing your students/staff to improvement science through a personal improvement project is one of the most effective ways to get them excited about improvement science. Chapter 2 examines the misconception, held by many who are new to improvement science, that an improvement science project and social justice must be mutually exclusive initiatives. This chapter will mend that misperception and show instructors how to successfully merge these two areas seamlessly. Chapter 3 will provide the instructor with a “box” full of tools that are the essential components of any effective improvement science project. Faculty will learn about tools such as driver diagrams and improvement cycles (plan-do-study-act), along with comparing (and contrasting) the differences between the traditional research approach (PhD) and the dissertation in practice (EdD). This should be valuable information for faculty who are interested in reexamining and perhaps revamping their current doctoral programs, particularly those in educational leadership. The reader will be introduced to plan-do-study-act (PDSA) improvement cycles in chapter 3; chapter 4 provides more in-depth information regarding improvement cycles and how to apply them to case studies—a critical activity in providing as realistic an environment as possible for students and stakeholders to practice and hone their improvement science skills. Chapter 5 examines the collaborative revision process developed by an administrator program leadership team as they increase their capacity to deepen and broaden the use of improvement science to prepare effective, equity-focused principals. Chapter 6 illustrates how the faculty in a principal licensure program integrated improvement science into the program’s courses in response to the state’s new school administrator licensure standards that emphasize improvement. Readers will also examine the project’s timeline and successful strategies for collaboration among the program faculty to complete the redesign work. The chapter explores not only the successes but also the challenges and barriers that were met during this process. Chapter 7 shares lessons learned by faculty who have guided groups of educators, university faculty, and district leaders in developing the components they would need to form new networked improvement communities. In chapter 8, readers review how one institution of higher education explored the impetus for the evolution of a practitioner doctorate program, and the influence and impact of improvement science in this program transformation. Chapter 9 provides practical activities to illustrate the connections between the political landscapes within which people function, change management within systems, and the principles of improvement science. This chapter pairs the study of incremental change with the application of pedagogical tools to assist faculty with their teaching, curriculum development, and ongoing improvement of EdD programs. Finally, in chapter 10 readers are introduced to the journey of a school system as it embarked upon aligning priorities, developing systems to achieve defined success, and undergoing cycles of improvement.

The time is now for providing a guide that can equip educational practitioners and scholars with the know-how to implement the signature problem-solving methodology for educational transformation. We hope readers find enlightenment in the many facets of knowledge, skills, and dispositions development relative to the field of improvement science. But even more so, we hope this text demystifies the task of teaching improvement science. Ivan Illich (1971) once said, “Most learning is not the result of instruction. It is rather the result of unhampered participation in a meaningful setting” (p. 39). Improvement science and the pedagogies that accompany it provide readers the opportunity to construct meaningful settings in which their pupils can grapple with improvement science and its praxis and, in turn, go out and change the world.

References

Bryk, A., Gomez, L., Grunow, A., & LeMahieu, P. (2015). Learning to improve: How America’s schools can get better at getting better. Harvard Education Press.

Crow, R., Hinnant-Crawford, B. N., & Spaulding, D. T. (2019). The educational leader’s guide to improvement science: Data, design, and cases for reflection. Myers Education Press.

Hinnant-Crawford, B. N. (2020). Improvement science in education: A primer. Myers Education Press.

Illich, I. (1971). Deschooling society. Harper & Row. https://globalintelhub.com/wp-content/uploads/2013/07/DeschoolingSociety.pdf

Langley, G., Moen, R., Nolan, K., Nolan, T., Norman, C., & Provost, L. (2009). The improvement guide: A practical approach to enhancing organizational performance (2nd ed.). Jossey-Bass.

Shulman, L. (2005). Signature pedagogies in the professions. Daedalus, 134(3), 52–59.

chapter one

A Pedagogy for Introducing the Improvement Science Method
The Personal Improvement Project

ROBERT CROW

Western Carolina University

Abstract

Many aspiring educational leaders may be unfamiliar with improvement science principles and frameworks. When developing leaders, like the ones enrolled in the doctoral program in which I teach, one approach for introducing improvement science principles and applications is through a learner-centered lens. The personal improvement project (PIP) provides such a lens for learners new to the concepts comprising the field of improvement science. The PIP is a learning exercise that affords an opportunity for students to engage in a novel experience and to use prior knowledge and already learned, personally relevant information to make connections to new concepts from the field of improvement science. The ability to contextualize a problem is important for upholding the two improvement principles of making the work problem-specific and user-centered (Bryk et al., 2015). Further, constructing learning opportunities that students find personally meaningful, such as in the case of the PIP, is in step with cognitive theorists who posit that it is the successful learner who “links new information with existing knowledge in meaningful ways” (Sternberg & Williams, 2002, p. 446). Because the links between novel incoming information and one’s existing knowledge are integrative and mutually reinforcing, pedagogies that strengthen these associations enable learners to more readily transfer their newly acquired skills to novel tasks and situations. It is due to this transferability that I find the PIP to be an effective approach for introducing the improvement science method to developing education leaders.

The act of internalizing external frameworks is a cognitive activity. Learning theorists posit that the signs and symbols of the external environment, such as letters comprising the alphabet, become internalized over the learning period as cognitive structures and mental operations that are later called on based on cognitive demands (Wertsch & Stone, 1999). Therefore, the purpose of this chapter is to illustrate a pedagogy for achieving the learning outcome of internalizing the improvement method. Through engagement in the PIP, the principles and applications of improvement science can be learned through the lens of personal relevancy.

Background

In learning to use improvement science principles and applications in the educational setting, I have found it useful for my students to first develop a foundational knowledge base by viewing these elements through a personally relevant lens. Therefore, for the purpose of this chapter, I present a hypothetical example of a PIP focused on exercise and movement—or, more specifically, the problem of a lack of personal motility. To further the example, the overall aim of my personal improvement initiative is to increase the relative amount of mobility in which I presently engage. To be more specific and in better alignment with SMART goals—that is, goals that specify measurable, time-bound outcomes—my goal is to increase overall engagement in high-aerobic activities such that I sustain at least 20 minutes of high-intensity cardiovascular activity four times per week. In this case, high intensity is considered activity that causes a minimum heart rate of 120 beats per minute (bpm).
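Because the aim is stated in measurable, time-bound terms, it can be checked mechanically against a simple activity log. The sketch below is a minimal Python illustration of that idea; the log format, field order, and sample values are hypothetical assumptions, not drawn from the chapter.

```python
# Minimal sketch: checking one week of logged exercise against the SMART aim
# (at least 4 sessions per week, each sustaining >= 20 minutes at >= 120 bpm).
# The log structure and sample values are illustrative assumptions.

MIN_BPM = 120          # "high intensity" threshold (beats per minute)
MIN_MINUTES = 20       # sustained minutes required per session
SESSIONS_PER_WEEK = 4  # weekly frequency required by the aim

# One hypothetical week of sessions: (sustained minutes, average bpm)
week_log = [(25, 131), (10, 118), (22, 124), (30, 127), (20, 122)]

def qualifying_sessions(log):
    """Count sessions meeting both the duration and intensity thresholds."""
    return sum(1 for minutes, bpm in log
               if minutes >= MIN_MINUTES and bpm >= MIN_BPM)

def aim_met(log):
    """Outcome check: did this week satisfy the measurable, time-bound goal?"""
    return qualifying_sessions(log) >= SESSIONS_PER_WEEK

print(qualifying_sessions(week_log), aim_met(week_log))  # -> 4 True
```

Writing the aim this precisely is what makes the later question of whether changes led to improvement answerable from data rather than impression.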


Using Self as a Lens for Understanding the Problem

As mentioned, the PIP is an instructional exercise designed to allow learners to gain experience studying and applying improvement science principles and practices. As a whole, the PIP can take on a range of possible forms and foci. Regardless, all improvement initiatives should focus on the primary goal of answering the question: What are you aiming to improve? Typical examples of students’ past PIP projects include topics that address the “too little of x” or the “too much of y” conundrum. As you might infer, the theme of too-little-of/too-much-of, delineating the actual versus the ideal, underlies most, if not all, improvement projects. As stated earlier, the problem of a lack of motility in my daily routine represents a too-little-of condition; through my goal to increase the behavior (in this case, sustained aerobic activity), I sought to rectify the discrepancy between the actual condition and the ideal one (Archbald, 2014). Through the PIP, I launch the journey of moving from my actual condition, one that did not involve any meaningful quantity or intensity of aerobic exercise, toward the ideal state, a (somewhat) regular exercise routine included in my daily schedule. The pathway to achieving the goal sounds elusive, and at first glance it is. However, through an improvement lens, students come to understand and address their problem by employing techniques such as systems and/or process mapping and causal systems analysis. They envision and enact ways toward solutions through driver diagramming and plan-do-study-act cycles. Through engagement in the PIP, an in-depth modus operandi can develop that will, with time and practice, become an internalized orientation to practical problem finding and solving.

What’s the Problem?

A primary activity I like to introduce in my classes early in the PIP is problem exploration. A major part of this exploration is establishing a problem definition. The 5 Why’s is one of several techniques I have found useful for addressing the rationale and justification supporting problem definition and framing (Bryk et al., 2015). Using the PIP topic focused on motility, one might begin by asking the first of a series of Why questions: Why is too-little-of, or too-much-of, x a problem? Or, as it is in our case, Why is a lack of motility a problem? As a possible response, one might postulate, based on the 5 Why’s technique, that because of a lack of strenuous and sustained exercise, the heart does not receive cardiovascular benefit. Why is a lack of cardiovascular activity a problem? According to the American Heart Association (n.d.), 20 minutes of sustained cardiovascular activity per day is recommended in order to maintain a healthy lifestyle. Consult Table 1.1 for help with the process of defining the parameters of the problem, adapting the descriptors for one’s personally relevant PIP; a brief sketch of the drill-down follows the table.

Table 1.1. An Improvement Science Lens for Actionable Problems of Person

Urgent for the self: Problem arises out of a perceived need by the person affected and can be sourced from personal record collecting and keeping.
Actionable: Problem exists within the individual’s sphere of influence.
Feasible: Problem prioritization can occur that considers timeframe and resources.
Strategic: Problem is connected to the goal of the individual affected.
Tied to a specific set of practices: Problem is narrowed to high-leverage practice(s) yielding incremental new learning through plan-do-study-act (PDSA) cycles.
Forward-looking: Problem reaches toward the next level of improvement—scaling up and sustainability.

Note: The construct “problem of person” (the focus of the PIP) replaces “problem of practice.” Source: Adapted from Perry et al. (2020).
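The 5 Why’s drill-down itself is nothing more than a chain of question-and-answer pairs that stops once an actionable root cause is reached. The sketch below models the motility example in that spirit; the wording of the answers paraphrases the chapter, and the chain is deliberately left incomplete.

```python
# Minimal sketch of a 5 Why's chain for the motility example.
# The answers paraphrase the chapter's reasoning; they are not a fixed script.

problem = "I engage in too little sustained aerobic activity."

whys = [
    ("Why is a lack of motility a problem?",
     "Without strenuous, sustained exercise the heart receives no "
     "cardiovascular benefit."),
    ("Why is a lack of cardiovascular benefit a problem?",
     "Sustained cardiovascular activity is recommended to maintain "
     "a healthy lifestyle."),
    # ...continue asking why until an actionable root cause is reached.
]

print(f"Problem statement: {problem}")
for depth, (question, answer) in enumerate(whys, start=1):
    print(f"{'  ' * depth}Why #{depth}: {question}")
    print(f"{'  ' * depth}   -> {answer}")
```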

What’s Causing the Problem?

Once the issue has been identified, the underlying root causes contributing to the problem can be uncovered. Problem identification fulfills an essential improvement principle: see the system producing the outcome (Bryk et al., 2015). Once identified, the improver goes on to conduct a causal systems analysis to articulate the set of underlying, or root, causes found to be responsible for that problem’s existence. Figure 1.1 depicts the general structure of the causal systems diagram, consisting of the problem statement and the major and minor contributory root causes determined to be responsible for perpetuating the problem at hand. The fishbone diagram is composed of specific elements. The first and perhaps most notable feature is the problem statement, expressed in the left-hand area (or head of the fish skeleton) of the diagram. Contributing factors, or more correctly, the root causes found to be responsible for the existence of the problem, appear respectively in the areas represented by the word Category. Each category of root causes is then further deconstructed into the discrete, granular elements comprising that category. Upon conclusion of the causal systems analysis work, the finished diagram provides stakeholders with a nearly complete roadmap that highlights the causal factors that need to be addressed in the next phase of the improvement project—determining which changes might be introduced that could lead to improvements.

Figure 1.1. Causal systems diagram (a.k.a. fishbone diagram): a problem statement at the head of the fish skeleton, with cause categories branching into individual contributing causes.


Causal systems diagramming is an important technique for creating an understanding of the problem context (ultimately, a shared understanding if undertaken collaboratively). The diagram functions as a means for collecting and organizing current knowledge about the underlying causal factors found to be responsible for perpetuating the problem at hand (Langley et al., 2009). The causal analysis requires improvers to “direct attention to the question, Why do we get the outcomes that we currently do?” (Bryk et al., 2015, p. 198). One may undertake a causal systems analysis (CSA) in a variety of ways. In the case of the PIP illustration, we will construct a fishbone diagram. Doing so allows one to pinpoint contributory root causes underlying the problem at hand. During the process of constructing the diagram, the set of root cause(s) responsible for the existence or perpetuation of a problem can be garnered from several sources. While not exhaustive, one can turn to at least three avenues for determining root causes—(a) through findings and other results in the published literature, (b) by tapping into the local knowledge base, and (c) through stakeholder engagement and dialogue, as follows:

• Literature scan (and other published information)—published results and other findings appearing in peer-reviewed journals and other scientific papers
• Local knowledge (including anecdotal)—leaders’ years of experience, professional organizations and networks, institutional data
• Stakeholder engagement—a variety of first-person perspectives on experiences

From our PIP example of increasing opportunities to engage in an elevated cardiovascular rate, much can be learned by exploring the underlying causes hampering engagement in strenuous activity. These underlying elements are the actual root causes contributing to the existence of the problem at hand. The causal analysis technique, employing the fishbone diagram (also called the Ishikawa diagram), allows individuals or groups to construct a visual depiction of the host of contributory factors and their relative subsets found through literature scans, local knowledge and data, and stakeholder voices closest to the problem. When introducing the CSA as new material to be learned, I emphasize that in professional practice the technique is by no means to be undertaken independently. Construction of the diagram is intended to be a collaborative endeavor. Myriad voices of relevant stakeholders, as well as other sources of information, should inform the diagram’s final rendering. Later, when we discuss ways to develop the “it takes a village” mindset for overcoming the various causal factors, it is important to remember to delegate: taken one by one, each “category” (or bone in the fish diagram) can be assigned to specific staff or a particular unit. These independent units could then, in turn, take on that particular component of the improvement initiative. Selection of those responsible for action should be strategic and consider the following questions: Who can get the most leverage? Who has the most social capital? Further, in the case of the PIP, causal factors may not be in the immediate control of the person completing the project. Careful consideration can reveal which causal factors are within grasp (and thereby best addressed during the project) and which ones are not, which challenges have the greatest potential for returns and which do not, and so on. As shown in Figure 1.2, the problem statement is written in measurable terms. Conversely, one may transpose the negatively phrased problem statement into a positively phrased aim statement focusing on an aspect of improvement. The categorical factors (represented by the larger bones in the fishbone diagram) can be considered by the improver in relation to leveraging access, social capital, and personal volition, so that through transposition a workable set of prioritized actions can be constructed to target these deterrents (i.e., causal factors).


Figure 1.2. Fishbone diagram illustrating causal factors for lack of motility. (Problem statement: “I have a sedentary lifestyle.” Cause categories include lack of motivation and lack of dedicated time, with contributing causes such as food and drinks, entertainment, daily schedule, and commitments.)
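For instructors who want students to manipulate the causal analysis rather than only draw it, the fishbone in Figure 1.2 can be captured as a small data structure mapping cause categories to their granular causes. The encoding below is illustrative only; in particular, the grouping of individual causes under the two labeled categories is an assumption read off the figure’s layout.

```python
# Minimal sketch: Figure 1.2's fishbone as a dictionary.
# Keys are the cause categories (the large "bones"); values are the granular
# root causes. The grouping shown here is an assumption from the figure.

problem_statement = "I have a sedentary lifestyle"

fishbone = {
    "Lack of motivation": ["food & drinks", "entertainment"],
    "Lack of dedicated time": ["daily schedule", "commitments"],
}

# A plain-text rendering, useful when delegating each "bone" to a
# specific person or unit, as discussed above:
print(f"Problem: {problem_statement}")
for category, causes in fishbone.items():
    print(f"  Category: {category}")
    for cause in causes:
        print(f"    - {cause}")
```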

In addition to undertaking the causal analysis, there are other improvement techniques that one might employ. For example, and also in an effort to see the system, one might use a system map or process map as a means to contextualize the problem. In either case, one illustrates the problem at hand as a node within a set of procedures, processes, workflows, and so on, so that targeted efforts can be made in response to particular bottlenecks. In identifying areas within a system that are underperforming or contributing to unwanted outcomes or variation, care can be taken to also attend to areas that lie outside of the system improvement project.

What Should Be My Aim?

At this stage in the PIP, a set of inquiry questions should be addressed. The questions comprising the improvement model are as follows:

1. What am I trying to accomplish?
2. What change could lead to improvement?
3. How would I know whether the changes led to improvement? (Langley et al., 2009, p. 5)


Earlier, in working toward articulating a problem statement, the first question from the model of improvement was addressed. In our case, the problem was a lack of motility (motility defined here in terms of a sustained heart rate of 120 bpm). Next, in creating the causal analysis illustration (i.e., the fishbone diagram), the constellation of causal factors was articulated. Our causal analysis mapped out several underlying factors, such as lack of motivation and lack of dedicated time, as partially responsible for my lapse of engagement in high-aerobic activity.

What Changes Might Lead to Improvement?

The next step in the improvement project is to take the causal factors and pinpoint corresponding actions that might address the hindrances standing in the way of improvement. When faced with answering the second question from the model of improvement—What change could lead to improvement?—one necessarily turns to the driver diagram. In creating the plan, or theory of improvement, a driver diagram is described as a means to “visually represent a group’s working theory of practice improvement [by creating] a common language, coordinat[ing] the effort among the many different individuals joined together in solving a shared problem” (Bryk et al., 2015, p. 199). This illustrative framework has several defining features: an aim statement, primary and secondary drivers, and associated change ideas (depicted in Figure 1.3). The aim statement is an articulation of the overarching goal and is expressed in a way that is both measurable and time-bound. The primary driver “represents a community’s hypothesis about the main areas of influence necessary to advance the improvement aim,” as shown by our primary drivers of time commitment and daily organization (Bryk et al., 2015, p. 199). These primary drivers are deconstructed into more granular elements, secondary drivers, that illustrate the system components activating the associated primary drivers, representing the “how” of the change process (Bryk et al., 2015). The most granular elements in the driver diagram are the change ideas, which represent the “alteration to a system or process that is to be tested through a PDSA cycle to examine its efficacy for improving some driver(s) in the working theory of improvement” (p. 199). In our case, in order to achieve sustained aerobic activity, my change ideas include the addition of brisk walking and exercise on an elliptical machine to my weekly routine.

Figure 1.3. Example of a driver diagram. (Aim: Engage in increased opportunities for obtaining an increased heart rate [20%] by the end of the summer term. Primary drivers: time commitment and daily organization. Secondary drivers: exercise time and revise daily schedule. Change ideas: walking to increase heart rate and elliptical to increase heart rate.)

The elements of the driver diagram allow one (or a group) to construct a visual platform that illustrates the pathway to one’s desired outcomes. Responding directly to root causes generated through the causal systems analysis, the driver diagram provides the framework that allows one to drill down, through primary and then secondary drivers, all the way to the respective change idea counterparts to capture the specific proactive strategy(ies) (i.e., theory of action) designed to achieve the aim. As mentioned, the driver diagram allows one to engage with answering the question What is the most reasoned change to try? A completed driver diagram allows one to realize another essential principle in improvement work—“We cannot improve at scale what we cannot measure” (Bryk et al., 2015, p. 14). Once the change ideas are produced from the driver diagram, our next challenge is to establish measures that gauge progress toward our sought-after processes and outcomes so that we may evaluate whether change has actually led to improvement.
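The same move works for the working theory itself: the driver diagram in Figure 1.3 can be sketched as a nested structure running from the aim down to the change ideas. The schema below is an illustrative assumption, not a standard improvement science format; the labels are taken from the figure.

```python
# Minimal sketch: Figure 1.3's driver diagram as a nested structure.
# Hierarchy: aim -> primary drivers -> secondary drivers -> change ideas.
# The schema is illustrative; the labels come from the figure.

driver_diagram = {
    "aim": ("Engage in increased opportunities for obtaining an increased "
            "heart rate [20%] by the end of the summer term"),
    "primary_drivers": {
        "Time Commitment": {
            "Exercise Time": ["Walking to increase heart rate"],
        },
        "Daily Organization": {
            "Revise Daily Schedule": ["Elliptical to increase heart rate"],
        },
    },
}

# Each change idea at the bottom of the hierarchy becomes a candidate
# for its own PDSA test cycle:
for primary, secondaries in driver_diagram["primary_drivers"].items():
    for secondary, ideas in secondaries.items():
        for idea in ideas:
            print(f"{primary} -> {secondary} -> test: {idea}")
```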


What Changes Might I Introduce That Would Lead to an Improvement?

High-leverage change ideas for improvement do not just spontaneously appear. Change ideas evolve through careful articulation of the problem, consideration of its root causes, and then tracing these causes onto a map of potential improvement responses (i.e., change interventions). In order to consider each of the chosen interventions in light of the question—What changes might lead to improvements?—we take each change idea and “test” that idea through a type of inquiry cycle frequently referred to as the plan-do-study-act (PDSA) cycle. The PDSA is characterized as a “pragmatic scientific method for iterative testing of changes in complex systems” and can be conceptualized as “mini-experiments in which observed outcomes are compared to predictions and discrepancies between the two become a major source of learning” (Bryk et al., 2015, p. 200).

The PDSA cycle, as its name implies, consists of four major phases of action: Plan, Do, Study, and Act. In each phase, the improver considers a unique aspect of the mini-experiment process. For example, in the initial Plan phase, one sets out to understand the change action, is able to describe the change being initiated, and can also explain the goal sought. In our case of a lack of motility, one inquiry cycle might focus on the introduction of brisk walking as a change idea. Therefore, in the Plan phase, I would articulate the change effort (introducing brisk walking) and the goal desired (increased sustained aerobic activity). As we pass into the next phase of the cycle, the Do phase, details and descriptions of what happened during the testing are generated. In the data that are collected, a range of responses is desired: not only are outcomes of the test recorded, but so are any obstacles in data collection or surprises that might have occurred during the testing phase. Moving through the PDSA cycle into the Study phase, results arising from the testing phase are analyzed. Measures previously articulated—outcome, driver, process, and balancing—are considered in light of the data collected, and how the measures matched, or did not match, one’s original prediction(s) is weighed. In the case of the PIP, recorded intervals of activity, particularly of brisk walking, have provided evidence of my sought-after aim of sustained, increased cardiac activity. Tallies show that at least five days out of the week, I engaged in at least 30 minutes of cardiovascular exercise with a sustained heart rate of 120 bpm. Last, the final phase of the cycle, the Act phase, requires a commitment to using the knowledge generated during the PDSA cycle to refine and revise one’s current thinking about the change agents and their efficacy. Modifications to original planning might be necessary as one moves into the next round of iterative inquiry. In our case, brisk walking led to increased high-intensity sustained aerobic activity. A secondary outcome, satisfaction, also increased: I feel that the almost-daily exercise I have committed to has resulted in both physical and mental benefit.

Those evaluating whether improvement efforts (i.e., change ideas) have contributed toward improvement have found utility in the PDSA cycle framework. This framework allows one to trace each change idea to ascertain trends through reflective and systematic analyses. Interestingly, the PDSA framework, such as in the case of the PIP, can be used as a framework for mini-experimental inquiry cycles or as a larger, more overarching conceptual framework—wherein each phase of the cycle (Plan, for instance) contributes to the entirety of the improvement project. For example, the Plan phase might include a wide variety of actions, such as considering (using the 5 Why’s technique) why the problem is indeed considered a problem. The process of consideration, while in the Plan phase, could also include systems and/or process mapping, so that the improvement initiative might be more targeted on the particular systemic element contributing to the unwanted variation in the outcome under consideration (i.e., the problem that exists). In contrast to using the PDSA framework as one unifying structure encompassing the entirety of the project, one might consider deconstructing this uber-PDSA into a series of PDSA cycles. In the latter case, each PDSA targets a specific change idea (originating from the driver diagram) for achieving the improvement aim. Further, considering the number of sources of data being collected during the project, one could also conceptualize focusing PDSA cycles on each data set. As one might infer, working with the PDSA framework can take on many constellations in improvement work.

How Will I Know If Changes Led to Improvement?

Specifying measures in improvement work is imperative. Measurement for improvement, also conceptualized as practical measurement, comprises four overarching categories, as follows:

• Driver measures—answer the question Is it working? and are measures that one may refer to prior to concluding the improvement project. Thought of as intermediary measures, driver measures allow for gauging whether you are “on the right track” during the initiative rather than waiting until the project is over (Hinnant-Crawford, 2020).

Driver measures—answer the question Is it working? and are measures that one may refer to prior to concluding the improvement project. Thought of as an intermediary measure, driver measures allow for gauging whether you are “on the right track” during the initiative rather than waiting until the project is over (Hinnant-Crawford, 2020).

• Process measures—answer the question How is it working? and gauge implementation efficacy, quality control, or other system components. An example of a process measure is the time interval between a student’s first inquiry into a particular college and the student’s receipt of the recruitment materials. In this instance, an improvement initiative focused on the recruitment and retention of students might focus on improving lagging intervals where elements or steps involved in the process might be enhanced.

• Outcome measures—answer the question Did it work? and provide a summative evaluation of the outputs of a system’s components. Typically, when gauging whether an initiative has indeed led to improvements, examples of outcome measures include test scores, retention rates, and the like.

• Balancing measures—answer the question Did it work as intended? and reflect the evaluative component that assesses the extent to which inadvertent or unintentional consequences might have occurred during the improvement project. How did the overall system respond to the introduction of new inputs (e.g., an instructional strategy, an administrative procedure, etc.)? Balancing measures assess system-wide performance; an example might include the behavioral referral rates of students who have undergone behavioral abatement training after concluding the program.
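To tie the inquiry cycle to its measures, one PDSA test of the brisk-walking change idea can be recorded with one field per measurement category. The record below is a hypothetical teaching illustration; the field names and example measures are assumptions, not a canonical PDSA template.

```python
# Minimal sketch: one PDSA cycle for the brisk-walking change idea,
# annotated with the four categories of practical measurement.
# Field names and values are illustrative, not canonical.

pdsa_cycle = {
    "plan": "Introduce brisk walking to increase sustained aerobic activity.",
    "do": "Walked briskly most mornings; logged duration, heart rate, "
          "obstacles, and surprises.",
    "study": {
        # Driver measure: is it working (are we on the right track)?
        "driver": "Walking sessions scheduled and actually kept each week",
        # Process measure: how is it working (implementation quality)?
        "process": "Minutes per session sustained at or above 120 bpm",
        # Outcome measure: did it work (summative result)?
        "outcome": "Weeks meeting the aim of 4+ sessions of 20+ minutes",
        # Balancing measure: did it work as intended (unintended effects)?
        "balancing": "Sleep, soreness, and time taken from other commitments",
    },
    "act": "Keep brisk walking; test the elliptical in the next cycle.",
}

for phase in ("plan", "do", "study", "act"):
    print(phase.upper(), "->", pdsa_cycle[phase])
```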

Conclusion

Connecting prior knowledge with new learning increases the extent to which novel information is encoded. Using and applying principles and frameworks in personally relevant improvement projects is a pedagogy that allows for internalizing the content comprising the field of improvement science. Learning a new methodology—with its unique principles and processes—can be connected to the vast store of knowledge already residing in learners’ long-term memories. Associating new information with that which was previously learned allows neural connections to be forged and facilitates a deeper level of encoding. The personal improvement project is an exercise that affords education leaders an opportunity to integrate personal interest and motivation into their learning experience of connecting new concepts. Past experience with conducting the PIP activity with doctoral-level students in educational leadership programs has resulted in students’ enhanced drive to improve current personal circumstances. More importantly, the project provides the means necessary for opening the way for transferring newly developing improvement science skills from the personal plane to the professional one in order to address pressing problems in educational practice.


References

American Heart Association. (n.d.). Recommendations for physical activity for adults and kids. https://www.heart.org/en/healthy-living/fitness/fitness-basics/aha-recs-for-physical-activity-in-adults

Archbald, D. (2014). The GAPPSI method: Problem-solving, planning and communicating. NCPEA Publications.

Bryk, A., Gomez, L., Grunow, A., & LeMahieu, P. (2015). Learning to improve: How America’s schools can get better at getting better. Harvard Education Press.

Hinnant-Crawford, B. N. (2020). Improvement science in education: A primer. Myers Education Press.

Langley, G., Moen, R., Nolan, K., Nolan, T., Norman, C., & Provost, L. (2009). The improvement guide: A practical approach to enhancing organizational performance (2nd ed.). Jossey-Bass.

Perry, J. A., Zambo, D., & Crow, R. (2020). The improvement science dissertation in practice: A guide for faculty, committee members, and their students. Myers Education Press.

Sternberg, R., & Williams, W. (2002). Educational psychology. Allyn and Bacon.

Wertsch, J., & Stone, C. (1999). The concept of internalization in Vygotsky’s account of the genesis of higher mental functions. Lev Vygotsky: Critical Assessments, 1, 363–380.

chapter two

Who Is Involved? Who Is Impacted?
Teaching Improvement Science for Educational Justice

BRANDI NICOLE HINNANT-CRAWFORD
Western Carolina University

RICARDO NAZARIO Y COLÓN
Western Carolina University

TACQUICE WIGGAN DAVIS
Western Carolina University

Abstract

In this chapter, we show current and future pedagogues how to integrate improvement science and educational justice as instructional outcomes. Grounded in Paulo Freire’s notion of critical pedagogy, we argue improvement science can be a critical pragmatic approach to inquiry, or critical praxis, if taught in a way that prioritizes justice and liberation in both processes and outcomes. We introduce the four-part Teaching Improvement Science for Educational Justice (TISEJ) Framework, which includes explicit instruction, anticipation, preparation, and facilitation. The chapter provides examples of materials, activities, and assignments in each phase of the framework.

[We] must perceive the reality of oppression not as a closed world for which there is no exit, but as a limiting situation which [we] can transform.
—Paulo Freire, Pedagogy of the Oppressed


What Is the Relationship Between Social Justice and Improvement? I (Brandi Hinnant-Crawford) was in a session at a conference where a participant asked how a leadership program can use improvement science when their emphasis is social justice. On first hearing the question, I was puzzled, as in my mind, the two fit together—without much effort. Justice, which in the realm of education we define as equitable access to opportunities to learn, should be the goal of every improvement. But this question made it clear to me proponents of improvement science have not made this compatibility evident. Policy scholars and proponents of equity (Horsford et al., 2019), explaining improvement science and similar forms of datadriven management, say that, “at its best, it promoted a form of organizational learning and data analysis in business productivity. But its use in education has largely created a culture of quantification and contrived collegiality” (p. 199). The critiques of improvement science within the fields of educational leadership and policy are warranted. Improvement scientists’ context-free descriptions of the method have made it difficult to see the relationship between improvement science and educational justice. Justice in improvement science deals with two aspects: the problem addressed and the process employed. In Improvement Science in Education: A Primer, I (Hinnant-Crawford, 2020a) explain: In order to improve with equity in mind, you have to think about who is involved in the improvement (whose voices have been considered in the definition of the problem and the design of the solution) and who is impacted by the improvement [original emphasis]. (p. 20)

These two questions—Who is involved? and Who is impacted?—should echo throughout all educators' improvement journeys. Who is involved speaks to equity and inclusion in the improvement process. Who is impacted reminds improvement scientists to consider whose needs will be served and whose needs are centered when problems are defined.


To teach improvement science as a methodological tool for educational justice, one has to believe that inquiry methods can be liberatory and that improvement science can¹ be critical praxis. This chapter will explore how to ensure inclusion in the process as well as how to center justice when defining the problems to address. We will also offer guidance on how to teach improvement science in a manner that does not neglect justice in the process of improvement or in the definition of problems of practice.

In our guidance, we make several assumptions about who you are and what you are teaching. We assume readers are preparing to teach improvement science

• to preservice or in-service education professionals (teachers/professors, coaches, principals, deans, directors, continuous improvement specialists, etc.);
• to adults with some experience (even if minimal) within elementary, secondary, or postsecondary educational institutions; and
• to individuals who have or are developing dispositions committed to justice and student-centered practice.

We will use the terms class and students as proxies for all of these diverse teaching and learning contexts. Instructor will be used synonymously with coach. If the reader does not fall into one of these categories, this chapter will illustrate the symbiotic nature between improvement science and justice work. The pedagogical aspects can be used as guidance for greater self-edification.

¹ Improvement science can be critical praxis, which is explained in detail in the chapter. This does not imply every incidence of improvement science is critical praxis.

Who Is Involved?—Improvement Science as Critical Praxis

Educational (and psychological) research has an ugly history of generating data that justifies oppression and discrimination (Gould, 1996; Guthrie, 2004; Hinnant-Crawford, 2016). Our educational system, founded upon understanding derived from such research, has continued to play a role in stratifying society based on race, class, gender, and other immutable characteristics (Bourdieu & Passeron, 1990). The history of uncritical positivistic analyses that illustrate associations between student characteristics and achievement has led to deficit ideologies pervasive in educational thought. However, research (even quantitative research) does not have to be the enemy of justice (Hood, 2001; Tillman, 2002). Synthesizing context specificity with scientific inquiry, improvement science is a methodology that accelerates improvement by defining problems, developing changes, testing the efficacy of changes, and refining them rapidly. Unlike traditional research methods, which can be conducted by an individual working alone, improvement science requires scholar-practitioners, especially justice-oriented scholar-practitioners, to work with various stakeholders to address problems of practice within their organizations. In this work, scholar-practitioners must be cognizant of power dynamics, structures that diminish the contributions of the marginalized, and the tendency to blame clients/students/employees for systemic problems.

For the improvement science process to be a form of critical praxis, embodying justice requires both action and reflection. Far too often, those engaged in the work have been accused of allowing situations where "data and spreadsheets too often stand in for genuine dialogue and inquiry" (Horsford et al., 2019, p. 199). The strength of an improvement initiative is not in the data it generates but in the reflection that data catalyzes; the richness of that reflection is directly a result of who is involved. The ontological basis of improvement science is the testing (action) and revision (reflection) of theories. In a similar vein, Freire (2008) describes praxis as "the reflection and action which truly transform reality" and as "the source of knowledge and creation" (pp. 100–101). Proponents, coaches, and teachers of improvement science must view this framework as more than a sequence of steps in order to be transformative. They must view improvement work as praxis. Proponents of improvement science must examine the principles of improvement as guidance for action and reflection and insist that reflection be rooted in justice.


Improvement science in education is usually guided by three questions, outlined in Langley et al.'s (2009) model for improvement:

1. What am I trying to accomplish?
2. How will I know a change is an improvement?
3. What change might I introduce, and why?

In answering these questions, improvement scientists define problems of practice, develop changes, and test the efficacy of those changes. To center justice in these processes, those engaging in improvement work must interrogate the axiological motivations behind the initial question What am I trying to accomplish? The logical follow-up questions are Why am I trying to accomplish this? and Who benefits from this being accomplished? These reflective questions will begin to answer whether this improvement journey is about efficiency, competition, compliance, or liberation. Similarly, the second question, How will I know a change is an improvement? must be countered with Who defines what improvement is? and Whose values and priorities are reflected in how improvement is defined? Are there times when goals are externally defined? Absolutely. However, even with external mandates and compliance-related continuous improvement, educators can use those opportunities to advance an equity agenda. The final question, What change might I introduce, and why? must also examine closely the who: Whose voices and ideas are reflected in the creation of the change? Whose labor is required in the implementation of the change? And again, Who benefits from the change?

In their groundbreaking text, Learning to Improve: How America's Schools Can Get Better at Getting Better, Bryk et al. (2015) delineate six principles to drive improvement work:

1. Be user-centered and problem-specific
2. Pay attention to variation
3. See the system that produces the results
4. You can't improve what you cannot measure
5. Use disciplined inquiry to drive improvement
6. Accelerate improvement through the use of networks


Each of these principles can be employed while considering who is involved and who is impacted. When collecting data and paying attention to variation, justice-oriented scholar-practitioners ask How accessible are the data? or What explanations must be included, so everyone understands the data? and Who bears the burden of data collection and analysis? Similarly, when developing practical measures, improvement scientists who care about equity in the process ask, Whose perspectives are represented in the theory of improvement these measures are operationalizing? and Whose perspectives are we missing? For a more comprehensive list of reflection questions related to each principle, see Improvement Science in Education: A Primer (Hinnant-Crawford, 2020a). Two of the principles outlined in Learning to Improve—(a) being user-centered and problem-specific and (b) seeing the system that produces the results—are directly tied to having a justice-oriented improvement process. When teaching improvement science, the ties between these two principles and justice must be made explicit.

User-Centered and Problem-Specific

At its most basic level, being user-centered means respecting the people who actually do the work by seeking to understand the problems they confront.
—Bryk et al., Learning to Improve

The first principle of improvement science, as defined by Bryk et al. (2015), is to be user-centered and problem-specific. Being user-centered is essential to an inclusive and equitable process. But how do improvement scientists avoid the "contrived collegiality" in the critique described above? One key is humility. As Freire (2008) has said, "Dialogue cannot exist without humility. The naming of the world, through which people constantly re-create the world, cannot be an act of arrogance" (p. 90). Far too often, the stakeholders invited to participate are those most like the ones doing the inviting. They may bring slightly different perspectives, but educators avoid inviting people who will really challenge perspectives. As such, what develops in the definition of problems and the creation of solutions is uncritical and more of the same.

Bryk et al. (2015) define being user-centered as "examining the problem from the point of view of the user—the person experiencing it firsthand" (p. 13). When being user-centered, it is essential to include the voices of those closest to the problem: teachers, instructors, adjunct instructors, students, parents, and advisors. However, once those individuals are at the table, one must demonstrate there is truly space for their voice and that their contributions are not symbolic. As Freire (2008) has held, the pedagogy of the oppressed is one that "must be forged with, not for, the oppressed" (p. 48); improvement must be defined, designed, and delivered by those for whom the problem is intimate and immediate.

To be truly user-centered and justice-oriented in the improvement process, educators must approach improvement science like those employing emancipatory research practices in disability studies. Operating under the mantra "nothing about us without us," Gabel (2005) explains that emancipatory research

must proceed with participation and leadership from disabled people to the greatest extent possible. Research agendas must be driven by the concerns defined by disabled people. It is assumed that when this is followed, disabled people's problems of access and liberation are more likely to be solved; emancipation is possible because disabled people are the ones who best know the issues and problems and can best frame the questions that guide research and the analysis of data gathered through research. (p. 9)

Gabel’s definition of emancipatory research illustrates what it means to be user-centered throughout the improvement process. Scholars have provided frameworks and tools to aid in user-centered problem definition in both PK–12 and higher education settings, notably Terrence Green, Darrius Stanley, and Damon Williams. Green (2017) provides a framework for a community-based equity audit, which synthesizes equity audits, community audits, community-based research, and is grounded in the Freirean notion of dialogue. Green says such a process “should be viewed as an approach

24

Teaching Improvement Science in Educational Leadership

to address adaptive and systemic problems that require time, trust, experimentation, iteration, and commitment to shift from deficit to asset-based perspectives about students, families, and communities” (p. 5). Similarly, Stanley (2020) introduces a process for user-centeredness focusing on school communities that requires educators to learn about the community, engage with the community, and partner with the community. At the collegiate level, Williams et al. (2005) discuss at length the necessity for user-centeredness in defining problems and in developing strategies to address them. While they detail the requirement for senior leadership’s direct involvement with inclusive excellence, they underscore the necessity for using their tool, the inclusive excellence scorecard, to ensure inclusive excellence permeates the entire institution. They explain through the process known as cascading: A scorecard decentralizes the change vision and provides everyone with the opportunity to contribute to the vision at multiple levels of the institution. By having each unit develop a portion of the scorecard from its own vantage point and across the four areas, the change effort is more quickly institutionalized into the core values, beliefs, and processes of the campus. (p. 28)

Although this speaks about being user-centered within the institution, institutions of higher education also have a responsibility to be responsive to the needs of the surrounding community. Community colleges, particularly in workforce development, respond swiftly to changing workforce needs within the community. But as universities share space and resources with their surrounding communities, and as their students work, shop, and play within the broader community, it is critical to understand there will also be shared problems—and the university cannot address those problems without or for the community.

See the System

I began to use the phrase in my work "white supremacist capitalist patriarchy" because I wanted to have some language that would actually remind us continually of the interlocking systems of domination that define our reality and not to just have one thing be like, you know, gender is the important issue, race is the important issue, but for me the use of that particular jargonistic phrase was a way, a sort of short cut way of saying all of these things actually are functioning simultaneously.
—bell hooks, Bell Hooks: Cultural Criticism and Transformation

Every system is perfectly designed to get the results it gets.
—The Central Law of Improvement, The Improvement Guide

Seeing the system that produces the results is the third improvement science principle. It is undergirded by the central law of improvement, which states that every system is perfectly designed to get the results it gets. Many critics of continuous improvement view improvement as a structural-functionalist tool that primarily improves the efficiency of a system designed to perpetuate the status quo. In such perspectives, improvers tinker without addressing (or dismantling) the systems that lead to current outcomes. Improvement science that seeks to be critical praxis requires an in-depth examination of the systems producing the results. As Freire (2008) explains, "To surmount the situation of oppression [or the problem of practice], people must first critically recognize its causes, so that through transforming action, they can create a new situation" (p. 47). One cannot identify causes if one cannot see the system—in all its complexity.

Seeing the system is akin to Freire's (2008) conception of conscientização—which is defined by his translator Donald Macedo as "learning to perceive social, political, and economic contradictions, and to take action against oppressive elements of reality" (p. 34). It is often difficult to see the many underlying factors at play that lead to disparate outcomes. Improvement scientists must be able to identify intersecting systems and processes within their organization and the broader society. Within the organization, it is critical to be user-centered and to have a variety of perspectives on the problem in order to see the system. For example, if a school administrator is trying to determine why English language learners (ELLs) are showing differential growth from non-ELLs, it is not enough to involve core teachers. In seeing the system, they should include the voices of ELL students and parents, language acquisition teachers, and perhaps experts on the assessments that provide data on growth. In assembling this cast, to have a justice-centric process, the administrator convening this team (or multiple teams) will pay close attention to unequal power dynamics and will create segmented groups when necessary to ensure no one's voice is stifled (i.e., separate groups for teachers and students).

Seeing the system in the organization may be more intuitive than seeing the broader systems at play that lead to inequality. As quoted above, bell hooks (hooks & Jhally, 1997) explains she uses the language "white supremacist capitalist patriarchy" so she never forgets that all of these forces are working together simultaneously. Part of the power of such forces is that they are often concealed. To cultivate improvement scientists who see the broader systems at play in the work, it is essential to expose them to literature that delineates multiple components and describes the interaction of systems. To begin to challenge innate "solutionitis," students must be exposed to scholarship that delineates the complexity of education's problems—for example, readings like the classic "From the Achievement Gap to the Education Debt: Understanding Achievement in U.S. Schools" by Gloria Ladson-Billings (2006), where she examines historical, economic, sociopolitical, and moral influences on differential achievement in U.S. schools. Similarly, scholars such as Linda Darling-Hammond, Rich Milner, Terah Chambers, Tressie McMillan Cottom, J. Luke Wood, Carlos Nevarez, and Terrell Strayhorn (among countless others) all illustrate the complexity that leads to predictable variance in educational outcomes, from achievement on standardized tests to defaulting on student loans. In addition to contemporary analyses, educational historians elucidate how these systems have developed and morphed over time; the value of critically examining educational history cannot be overstated when designing potential solutions. Improvement scientists will fail in the educational sector if their work is ahistorical. History reveals the baseline of the system.


Who Is Impacted? Defining Problems That Center Justice

I consider the fundamental theme of our epoch to be that of domination—which implies its opposite, the theme of liberation, as the objective to be achieved.
—Paulo Freire, Pedagogy of the Oppressed

Most improvement science texts focus on the process of improvement to illustrate its applicability to a wide array of problems. In doing so, they do not address the latent issues that can arise in the problem definition process that are directly related to issues of power, positionality, and participation. But educators must reconceptualize improvement so that the goal of improvement efforts is always justice (or liberation). After teaching improvement science for several years, we have found that, without priming, students of improvement science will define root causes based on deficit understandings of students and the communities from which they come. Deficiency approach (Boykin, 1984), deficit theories (Nieto, 2000), deficit thinking (Valencia, 1997), deficit ideology (Gorski, 2018), deficit cognitive frame (Bensimon, 2005), and deficit mindedness (McNair et al., 2020) are some of the many descriptors scholars of elementary, secondary, and postsecondary education have used to describe perspectives that view students and communities as the source of inequitable outcomes instead of the systems producing the results. As Valencia (1997) details:

The deficit thinking model, at its core, is an endogenous theory—positing that the student who fails in school does so because of internal deficits or deficiencies. Such deficits manifest, it is alleged, in limited intellectual abilities, linguistic shortcomings, lack of motivation to learn, and immoral behavior. The proposed transmitters of these deficits vary according to the intellectual and scholarly climate of the times. We shall see that genetics, culture and class, and familial socialization have all been postulated as the sources of alleged deficits expressed by the individual student who experiences school failure. (p. 2)


Deficit understandings propelled "solutions" to society's problems ranging from ones as insidious as the eugenics movement to the more subtle (and ostensibly less insidious) tracking in schools. It is important to identify and recognize how the purported source of the deficits has shifted from genetics to culture to socialization. Nieto (2000) explains how a great deal of research in the latter half of the 20th century focused on the relationship between school failure and the "inadequacy" of students' homes (p. 231). Similarly, Boykin (1984) explains that the deficiency approach leads educators to "'operate' on the child to correct the deficiencies" instead of exploring the structures and systems that lead to disparate outcomes (p. 465). To aid students in defining problems and developing changes that will transform institutions that perpetuate injustice in its many forms, we propose a four-part framework, titled "Teaching Improvement Science for Educational Justice" (TISEJ), that includes explicit instruction, anticipation, preparation, and facilitation, and that can be employed in any educational course or program using improvement science as a methodology.

[Figure 2.1. Teaching improvement science framework. Four phases: Explicit Instruction (structural analysis of inequality; critical self-reflection); Anticipation (brainstorm deficit root causes with classmates); Preparation (identify research/data to refute the anticipated deficit root causes; identify data, scholarly or institutional, to inform problem definition); Facilitation (learn by doing; critical reflection on the problem definition process).]


TISEJ Part I: Explicit Instruction

Explicit instruction is essential to preparing improvement scientists who can respond to and redirect groups that are defining problems based on deficit understandings of individuals or their communities. It is essential that students of improvement science, who are preparing to lead improvement initiatives in their work contexts, begin with a baseline understanding of the structural determinants of inequality in our society. If students have received this grounding in prerequisite courses, instructors may only need to review. However, if the inquiry is introduced ahead of some of the foundational courses, the instructor must realize some foundation is required for informed inquiry.

The term explicit instruction refers to a particular pedagogical approach employed when students "learn content they could not learn on their own, or through the use of less guided and supportive methods (e.g., discovery learning)" (Hughes et al., 2017, p. 141). Hegemonic structures are often invisible to those without a critical eye; therefore, the average student may be unaware of their existence. To make the student aware, the instructor must provide content that clarifies how "isms" in our society work to create a perfect storm for inequities. For in truth, "equity-mindedness does not come naturally. It requires a knowledge base, and it takes lots of practice" (McNair et al., 2020, p. 108). Content and critical reflection are the essential parts of the explicit instruction phase.

If the instructor's course is geared toward student affairs professionals or aspiring school principals, it is easier to narrow the pool of content to focus on a particular educational sector. If the class is diverse, it is necessary to provide foundational and context-specific content. For example, a syllabus designed to explore the structural nature of race and its implications for educational inequities may begin with sociologist Eduardo Bonilla-Silva's Racism Without Racists (2006) or the article Rethinking Racism: Toward a Structural Interpretation (1997). The same syllabus would also have context-specific literature on race by Shaun Harper for the higher education professionals and by Bettina Love for the elementary and secondary education professionals. Part of the explicitness of this instruction is making the link between inequities in broader society and educational institutions. It is important to remember when designing courses that not all content has to be reading. Instructors could provide introductory overviews of certain structural inequalities using videos from Crash Course Sociology, TED Talks, and vetted podcasts. Even as instructors decide on their content, they should ensure it reflects a diversity of thought. The voices should not all be those of White, cisgendered, heterosexual males. (This means the instructor must be reading diverse voices in preparation for class and asking, Whose voice is absent from my syllabus?)

Content is only half of the explicit instruction phase of this framework. In addition to delving into rich and illuminating texts, the instructor must guide future improvement scientists through critical reflection. It is not enough to know that systems and isms exist; improvement scientists must examine how they fit in those systems and how their practice (even unintentional practice) may have contributed to the marginalization of some to the advantage of others. They also have to explore how systems of domination may have benefited them—which is difficult when people like to view their individual success as meritorious. In his book Culturally Responsive School Leadership, Muhammad Khalifa (2018) discusses the necessity for critical self-reflection as a requisite step, so leaders (or in this case improvement scientists) can "shift from personal to institutional reflection" (p. 64). Khalifa explains that "critical self-reflection is an iterative process that involves personal and structural reflections in a constant state of change, combating the ever-morphing systems of oppression that our students face" (p. 63). This means the instructor must introduce critical self-reflection, through critical autobiographies or other means, while emphasizing that personal and institutional reflection must become a part of the justice-oriented scholar-practitioner's embodied practice.

It is the instructor's responsibility to make it clear that the inequities in education do not take place in a vacuum. For improvement science to be critical praxis and to ensure nondeficit perspectives for problems of practice, instructors of improvement science must explore racism, classism, sexism, ableism, monolingualism, and heteronormativity as they play out in society and in educational institutions. We know that this seems big—it is. This facet of the framework, explicit instruction, may seem like a class on its own. And depending on the foundational courses in the program (or the design of in-service development), this may be covered elsewhere and only needs to be reinforced in the improvement science course. However, if the improvement science course is early in the curriculum, it may be beneficial to pair it with a corequisite course that focuses on historical or sociological foundations. If taught in a separate space, it is still necessary for the improvement science instructor to clarify how the content in the other course informs the process of improvement.

TISEJ Part II: Anticipation

Adult learning frameworks, such as andragogy, tell educators it is critical to capitalize on the previous knowledge and experiences of our students (Merriam, 2001). Most educators encountering improvement science as a method of inquiry in graduate courses or through in-service professional development have had experiences with, and witnessed firsthand, the deficit discourse around certain populations. In the initial explicit instruction phase, it is essential to have them reflect on those experiences, and even come to terms with whether or not they have perpetuated such understandings. While in the instruction phase the goal is to ensure their understanding of the structural determinants of inequality and the interacting systems that lead to the current results, in the anticipation stage students must go back and remember what they have heard or said that may have led to identifying improvement initiatives that focus on changing people instead of systems. Instructors must have students name a proposed problem of practice, so the activities that follow are concrete and not theoretical.

When completed as intended, the anticipation phase relies not only on the knowledge of the individual student but also on the knowledge of the collective. After students have named a problem of practice within their organization, have them work in triads or quads developing fishbone diagrams (or other root cause analysis visuals). Have them complete the first step with pencil or pen. Figures 2.2 and 2.3 depict preliminary fishbone diagrams, on parental involvement in an elementary school and on retention of first-generation students in an institution of higher education, respectively, that students may sketch during this brainstorm. After the initial sketch, have them go over their identified root causes and highlight those that are informed by deficit worldviews.

[Figure 2.2. Preliminary fishbone diagram on parental involvement. Head: low parental involvement; bones: opportunities for involvement, value of education, communication, priorities.]

[Figure 2.3. Preliminary fishbone diagram on retention. Head: low retention rates of first-generation students; bones: lack of family support, underprepared, underutilization of campus resources, motivation, financial struggles.]


Finally, have students make a list of where deficit perspectives arose (see Figures 2.4 and 2.5); this list can be the basis of their attempt to preemptively address deficit root causes that will arise when they do this within their own organization.

[Figure 2.4. Annotated fishbone diagram on parental involvement. Same diagram as Figure 2.2, with the value of education and priorities causes flagged as deficit perspectives.]

[Figure 2.5. Annotated fishbone diagram on retention. Same diagram as Figure 2.3, with the causes rooted in deficit views of students (lack of family support, underprepared, underutilization of campus resources, motivation) flagged as deficit perspectives.]

TISEJ Part III: Preparation

Once a student has identified probable deficit ideas that will arise, they can begin to prepare to redirect those ideas. At this point in the framework, agency has completely shifted from the instructor to the student. There is no way the instructor can provide guidance to prepare for every problem of practice in the class. This is where students have to engage in their own research to prepare to lead their improvement teams.

In our classes, we have a formal preparation assignment. In the master of school administration research course, students are to convene a group to engage in collective problem definition. Before that convening, they must complete an assignment like the "Preparation for Collective Problem Definition" assignment taken from a syllabus by the author:

Leadership requires us to include stakeholders in problem-solving, as together, we have a more holistic understanding of the problems that plague our schools. Stakeholders can include administrators, teachers, students, parents, and/or other community members. However, as we talk about the pervasiveness of deficit ideology, we have to recognize any stakeholder may bring deficit views of children, their families, or educators to the table. Before jumping into a collective problem-solving activity, do a little background research on the problem and the deficit perceptions that may enter a problem definition conversation. For example, if your anticipated problem concerns seventh-grade boys' achievement, be sure you are aware of the stereotypes stakeholders may have about seventh-grade boys before beginning the discussion. Prior to the meeting, you should ask yourself, "What data do we need to understand this problem?" Consider doing an equity audit, examining the outcomes of seventh-grade boys in multiple areas (math, reading, discipline, extracurriculars, SPED, gifted), noting whether the boys are over- or underrepresented, and offering guidance; or empathy interviews, where you sit down with seventh-grade boys and ask them about their perspectives and experiences. Create a presentation (no more than 10 slides) or handout (infographic) with key information about your problem that you can use at your collective problem definition meeting. You may want to include myths versus facts, if applicable. (Hinnant-Crawford, EDRS 602 for MSA Students, Syllabus)

The assignment requires them to become intimately familiar with the discourse around the problem of practice they anticipate choosing, as well as the data indicating the problem is indeed a problem in their organization. The deliverable—the one-pager or short presentation—is an artifact they can use when the team is convened to define the problem of practice. This work is preemptive. A student may begin working on absenteeism, and once the team has been convened, they may move in a different direction. That is okay. This process shows them the type of information they should have on the table to guide the team's conversation away from root causes informed by deficit perspectives.

When they first receive this assignment, students are jarred. Providing examples of what preparation looks like is helpful. Instructors may begin by having them read "Yeah But: Common Rebuttals" in Is Everyone Really Equal? An Introduction to Key Concepts in Social Justice Education (Sensoy & DiAngelo, 2017). This reading gives them examples of counterarguments individuals can use in difficult justice-centered conversations. As instructors prepare new improvement scientists to facilitate this work, they must stress that initial meetings are opportunities not only to shift the culture to one of continuous improvement but also to shift the culture to a justice-oriented one.

The preparation requires the scholar-practitioner to be a scholar. One should approach the initial meeting (or meetings) as a time to disrupt deficit perspectives by teaching about deficit ideology, providing context-specific examples, refuting deficit perspectives with sound scholarship, and grounding the conversation in an examination of outcomes and practices within the organization. As they orient their teams to the task, students must plan to begin by teaching what it means to approach the problem from a deficit worldview. This cannot be a theoretical lecture. In defining what a deficit perspective is, students must give examples specific to the problem of practice. For instance, the student working on parental involvement may begin by putting on the table common notions held about parents of color and parents from low-income communities. "These parents don't care" and "These families just don't value education" are two commonly held notions that have been widely dispelled in the research literature (Chavkin, 1989; Ladson-Billings, 2007; Lightfoot, 2004).

After dispelling the myths, the student may use the work of Susan Auerbach and/or Annette Lareau to illustrate how narrowly educators usually conceive parental involvement and barriers to parental involvement. Because lists of myths versus facts are easy to digest, a one-pager or infographic illustrating common conceptions and their rebuttals is a helpful handout to create during the preparation phase. Lastly, they should gather evidence (data) from their own institution's parental involvement indicators and derive discussion questions that will prompt reflection on that data. After this, the student may plan to lead the team through the 5 Whys or the development of a fishbone diagram to begin problem definition.

The student dealing with first-generation retention should use a similar tactic when planning for their convening, which begins with defining deficit approaches, orienting their team to the discourse on first-generation retention, and examining data from their institution. This student may plan to begin their meeting with the TEDxCambridge talk by Anthony Jack (2019), On Diversity: Access Ain't Inclusion. The voice of a first-generation student who is an expert on disadvantaged students in collegiate settings is a powerful way to begin such a discussion. This student could also choose to do an activity similar to the one described in Macias (2013), where team members call out words or descriptors associated with this demographic and then code them collectively as positive or negative, to get a sense of how the demographic is consciously (or unconsciously) viewed among the team. The video and activity "prime the pump" for a greater discussion of the research surrounding the success of first-generation students. The student may then plan to lead the team through the discourse on first-generation students with a series of slides, where they would make a point to show how the theories have morphed from trying to identify and fix deficiencies in the students to focusing on institutional barriers to success. At the culmination of the presentation, the team should understand the following:

Instead of the unified, holistic, and supportive community advertised by college admissions materials and often reflected in dominant narratives about college life, low-income, first-generation college students can find themselves in a highly privileged and unfamiliar space that is simultaneously friendly and hostile, expected and unexpected, empowering and disempowering. (Means & Pyne, 2017, p. 907)

Finally, the student would lead their team in reviewing retention rates, persistence rates, participation rates, and achievement indicators for first-generation students in their college or university. This may be completed using an interactive dashboard created by the student (or available at the institution) or handouts the student has created. The key to the preparation phase is to never go into a meeting without a plan. The planning allows students to identify key information the team needs to know before they engage in defining the problem of practice and to set parameters for the discussion.

TISEJ Part IV: Facilitation

Facilitation is the final piece of the framework. When instructors can have students role-play or present to each other, this is where the students learn by doing. Where the last phase, preparation, emphasized the scholar, this phase emphasizes the practitioner. Instructors may want to share common guidance for facilitating groups, such as that given in the text Difficult Conversations: How to Discuss What Matters Most (Stone et al., 2010). Students must recognize that in some instances they will be challenging people's preconceptions, and that may produce strong reactions. Nevertheless, to move from theory to practice, instructors must require students (or in-service educators) to convene improvement teams and facilitate the problem definition process, then use the classroom as a space to reflect on what happened.

In our classes, we require students to bring artifacts from their convenings. These artifacts can include photos of diagrams (fishbone, yes/no trees, SWOT analysis, etc.) or activities completed as well as notes of what happened (such as meeting minutes). The artifacts should be shared in small or large groups to facilitate discussion. Using document cameras or uploading to shared drives or discussion boards where classmates can view the artifacts is essential. Each student should be given an opportunity to share what happened in the convening before being asked questions. Critical reflection on the process is not a "gotcha!" activity, but an opportunity to learn where the process can be improved going forward. As the classroom becomes a designated reflective space, simple yet powerful questions can include (but are not limited to) the following:

• Whose voice was included in this process?
  – Whose voice was absent?
• Whose priorities were reflected in defining the problem?
• Whose data was used?
• What data was considered most important? The most persuasive?
• What went well?
• What was unexpected?
• Were there strong reactions when long-held beliefs were challenged?
• What would you do differently?
• What information was most important to the session?
  – What information did you need but not have?

In this facilitation phase, as students share with each other the strengths and weaknesses of their approach, there is a unique opportunity to build a community of justice-oriented scholar-practitioners. As students answer these and other questions, resources for convening groups and having hard conversations may be shared. If instructors of improvement science carefully move their students through the four phases of explicit instruction, anticipation, preparation, and facilitation, students of improvement science are more likely to define problems of practice as rooted in organizational systems and practices. They will seek to transform organizations in ways that lead to more just outcomes.


Teaching Improvement Science for Educational Justice

When we teach our students about research epistemologies, we teach about postpositivism, constructivism, and emancipatory and pragmatic paradigms. It is easy to see how to locate improvement science within the pragmatic discourse. But we also tell our students their work does not have to fall solely in one category—and neither does improvement science. Improvement science can be critical praxis: action and reflection designed to bring about a new and just reality. Proponents, coaches, and instructors of improvement science cannot assume their students will see the links between improvement science and justice. It is incumbent upon those who want to employ improvement science as a method for critical praxis to make the relationship explicit. If justice is not the goal—what is the purpose of improvement?

Although many programs have dedicated courses to issues of justice—be they courses on leadership for social justice or multicultural education—we know that students need to see justice integrated throughout the coursework. We cannot examine methods as distinct from our efforts to ensure a more just society. As we think about research epistemologies, the history of marginalization, and the role research has played in the reification of and justification for oppression, and as proponents of inquiry techniques new to education, we must be intentional, so students do not become another tool that upholds the status quo.

References

Bensimon, E. M. (2005). Closing the achievement gap in higher education: An organizational learning perspective. New Directions for Higher Education, 2005(131), 99–111.
Bonilla-Silva, E. (1997). Rethinking racism: Toward a structural interpretation. American Sociological Review, 62(3), 465–480.
Bonilla-Silva, E. (2006). Racism without racists: Color-blind racism and the persistence of racial inequality in the United States. Rowman & Littlefield Publishers.
Bourdieu, P., & Passeron, J. C. (1990). Reproduction in education, society, and culture. Sage.


Boykin, A. W. (1984). Reading achievement and the social-cultural frame of reference of Afro-American children. The Journal of Negro Education, 53(4), 464–473.
Bryk, A. S., Gomez, L. M., Grunow, A., & LeMahieu, P. G. (2015). Learning to improve: How America's schools can get better at getting better. Harvard Education Press.
Chavkin, N. F. (1989). Debunking the myth about minority parents. Educational Horizons, 67(4), 119–123.
Freire, P. (2008). Pedagogy of the oppressed. Bloomsbury USA.
Gabel, S. L. (Ed.). (2005). Disability studies in education: Readings in theory and method (Vol. 3). Peter Lang.
Gorski, P. C. (2018). Reaching and teaching students in poverty: Strategies for erasing the opportunity gap. Teachers College Press.
Gould, S. J. (1996). The mismeasure of man. W. W. Norton and Company.
Green, T. L. (2017). Community-based equity audits: A practical approach for educational leaders to support equitable community-school improvements. Educational Administration Quarterly, 53(1), 3–39.
Guthrie, R. V. (2004). Even the rat was white: A historical view of psychology. Pearson.
Hinnant-Crawford, B. N. (2016). Standardized testing. In K. Lomotey (Ed.), People of color in the United States: Contemporary issues in education, work, communities, health and immigration (pp. 341–355). ABC-CLIO.
Hinnant-Crawford, B. N. (2020a). Improvement science in education: A primer. Myers Education Press.
Hinnant-Crawford, B. N. (2020b, Spring). EDRS 602: Methods of Research (MSA ONLY) [syllabus]. Western Carolina University.
Hood, S. (2001). Nobody knows my name: In praise of African American evaluators who were responsive. New Directions for Evaluation, 92(3), 31–42.
hooks, b., & Jhally, S. (1997). Bell Hooks: Cultural criticism and transformation. Media Education Foundation.
Horsford, S. D., Scott, J. T., & Anderson, G. L. (2019). The politics of education policy in an era of inequality: Possibilities for democratic schooling. Routledge.
Hughes, C. A., Morris, J. R., Therrien, W. J., & Benson, S. K. (2017). Explicit instruction: Historical and contemporary contexts. Learning Disabilities Research & Practice, 32(3), 140–148.
Jack, A. (2019, June 13). On diversity: Access ain't inclusion [Video]. TEDxCambridge. https://www.tedxcambridge.com/talk/on-diversity-access-aint-inclusion/
Khalifa, M. (2018). Culturally responsive school leadership. Harvard Education Press.


Ladson-Billings, G. (2006). From the achievement gap to the education debt: Understanding achievement in US schools. Educational Researcher, 35(7), 3–12.
Ladson-Billings, G. (2007). Pushing past the achievement gap: An essay on the language of deficit. The Journal of Negro Education, 316–323.
Langley, G. J., Moen, R. D., Nolan, K. M., Nolan, T. W., Norman, C. L., & Provost, L. P. (2009). The improvement guide: A practical approach to enhancing organizational performance. John Wiley & Sons.
Lightfoot, D. (2004). "Some parents just don't care": Decoding the meanings of parental involvement in urban schools. Urban Education, 39(1), 91–107.
Macias, L. V. (2013). Choosing success: A paradigm for empowering first-generation college students. About Campus, 18(5), 17–21.
McNair, T. B., Bensimon, E. M., & Malcom-Piqueux, L. (2020). From equity talk to equity walk: Expanding practitioner knowledge for racial justice in higher education. John Wiley & Sons.
Means, D. R., & Pyne, K. B. (2017). Finding my way: Perceptions of institutional support and belonging in low-income, first-generation, first-year college students. Journal of College Student Development, 58(6), 907–924.
Merriam, S. B. (2001). Andragogy and self-directed learning: Pillars of adult learning theory. New Directions for Adult and Continuing Education, 89(1), 3–13.
Nieto, S. (2000). Affirming diversity: The sociopolitical context of multicultural education. Longman.
Sensoy, O., & DiAngelo, R. (2017). Is everyone really equal? An introduction to key concepts in social justice education. Teachers College Press.
Stanley, D. (2020). Addressing unseen suffering and reimagining possibility through community engagement: Lessons from the back of the bus. MOJA, 1(1), 51–60.
Stone, D., Heen, S., & Patton, B. (2010). Difficult conversations: How to discuss what matters most. Penguin.
Tillman, L. C. (2002). Culturally sensitive research approaches: An African-American perspective. Educational Researcher, 31(9), 3–12.
Valencia, R. R. (Ed.). (1997). The evolution of deficit thinking: Educational thought and practice (Vol. 19). Psychology Press.
Williams, D. A., Berger, J. B., & McClendon, S. A. (2005). Toward a model of inclusive excellence and change in postsecondary institutions. Association of American Colleges and Universities.

chapter three

Finding Problems, Asking Questions, and Implementing Solutions
Improvement Science and the EdD

JILL ALEXA PERRY

Carnegie Project on the Education Doctorate

DEBBY ZAMBO

Carnegie Project on the Education Doctorate

Abstract

Improvement science comprises a set of tools and approaches designed to facilitate innovation and implementation of new organizational practices (Langley et al., 2009). This methodology is being used in schools and educational organizations across our country to improve education for all children and students. For example, Baltimore City public school teachers employed improvement science and networked improvement communities to advance equitable outcomes for their students (i.e., increase African American, Latinx, and low-income students' high school graduation rates) (Bryk, 2017, 2018). The Building Teaching Effectiveness Network (BTEN) used rapid, small-scale testing to develop a feedback protocol aimed at reducing new teacher burnout, increasing feelings of confidence, and improving district systems and processes to support new teacher development (Myung & Martinez, 2013).

Public school teachers in New York City are encouraged to use improvement science to make progress on critical issues that stand in the way of their students' success (New York City Department of Education, 2018). These examples demonstrate the diversity of improvement science across K–12 educational contexts. Likewise, improvement science has proven useful at the tertiary level. For example, the Carnegie Foundation for the Advancement of Teaching's (CFAT) Statway and Quantway networks helped community college students successfully complete remedial mathematics courses in roughly half of the time (Strother & Sowers, 2014).

As improvement science spreads and proves useful in varied educational systems, we see it as a useful methodology to be taught in education doctorate (EdD) programs. We believe this because EdD students are practitioners who, while seeking their doctorates, remain in practice (Perry et al., 2020). EdD students have unique characteristics. They are typically mature, often have 5–25 years of professional experience, and as educational leaders are dedicated to, and responsible for, change and improvement (Perry, 2013; Perry et al., 2020; Willis et al., 2010). Given their nature and responsibilities, we believe the EdD is the terminal doctorate in education that should provide educational leaders with the knowledge, skills, and dispositions they need to achieve their goals and make the improvements expected of them.

Background

Years of working with schools of education in the Carnegie Project on the Education Doctorate (CPED) have shown us that change is not always quick or easy in EdD programs, even if faculty have been trained in doctor of philosophy (PhD) programs. Adopting improvement science into EdD programs is difficult because it is different from what many faculty learned. Because improvement science is a new methodology in the field of education, many faculty members do not understand it, nor have they used it. So when faculty are asked to teach in EdD programs, they, like most humans, fall back on what they know, which is likely the teaching of traditional research designs and methods. Yet we know that, frequently, these traditional methodologies do not support educational leaders in making actual improvements in their context (Perry, 2013).

An aim of the CPED consortium is to bring new and innovative knowledge and approaches to the preparation of education leaders. When Anthony Bryk, president of the Carnegie Foundation for the Advancement of Teaching, suggested to CPED members that improvement science might be a good fit for the practitioner dissertation in practice (CPED, 2010), we saw an opportunity to rethink research and inquiry in EdD programs. CPED is a consortium devoted to inspiring all schools of education to apply its framework (a new definition of the EdD, six guiding principles for program design, and six design concepts) to the preparation of educational leaders. CPED believes the professional doctorate in education should prepare educators to apply appropriate and specific practices, generate new knowledge, and steward the profession (CPED, 2010). Members seek to prepare EdD students to become well-equipped scholarly practitioners, or individuals who

• blend their practical wisdom with their professional skills and knowledge to name, frame, and solve problems of practice;
• use practical research and applied theories as tools for change because they understand the importance of equity and social justice;
• disseminate their work in multiple ways; and
• resolve problems of practice through collaborating with key stakeholders, including the university, the educational institution, the community, and individuals.

To achieve this aim, programs are cohesively designed to offer core knowledge, applied research and inquiry training, and content knowledge. EdD students are expected to apply this knowledge and training to their contexts and are tested on this in their dissertations. Because the education profession does not have licensure, EdD students complete a dissertation in practice (DiP), a scholarly endeavor that impacts a complex problem of practice (CPED, 2010), as their demonstration of competence.

We believe improvement science is a perfect methodology for the dissertation in practice because it offers a valuable set of tools and a collaborative mindset that educational leaders can use to improve their schools and organizations. However, we recognize that the teaching of improvement science in EdD programs is new for many faculty members. Therefore, we offer this chapter as a means to support faculty in becoming better equipped to teach improvement science. We also extend this chapter to educational leaders who are in EdD programs to help them better understand how to work as leaders of an improvement effort and use improvement science as a frame for their DiPs. We couch both of these goals in comparisons between EdD programs and PhD programs. Therefore, the aim of this chapter is to provide practical insight into the differences between (a) the problems PhD students focus on and the practical improvement problems school administrators in EdD programs work to solve; (b) the research questions PhD students ask and the inquiry questions school administrators in EdD programs ask; and (c) the actions PhD students take to answer their questions and the cycles of improvement school administrators in EdD programs perform. This chapter also provides activities and critical questions as learning tools.

Types of Problems

PhD students seek their degrees to establish a research career and the necessary publication record. To achieve this, they conduct independent research and write dissertations that build theory and/or contribute to their field (Plano Clark & Creswell, 2014). Worthwhile research problems for PhD students include the following:

• Gaps or voids in the existing literature
• Past findings that have not been replicated
• Extending past research or examining an issue more thoroughly
• Revealing silenced voices and inequities
• Discovering new practices for those that need to be changed because of advances in society and technology


The traditional research of PhD students aims to predict, explain, and/or control (Creswell, 2014). Bryk (2015, 2017, 2018) compares this type of educational research to clinical trials in medicine. The strength of traditional research lies in its emphasis on building relevant theory and using empirical rigor to draw inferences about intervention or program effects. In contrast, the weakness of traditional research in a field like medicine lies in the fact that 80% to 90% of daily medical practice is not anchored in such evidence because it lacks the specific, detailed information practitioners need to make improvements in real time (Institute of Medicine, 2012). Similarly, traditional education research lacks applicable specifics because it is conducted by an academic (a PhD student/graduate) who has little stake in improving the context that they study. PhD students often build interventions on theory (as opposed to practice) and use large effect sizes to determine if the intervention worked. Such work lacks the details as to how to make the intervention work for different subgroups across varied contexts. Practitioners who face problems every day need to know more than that an intervention can work if it is implemented with fidelity. They need to know how to actually make the intervention work reliably across diverse contexts and populations (Bryk, 2017, 2018). In contrast, EdD students and the dissertations they write have a practical focus that provides useful answers. As scholarly practitioners, EdD students focus their DiPs on a problem of practice (PoP)—or persistent, contextualized, and specific issue embedded in the work of a professional practitioner, the addressing of which has the potential to result in improved understanding, experience, and outcomes (CPED, 2010). PoPs are local (i.e., within the EdD student’s context), meaningful to them and their constituents, and universal (important to the field, experienced in other contexts, having a robust literature) (Hochbein & Perry, 2013; Perry et al., 2020). Bryk et al. (2015a) describe PoPs as high leverage problems, or those that (a) consume substantial resources, (b) have variable outcomes, and (c) if addressed would result in better efficiency and/ or effectiveness in schools (Bryk et al., 2015a). Along the same lines, Mintrop (2016) calls such problems actionable problems of practice, or problems that are


• urgent for the school/organization—arise out of a perceived need;
• actionable—exist within the student's sphere of influence;
• feasible—able to be addressed in a limited timeframe with available resources;
• strategic—linked to the goals of the larger organization;
• tied to a specific set of practices—narrowed to specific practice(s) that have a good chance of improvement; and
• forward looking—work on the problem will lead toward the next cycle of work (p. 30).

Further, PhD dissertations are generally focused on a specific content area in which the student will ultimately become an expert. For the EdD, students do not become content-area specialists. Rather, they become experts in applying improvement science to solve any problem. In this respect, improvement science is a leadership tool. Such distinctions allow your EdD program to support the leadership development of practitioners. As they use improvement science to address practical problems, they are creating an improvement agenda that uses collaborative tools to engage stakeholders in problem-solving.

Activity and Questions

1. Consider your EdD program and the differences described previously. Make a list of how your EdD students are taught and mentored to address the problems they face. Does your program need to change to support the development of scholarly practitioners?
2. How do the tools and processes of improvement science provide an opportunity for educational leaders to work on problems that matter to them?
3. If you are an EdD student, what problems in your practice would you like to solve? How would improvement science be a good methodology for you?


Uncovering Feasible Problems

The problems PhD students focus on are uncovered by varied means. The most prevalent is linking one's work to the established research agenda of one's major professor or connecting it to a study being conducted in one's department (e.g., working on a grant). Uncovering a problem can also come from the literature, especially if new theories or methods have been discovered within one's field (Jacobs, 2013). The decision to research a particular problem is typically made between the student and their mentor and committee members. PhD students rarely consult the participants in their study to gain their perspectives on the problem.

In contrast, educational leaders in EdD programs performing improvement science take a different approach. They work as insiders with other insiders on problems that are user-centered, specific, and meaningful to the stakeholders involved (Carnegie Foundation for the Advancement of Teaching, 2015). EdD students start the improvement process with their hunches about a problem and some preliminary guesses as to why the problem exists. Next, recognizing they alone do not have all the answers, they seek out understanding from stakeholders. That is, to truly understand the nature of a problem, educational leaders recognize they must have the perspectives of those who know or experience the problem. The tools they use to accomplish this include methods such as the following:

• Fishbones—uncover an individual's perspectives on the underlying causes of the problem
• Systems improvement maps—summarize a group's knowledge about the causes contributing to a problem
• Existing data—provide evidence and a baseline measure of the problem
• Observations—provide understanding through noticing or viewing
• Empathy interviews—document thoughts, emotions, and experiences of stakeholders about the problem


Literature is also important in helping EdD students uncover problems. In addition to improvement tools, EdD students investigate scholarly and professional literature to name and frame their problem and to rationalize why working on their problem can lead to an improvement. In this respect, literature is a tool for educational leaders. It serves to expand their own thinking about problems and learn what has been studied, implemented, and reported. However, this use of literature by EdD students is equivalent to, yet different from, a traditional literature review written and used by PhD students. Table 3.1 captures some of these differences.

Table 3.1. Comparisons of Literature Use

PhD Students
• Situate their work in a scholarly/historical context
• Understand existing claims about their problem
• Find and justify their research methodology
• Identify knowledge gaps
• Demonstrate their depth of understanding

EdD Students
• Build and share ideas with others facing the problem
• Develop a theory of improvement (driver diagram) that blends their own practical knowledge with knowledge from the field
• Understand what's been tried in other contexts as well as what is unknown about the problem
• Build arguments for their change ideas

EdD faculty must reframe the purpose of literature in EdD programs along these lines. To do this, the use of literature as a tool must be taught: how to consume literature critically, how to find and review it, and how to use it to build arguments about problems and solutions. Further, teaching in this way should encourage practitioners to continue to utilize literature beyond their DiP, applying it to any and all problems they face in the future. This will require that universities allow their alumni to have access to their libraries.


Activity and Questions

1. PhD students uncover problems differently than EdD students. Some of these differences were articulated in the preceding section. Can you think of any others?
2. Improvement science offers tools and encourages the use of data to uncover the true nature of a problem. If you are an EdD student, explain how, with whom, and when you might use each of these tools to uncover the true nature of your problem (Table 3.2). If you are a faculty member, consider when and how you will teach each of these tools in your courses (Table 3.3).
3. What other ways might there be to collect insight and understanding about a problem of practice?

Table 3.2. Determining how, with whom, and when to use improvement tools

Tool                      How, With Whom, and When
Fishbones
Systems maps
Existing data
Observations
Empathy interviews

If you are a faculty member, consider when and how you will teach each of these tools in your courses.

Table 3.3. Determining when and how to teach improvement tools

Tool                      When and How
Fishbones
Systems maps
Existing data
Observations
Empathy interviews


Articulating Questions From Problems and Literature

After a PhD student uncovers a feasible problem, they begin to articulate their research questions. Their questions ask about relationships between variables, the causes and effects various variables have on outcomes, comparisons between groups, and how things are at a point in time. PhD students ask questions and then make hypotheses to predict or explain what they believe the outcome of their study will be (Creswell, 2014). To articulate research questions and hypotheses, PhD students typically work with their mentors. They do not ask those affected by the problem to determine what they want or need to know (Bryk et al., 2015b).

Scholarly practitioners do things differently. Instead of formulating questions directly from what they believe is the problem, using their improver mindset, they develop questions and form hypotheses directly from the experiences of those most affected by the problem. To do this, they put together what they have gathered from employing improvement tools (e.g., fishbones, systems maps, existing data, interviews) and the literature and begin to map out a plan to turn ideas into action. This map comes in the form of a driver diagram, a cocreated visual tool that contains the following elements:

• Aim statement—a clear and specific articulated goal
• Primary drivers—places in the system that may influence the ability to achieve the aim
• Secondary drivers—system components hypothesized to activate each primary driver
• Change ideas—specific interventions to be tested

Driver diagrams are developed during the program as part of students' inquiry course and are grounded in the data gathered about the problem from enacting the improvement tools mentioned previously and reviewing the scholarly and professional literature. Once the driver diagram has been developed, the next step is to create an intervention or change idea, one that will maximize potential for improvement (New York City Department of Education, 2018).


Because most EdD programs are 3 to 4 years in duration, scholarly practitioners must think quickly and efficiently. They must consider their effort's feasibility (Can it be done?), resources (Do we have the time and energy to do this? Can we implement the change with what we have?), and pitfalls (Is this new or are we doing more of the same?).

Once the change idea is developed, it is time to develop the inquiry questions for the DiP (see right side of Figure 3.1). To create these questions, the scholarly practitioner would think along these lines: If we want to improve [aim] we must [primary driver] through/by/with [secondary driver] and one way to do this is [change idea—most reasoned idea to try] (New York City Department of Education, 2018).
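To make the template concrete, consider one hypothetical illustration (our own, not drawn from the NYC DOE handbook): If we want to improve third-grade reading proficiency [aim], we must strengthen daily core reading instruction [primary driver] through more frequent small-group instruction [secondary driver], and one way to do this is to add a 20-minute guided-reading block four days per week [change idea]. The inquiry questions for the DiP would then ask how, if, and why that guided-reading block moved the school toward its aim.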

Inquiry questions for a DiP focus on the most reasoned change to try. They ask how, if, and why the change idea achieved its aim. Figure 3.1 captures this process.

[Figure 3.1. How school administrators in EdD programs find inquiry questions: the group's working theory of improvement—a driver diagram of primary drivers, secondary drivers, change ideas, and an aim statement—plus the literature yields an improvement effort to try and, from it, the inquiry questions.]


Activity and Questions

1. This section explained some of the differences in how PhD and EdD students use literature. Can you list three more?
   1.
   2.
   3.
2. Figure 3.1 captures the process of moving from a driver diagram to inquiry questions. How does this process differ from how PhD students find their research questions?
3. If you are an EdD student, explain how you will uncover your research questions.
4. If you are a faculty member, explain how you will teach EdD students to uncover their inquiry questions using the tools of improvement science along with literature.

Moving From Questions to Answers

Once PhD students have their questions, they move into investigation mode. They enter a site they or their mentor has chosen, find participants to engage in their study, secure permissions, and test something that will later need to be implemented with fidelity or simply gather data. Once they are done, they leave the research site, analyze the data, and write their dissertations from these data (Herr & Anderson, 2005). From their findings, PhD students develop new theories or explain new interventions that must be implemented with fidelity (Bryk, 2018). They sometimes write up their findings and present them to their participants, but most often they leave the research site, write up their findings, and publish what they have learned in academic journals.

In contrast, EdD students are in, and will remain in, the professional practice contexts they seek to improve and study. EdD students have a different purpose and process for their work. Instead of investigating an unknown site and individuals, they implement a change effort in a place they are responsible for with those most


affected by the problem. EdD students understand that variation in performance is the core problem to address. They seek to uncover what works in their context, for whom, and under what set of conditions (Carnegie Foundation for the Advancement of Teaching, 2015). To accomplish this, they both lead and work collaboratively in cycles. So, the next question is how those cycles roll out.

Typical cycles in improvement work are called "PDSA cycles" (Bryk et al., 2015a; Langley et al., 2009). This acronym stands for plan (P), do (D), study (S), and act (A). To more clearly reflect the work of educational leaders in EdD programs as they lead improvement and collaborate with stakeholders, we, along with our colleague Robert Crow, have developed a new cycle that flows around inquiry questions and leadership (see Figure 3.2). Our cycle's acronym is SIAR, which represents strategize (S), implement (I), analyze (A), and reflect (R). Our original cycle was published in The Improvement Science Dissertation in Practice: A Guide for Faculty, Committee Members and their Students by Perry et al. (2020).

[Figure 3.2. Our improvement cycle: a framework for trial-and-effort methodology in which the four phases—strategize, implement, analyze, and reflect—revolve around the inquiry questions.]


Strategize

The success of any change effort depends on how well it is conceived and executed. Leadership is key. Leaders understand the difference between a plan and a strategy. A plan entails a set of steps one needs to follow to accomplish a goal—the what, who, where, and when. A plan says, "Here are the steps" and is a good thing to have (Konetes, 2020). But for school administrators who are working toward improvement, a plan is not enough. They need a strategy. A strategy is bigger than a plan because it gets to the question of why and goes beyond the end result (what). A strategy seeks to anticipate unforeseen roadblocks that could hamper the implementation plan and the consequences of the change actions. With a strategy, variation in performance becomes the focal point of the change effort.

To strategize, educational leaders work out every detail of their improvement plan before implementation. They consider their inquiry questions, the change effort, and the personal and financial costs to the organization. They ask, Why is this the most reasoned change idea to try? and Why is this idea different, more ethical, more resourceful?

Implement

Implementation tends to be the most exciting aspect of the improvement process for educational leaders because it is the time when they get to see change ideas in action. Implementation is when the change effort is rolled out, observed, and documented without bias or preconceived expectations. Implementing a change effort with an open mind requires that the educational leader ask questions like the following:

• Should those closest to the problem be part of the implementation team?
• What has the implementation team done to ensure data collection is equitable and just?
• How will issues of power and bias be addressed during implementation?


• How will differing values, attitudes, and opinions be gathered?
• How will all voices be heard?

Implementation without bias is not always easy, however. There is a range of internal (personal) and external (systemic) factors that come into play. Key to eliminating these factors is strong leadership (Lyon et al., 2018). During implementation, educational leaders must listen to all voices and document accurately.

Analyze

After the change effort has been implemented and data have been collected, the data need to be analyzed, displayed, and interpreted. In improvement, analysis involves examining all the data to determine if the improvement effort worked (or did not work)—for whom, how, and why (Bryk et al., 2015a). To analyze the data, educational leaders

• look for and recognize patterns;
• suspend judgment—interpret data objectively, fairly, and without bias;
• draw conclusions with caution;
• ask thoughtful questions about findings;
• identify various conclusions that are possible and decide which (if any) are sufficiently supported; and
• weigh strengths and limitations of all options.

To perform this analysis, educational leaders work mindfully and in collaboration with the implementation team.

Reflect

The final part of our model is reflection. For many educational leaders, reflection is a natural part of what they do every day. Reflection is not a detached, disconnected action acquired suddenly or used occasionally. Rather, reflection is an inherent skill that comes from


years of being a reflective leader and is key to making informed and logical decisions. Reflective leaders are not "judgmental," but they do "use their judgment." After they lead an improvement effort, they reflect on the outcomes by asking, What steps need to be taken next? They also reflect on themselves as leaders by asking more questions:

1. What role did I play in the effort?
2. What have I learned about myself as a leader?
   • Was I committed and accountable?
   • Did I lead fairly and ethically?
   • How will I use what I learned about myself in the future?
   • What were my successes and challenges?
3. How do I feel about the results of the effort? Did they lead to an improvement for everyone? If not, who was left out?

Reflection helps educational leaders consider how the improvement effort contributed to their context, their personal and professional goals, and their profession (Göker & Bozkuş, 2017).

Activity and Questions

1. This section explained some of the differences between how EdD students conduct their dissertation inquiry and how PhD students do their work. How has your EdD program made such distinctions?
2. The SIAR cycle is different from a typical PDSA cycle because it was developed for students in EdD programs. How do the new descriptors better capture the work of educational leaders seeking EdDs?

Conclusion

Teaching Improvement Science in Educational Leadership: A Pedagogical Guide is aimed at being an instructional resource for educators interested in teaching and using improvement science to bring about substantial change in the educational environments


in which they work. As part of this book, this chapter is dedicated to helping educational leaders in EdD programs better understand how to lead an improvement effort and use improvement science as a frame for their DiPs. In this chapter we examined comparisons between EdD and PhD programs and preparation. We have also focused on the difference between developing EdD students into scholarly practitioners and developing PhD students into tenure-track faculty. To accomplish this, we have examined

• the types of problems PhD students focus on and the practical improvement problems educational leaders in EdD programs work to solve;
• the research questions PhD students ask and the inquiry questions educational leaders in EdD programs ask; and
• the actions PhD students take to answer their questions and the SIAR cycles of improvement educational leaders in EdD programs perform.

In this chapter we have stretched the thinking of improvement science for doctoral-level work and, in doing so, helped distinguish the difference between EdDs and PhDs.

References

Bryk, A. S. (2015). Accelerating how we learn to improve. Educational Researcher, 44(9), 467–477.
Bryk, A. S. (2017, March 27). Redressing inequities: An aspiration in search of a method [Keynote address]. Carnegie Foundation Summit on Improvement in Education, San Francisco, CA.
Bryk, A. S. (2018, April 3). Advancing quality in continuous improvement [Speech]. Carnegie Foundation Summit on Improvement in Education, San Francisco, CA.
Bryk, A. S., Gomez, L. M., Grunow, A., & LeMahieu, P. G. (2015a). Learning to improve: How America's schools can get better at getting better. Harvard Education Press.
Bryk, A. S., Gomez, L. M., Grunow, A., & LeMahieu, P. G. (2015b). Breaking the cycle of failed school reforms: Using Networked Improvement Communities to learn fast and implement well. Harvard Education Letter, 31(1), 1–3.
Carnegie Foundation for the Advancement of Teaching. (2015). Our ideas. https://www.carnegiefoundation.org/our-ideas/


Carnegie Project on the Education Doctorate (CPED). (2010). Design concept definitions. http://cpedinitiative.org
Creswell, J. W. (2014). Research design: Qualitative, quantitative, and mixed methods approaches. Sage.
Göker, S. D., & Bozkuş, K. (2017). Reflective leadership: Learning to manage and lead human organizations. https://www.intechopen.com/books/contemporary-leadership-challenges/reflective-leadership-learning-to-manage-and-lead-human-organizations
Herr, K., & Anderson, G. L. (2005). The continuum of positionality in action research. In K. Herr & G. L. Anderson (Eds.), The action research dissertation: A guide for students and faculty (pp. 29–48). Sage. doi:10.4135/9781452226644
Hochbein, C., & Perry, J. A. (2013). The role of research in the professional doctorate. Planning and Changing Journal, 44(3/4), 181–194.
Institute of Medicine, Committee on Quality of Health Care in America. (2012). Best care at lower cost: The path to continuously learning health care in America. National Academies Press.
Jacobs, R. L. (2013). Developing a dissertation research problem: A guide for doctoral students in human resource development and adult education. New Horizons in Adult Education & Human Resource Development, 25(3), 103–117.
Konetes, G. (2020). The difference between a plan and a strategy. https://www.infinityconcepts.com/2011/09/the-difference-between-a-plan-and-a-strategy/
Langley, G. L., Moen, R., Nolan, K. M., Nolan, T. W., Norman, C. L., & Provost, L. P. (2009). The improvement guide: A practical approach to enhancing organizational performance (2nd ed.). Jossey-Bass.
Lyon, A. R., Cook, C. R., Brown, E. C., Locke, J., Davis, C., Ehrhart, M., & Aarons, G. A. (2018). Assessing organizational implementation context in the education sector: Confirmatory factor analysis of measures of implementation leadership, climate, and citizenship. Implementation Science, 13(1), 5. doi:10.1186/s13012-017-0705-6
Mintrop, R. (2016). Design-based school improvement: A practical guide for education leaders. Harvard Education Press.
Myung, J., & Martinez, K. (2013). Strategies for enhancing the impact of post-observation feedback for teachers. Carnegie Foundation for the Advancement of Teaching. http://cdn.carnegiefoundation.org/wp-content/uploads/2013/07/BRIEF_Feedback-for-Teachers.pdf


New York City Department of Education (NYCDOE). (2018). Improvement science handbook. https://www.weteachnyc.org/media2016/filer_public/4b/40/4b4027b3-c0a6-4129-a050-7c41120a38d7/nycdoe_improvement_science_handbook_2018_online.pdf
Perry, J. A. (2013). Developing stewards of practice. In J. A. Perry & D. L. Carlson (Eds.), In their own words: A journey to the stewardship of the practice in education. Information Age.
Perry, J. A., Zambo, D., & Crow, R. (2020). The improvement science dissertation in practice: A guide for faculty, committee members, and their students. Myers Education Press.
Plano Clark, V. L., & Creswell, J. W. (2014). Understanding research: A consumer's guide. Pearson.
Strother, S., & Sowers, N. (2014). Community college pathways: A descriptive report of summative assessments and student learning. Carnegie Foundation for the Advancement of Teaching. http://www.carnegiefoundation.org/resources/publications/community-college-pathways-summative-assessments-student-learning/
Willis, J. W., Valenti, R., & Inman, D. (2010). Completing a professional practice dissertation: A guide for doctoral students and faculty. Information Age.

chapter four

Teaching the Design of Plan-Do-Study-Act Cycles Using Improvement Cases

CHAD R. LOCHMILLER

Indiana University Bloomington School of Education

Abstract

This chapter discusses an approach to teaching novice users of improvement science how to design plan-do-study-act (PDSA) cycles. Specifically, it describes the use of instructive teaching cases and a problem-based learning approach. The chapter highlights the important concepts that must be foregrounded when teaching the design of a PDSA. These concepts include systems, leverage, change ideas, and the broader acceptance that failure is a common part of an improvement orientation. Further, the chapter discusses the conditions necessary for case-based instruction. To assist readers in developing and using their own teaching cases, the chapter includes an illustrative teaching case which tracks the initial development of a PDSA as well as a lesson plan. Readers will benefit from the chapter's incisive focus on PDSA design.

Background

I use this chapter to argue that teaching novice users of improvement science how to design effective PDSA cycles is best facilitated through the use of improvement cases and a problem-based learning


orientation. An improvement case presents a realistic account of an improvement initiative drawn from education or similar fields. Improvement cases enable novice users of improvement science to learn to anticipate key decision points, recognize effective improvement designs, and become better prepared for the introduction of disciplined inquiry. Indeed, improvement cases motivate learners to reflect on the entire improvement process. This includes the identification of the improvement system, definition of a problem of practice, articulation of a change idea, and the design of a PDSA. As such, teaching with improvement cases can be used to introduce novice users to any aspect of the improvement science process and thus can be tailored to the users' level of expertise. In this chapter, I describe my approach to using improvement cases to facilitate students' understanding about how to design an effective PDSA cycle. To illustrate this approach, I provide an illustrative case that describes the work of a team of second-grade teachers.

Conceptually, teaching improvement science using cases is rooted in the broader discipline of problem-based learning (PBL), wherein individuals acquire specific skills that allow them to address similar problems in the real world. This approach has been recommended for use in educational leadership preparation for some time (Bridges, 1992; Bridges & Hallinger, 1995, 1996). Further, it has been widely used in medical education (Barrows, 1986, 1994) as well as the fields of business administration (Merchand, 1995), law (Boud & Feletti, 1998; Kurtz et al., 1990), nursing (Higgins, 1994), and social work (Bolzan & Heycox, 1998). PBL has been a particularly useful instructional technique to help both undergraduate and graduate students conceptualize specific leadership skills as well as learn to adopt specific orientations.

PBL is a unique and highly appropriate pedagogical approach for teaching improvement science. Bridges (1992) asserts that PBL has five characteristics that differentiate it from other pedagogical orientations. First, learning proceeds from a clearly defined problem that lacks a well-specified response. Second, the problem is relevant to the students' work as professionals, or in this case as future users of improvement science. Third, solving the problem fuels the


acquisition of knowledge. Fourth, students are largely responsible for their own learning. Finally, the problem is situated within small group discussions versus whole class lectures. Instructors support this pedagogical approach by serving as facilitators who offer expert guidance at important points (Bridges, 1992).

Compared with traditional lecture-based instruction, PBL represents a disruption in many classroom routines by repositioning the learner, rather than the instructor, as the primary actor in the learning process. Thus, to facilitate learning using this model, instructors must make specific assumptions about the orientation of the learner and their willingness to acquire new knowledge. According to Bridges and Hallinger (1995), these assumptions are threefold. First, instructors using PBL assume that individuals are more likely to acquire new knowledge when their prior knowledge is activated and new learning is integrated into their existing understanding. Second, instructors afford learners multiple opportunities to apply new learning in realistic and/or relevant work contexts. Finally, instructors using PBL assume that students will encode new knowledge most effectively when it is presented in a context similar to its eventual use. More succinctly, PBL offers a richer conceptualization of the "learn-by-doing" approach commonly advocated by instructors who teach improvement science. Indeed, it offers instructors a conceptually developed and empirically grounded technique that other forms of instruction do not.

I use PBL in my courses focused on improvement science as it provides students with opportunities to learn the improvement process and leadership skills simultaneously. This approach also allows me to connect key concepts to the practices used by students in their future roles as district or building administrators. Research defines the conditions necessary for an effectively structured PBL classroom (Bridges & Hallinger, 1995). In my practice, I find that the vehicle best suited to activating prior knowledge and contextualizing learning is an instructive teaching case. Cases have been widely used in educational leadership, and there are numerous resources available (see, e.g., the Journal of Cases in Educational Leadership). Reviews of research on case-based instruction suggest


that teaching with cases is a more effective instructional method than traditional lectures; cases promote critical thinking and decision-making leadership skills (Kim et al., 2006). Cases are appropriate for small group learning and need to provide authentic learning opportunities that simulate key leadership decisions. Throughout the remainder of this chapter, I discuss the concepts that I typically introduce students to as part of my instruction. I then discuss the pedagogical considerations that guide my practice. To support readers in using and developing their own cases, I offer an instructive teaching case drawn from my own practice as an appendix to this chapter.

Teaching a Practical Understanding of Improvement Science: An Overview

As a leadership practice, improvement science rests on the identification of improvement systems and subsequent specification of meaningful change ideas. These ideas are rigorously tested in a PDSA cycle and eventually scaled based on the results achieved. According to Bryk et al. (2015), a system is defined as an "organization characterized by a set of interactions among the people who work there, the tools and materials they have at their disposal, and the processes through which these people and resources join together to accomplish its goals" (p. 198). This does not mean that an organization needs to be a school or school district. Rather, organizations might include professional learning communities (DuFour & Eaker, 2009), grade-level teams in elementary schools, or academic departments found in traditionally configured high schools. Research from other fields suggests that these smaller units are appropriate sites for improvement activities (Meltzer et al., 2010). In my courses, I situate cases in settings that are familiar to the students (e.g., classrooms, grade-level teams, school district central offices, etc.). This setting enables users to envision how improvement science might "look" when introduced in their setting.

In order for students to be successful users of improvement science, instructors must foreground particular concepts. I recommend


that this be done through appropriately selected readings and lively class discussions that seek to build knowledge. In addition to texts focused on introducing improvement science (Bryk et al., 2015), I also include readings that introduce the notion of a system (Meadows, 2008; Senge, 2006), the concept of leverage and the idea of a leverage point within a system (Anderson & Johnson, 1997; Senge, 2006), and how to design change ideas (Bryk et al., 2015; Langley et al., 2009). During my courses, I also encourage instructors to spend time discussing the mental models that students have related to failure in educational improvement as well as the incremental notion of educational reform (Tyack & Cuban, 1995). Below, I briefly discuss each of these concepts and offer examples about how I introduce them to students. These concepts are particularly important when learning to design a PDSA cycle.

An Understanding of Organizational and Work Systems

If one adopts Bryk's definition of a system, a system is simply an organization that includes people, processes, and interactions. Theorists have long posited that systems exist in a variety of environmental, economic, organizational, and work structures (Meadows, 2008; Senge, 2006). What makes a system unique is its tendency to be interdependent, complex, and dynamic (Anderson & Johnson, 1994; Meadows, 2008). As Bryk et al. (2015) note, "Adopting a systems perspective makes visible many of the hidden complexities actually operating in an organization that might be important targets for change" (p. 14). The interdependence of the system is often characterized by a tacit set of causal relationships. These relationships contribute to the outcomes that the system produces even when they are not visible. As such, understanding how these relationships contribute to specific outcomes is one of the first steps toward appropriately diagnosing the existence of a problem.

Many students enter my classes with some notion of a system, but their understanding tends to be grounded in organizational terms. They assume, for example, that their school district is a system or that a school is a system. These are organizational systems


and thus are relatively easy to define. Rarely, however, do students fully comprehend that a system can also include a work process or specific practice. Indeed, these work systems are often harder for students to identify and yet present the greatest opportunity for many of them to intervene. This is one of the reasons that process mapping is often used in improvement science. Thus, it is important to help students begin understanding the different types of systems where improvement science can be introduced.

I typically emphasize the difference between an organizational system and a work system. For example, I often begin by asking the students to assume that they are an educator who wishes to improve something in his or her school. We begin by mapping all of the various components of the school so that students see their school as a system within which a variety of changes might be introduced. I often encourage students to provide other examples of organizational systems that are drawn from their own professional practice. Notably, they often point to examples that could be easily located on an organizational chart (e.g., human resources, facilities, transportation, etc.). I use the generation of these examples to illustrate the breadth of the organizational systems that exist. I also use these examples to help students see how situated in organizational terms their thinking is and thus how important it is that they begin to recognize other types of systems where improvement activities might be located.

Next, I narrow the students' focus to a specific classroom. In this example, I help students understand that a system can be as small as a classroom and is constituted both by the physical space (e.g., doors, walls, windows, materials, furniture) and by the relationships or interactions within this space. Depending on how an improvement problem within this system is framed, it could mean that teachers are seeking to improve their instruction, the quality of students' engagement with their instruction, the quality of the students' engagement with their peers, or the quality of the students' understanding. The framing of the problem within this system dictates how the educator might intervene. The classroom thus represents a smaller organizational system that an educator might


address. I have also encouraged students to consider professional learning communities, grade-level teams, and academic departments when considering small organizational systems.

A particularly important way for students to begin thinking about systems is to identify various work systems that they are direct participants in. For example, it is possible to think of systems as a set of discrete practices, procedures, or work processes. To illustrate, I will often use the example of a walk-through or learning walk protocol used in school districts. These protocols guide school administrators during their observation of classroom instruction. This process might involve a set of steps or decision points. As one example, Downey and colleagues (2004) suggest that this process might be broken down into five decision points that broadly relate to the students' orientation to the work, curricular decisions, instructional decisions, learning outcomes, and safety issues. Administrators then assess teachers on the basis of their performance in each of these areas and engage with teachers in a reflective conversation about their performance on completion. The observational process represents a work system that could allow educators to introduce meaningful changes and thereby improve the performance of the system as a result. Depending on how a problem is defined, it could be possible to intervene in the system by redefining the walk-through process itself or altering the form or discussion protocol used during the walk-through.

I find that this shift often mystifies students. They recognize work systems and routinely participate in them, but when asked to break apart these systems to understand how they work or why they fail, their responses tend to be somewhat dismissive. Indeed, a challenge for individuals learning about improvement science is to become adept at identifying systems within their own organization as well as particular processes within which they might intervene.

Ability to Identify Leverage and Leverage Points

Novice users of improvement science must also become familiar with the concept of leverage and leverage points. Leverage simply


describes the opportunity for change within a defined system (Senge, 2006). A leverage point, on the other hand, is simply a place in the system with the most potential for significant impact (Anderson & Johnson, 1997; Senge, 2006). I have found that the concept of leverage and a specific discussion of leverage points is not widely covered in leading texts on improvement science despite their extensive discussion of systems (Bryk et al., 2015). Yet, from my perspective, understanding how leverage influences a system and how a leverage point induces systemic change is of great importance. This concept helps individuals seeking to define a change idea for introduction in a PDSA cycle.

In my instruction, I often use practical examples to help students learn how to determine where a leverage point exists within a system or process. For example, I routinely ask students to map the process of changing a lightbulb or making a piece of toast. I then ask them to locate the step in the process where they can either accelerate the process (i.e., change the lightbulb more quickly or make the toast more efficiently) or improve the outcome (i.e., restore light to a dark room or produce a better piece of toast).¹ The point at which a process can be made more effective is commonly referred to as the leverage point within systems thinking.

To reinforce and expand their learning, I often introduce students to process mapping at this point. Process mapping is simply the preparation of a workflow diagram with the goal of gaining a clearer understanding of how a process and its parallel processes work. I then relate this learning to their emerging understanding of leverage point, work system, or organizational system. In Figure 4.1, I provide an example of a process map related to changing a lightbulb. The figure shows the entire process from start (i.e., light goes out) to finish (i.e., light comes back on). In the figure, I highlight the places where leverage might be found within the process and thus where a change idea might be directed as a result.

¹ Both activities are widely used when introducing systems thinking. Readers of this volume can find these activities online. In my courses, I often ask students to watch a TED Talk by Tom Wujec entitled Got a wicked problem? First tell me how you make toast. https://www.youtube.com/watch?v=QJmI2KZBCWw

[Figure 4.1. Illustrative process map and leverage point used with the changing a lightbulb activity. The map runs: The room goes dark → They realize the bulb burned out (Problem Identified) → I change the lightbulb (Change Idea—the Leverage Point) → The light is back on (Aim or Goal).]

Capacity to Define Change Ideas

After defining a system and locating a leverage point by mapping a specific process, I then engage students in generating specific change ideas. These ideas are reflected on their driver diagram and are part of their larger theory of improvement. According to Bryk and colleagues (2015), a change idea is "an alteration to a system or process that is to be tested through a PDSA cycle to examine its efficacy" (p. 199). This definition leads many to conceptualize change ideas in simple terms. They attribute any action that is different from what exists before as a possible change idea or inevitably associate any change with the basis for a PDSA. Yet, as Langley and colleagues (2009) note, improvement science depends on the identification of changes which are both specific and identifiable—not broad or vague (p. 6). This specificity is necessary so that changes can be tested and measured in a PDSA.

As Langley and colleagues (2009) note, many novice users of improvement science identify changes that simply respond to problems, create more of the same, or wait for the perfect change. These approaches are common but misguided, as they miss the potentially generative aspect that comes from improvement science. With my students, I often use the lightbulb or toast-making example to help them envision possible


change ideas. Their ideas often range from adjusting where they store lightbulbs to the temperature at which they cook their piece of toast. Indeed, in my own practice, I find that novice users of improvement science often become so fixated on adjusting existing practices that their change ideas often resemble what currently exists.

To address this concern, it can be helpful to consider where plausible change ideas begin. To paraphrase Langley and colleagues (2009), change ideas can emerge through a systematic investigation of an existing system or as an entirely new practice that fundamentally differs from what existed before. In education, we have tended to rely heavily on the latter of these approaches by adopting new practices without fundamentally considering their efficacy (Bryk et al., 2015). This has, in the words of Tyack and Cuban (1995), fueled the notion that education improves through various forms of "tinkering" rather than through the systematic identification and testing of viable changes in a local context.

I reinforce the concept of tinkering by returning to the lightbulb example to illustrate how we tend to "tinker" with changes rather than introducing them systematically in a PDSA. In the case of changing a lightbulb, I note that our response to the loss of light in a room prompts us to take action quickly to restore light. These actions might include flicking the light switch on and off, jiggling the bulb to determine whether it is loose, or rushing to the circuit box to reset a breaker. This is motivated by our desire to keep doing what we were doing before. As such, this represents our common response to many educational problems, particularly those related to students. When data suggest that students have deviated from an anticipated learning trajectory, we feel compelled to take action to restore the trajectory as quickly as possible. We do not, however, introduce our changes systematically nor consider whether the change we are making is, in fact, going to exert the greatest leverage. We simply try to find the circuit breaker that will restore the trajectory and allow us to resume our normal work. Novice users of improvement science must understand how this tendency has limited our efforts to find effective solutions and practices. Our success


depends on how users formulate their understanding of the problem they are trying to solve. Within the context of improvement science, change ideas emerge in relation to a specific understanding of a problem and as a product of one's understanding of the broader system wherein it resides. Thus, the formulation of the problem, its breadth or complexity, and the degree to which the problem is systemic in nature all contribute to the possibility that a change can be successfully scaled. As change ideas are only scaled once they show evidence of effectiveness in various contexts, it is therefore paramount that individuals using improvement science identify changes that have some viability beyond the immediate context. In other words, users should consider the grain size of the problem being addressed and the change idea selected. Teaching this skill to an inexperienced user of improvement science is difficult as they often do not consider how a change will be adopted beyond their immediate context. As Langley and colleagues (2009) note, "It is not enough simply to show in a test that a change is an improvement. The change must be fully integrated into the system. This takes some planning, and usually some additional learning" (p. 8). Thus, conditions for implementation must be considered well in advance of the adoption of a particular change idea.

With novice users of improvement science, it is important to help them understand how the PDSA leads to the introduction of a change idea as well as how the measures derived through the PDSA enable them to learn about the effectiveness of the change relative to a specific aim. One of the major impetuses of measurement in a PDSA is to encourage users to consistently reflect on their practice and identify opportunities for further improvement in the process. Thus, it is not sufficient to simply say that light has returned after correctly inserting the new lightbulb or resetting the tripped circuit breaker. Users must think about the process itself and note places where the changes enable them to approach their goal more efficiently, effectively, or with less variation. For my students, I often ask them to consider what they would do if they found that the most efficient way to restore light was to follow the same steps every time (i.e., to create a checklist or protocol for changing the lightbulb). I


focus specifically on how they would test this protocol in a variety of different settings as well as implement the protocol across these settings with some degree of integrity. As students begin to consider implementation of change ideas, they realize that improvement science requires much more than running successful PDSAs. In practice, success requires considering the capacity of users within the organization and the structures that potentially impede the implementation of new practices, as well as ongoing efforts to gauge the effectiveness of changes once they have been adopted. All of these conditions influence how well a change will be accepted, adopted, and implemented.

Accepting the Possibility of Failure

Finally, novice users of improvement science must learn to accept the inherent risks that come with the introduction of a change idea. Failure is always possible and no change idea is perfect. Indeed, in my practice, I often hear practitioners ask, What if this does not work? or What happens if we fail? To be clear, there are no definitive answers to these questions. Each improvement cycle is unique and includes the possibility of failure regardless of the amount of careful planning.

My general guidance to novice users of improvement science, and that of other improvement scholars (Langley et al., 2009), is that a change idea must proceed through three iterations before it can be deemed a success or failure. Three tests allow individuals to test their initial idea, refine the initial idea based on data, and test the revised idea under slightly different conditions to ensure that the change is, in fact, a sustainable improvement. When users adopt changes after only one test, they risk accepting a change that worked in a specific context but that cannot be replicated. This, I assert, is a threat to implementation that many novice users of improvement science do not comprehend. Indeed, based on research on implementation (Fixsen et al., 2005; Metz & Bartley, 2012), this move risks both the quality and fidelity of the implementation.


Using Cases to Facilitate Learning

As stated at the beginning of the chapter, improvement cases presented using problem-based learning enable novice users of improvement science to become more comfortable with the concepts as well as the inherent risks that improvement efforts introduce. In my own teaching, I leverage opportunities to teach using PBL with teaching cases developed from my work with educators in public school settings or obtained from scholarly literature. Effective teaching cases are relevant to the reader, realistic in their context, engaging in their content, challenging to solve, and instructional so that they both build on and add to the student's knowledge (Kim et al., 2006). These cases can illuminate different aspects of the improvement science process ranging from the identification of the improvement system to the analysis of outcomes associated with a simple PDSA. Typically, I structure the cases and activities associated with them to enable novice users of improvement science to learn progressively more sophisticated skills. At times, I invite students to consider a case multiple times to unpack different improvement concepts. Thus, the structure of each case differs depending on the particular learning objective, as does the focus (e.g., organizational, team, work process, etc.).

To guide readers of this volume in using and developing teaching cases, I have included an illustrative case example as an appendix to this chapter. This case helps students understand the relationship between the specification of a problem and the development of an effective PDSA. This case is situated in an elementary school that enrolls students in kindergarten through sixth grade. The school enrolls a large proportion of students who are racial and ethnic minorities and serves a large number of students who are from low-income families, as measured by free or reduced-price lunch eligibility. Given these demographics, state accountability measures have pointed to persistent concerns about student academic performance, especially in reading and mathematics. I find this case particularly useful in helping students understand how improvement cycles quickly become misaligned when users have failed to specify


the problem found within a particular system. At base, the case introduces readers to the importance of situating a problem within a system and defining a problem so that successive PDSA cycles can be conducted.

Reflections on the Case

Students who have used this case often find that it demonstrates how a small group of educators can use improvement science to identify and test practices related to a locally derived problem of practice. Indeed, students often marvel at how naturally PDSA cycles fit within a classroom teacher's practice. Yet the case demonstrates many of the common errors that novice users of improvement science make when designing their first PDSA. Recognizing these errors in advance helps students overcome some of the initial challenges associated with designing an effective PDSA. Notably, it reveals how quickly users of improvement science can find themselves stuck in the inquiry process without a clear pathway toward scaling their ideas. This is particularly true when users have inappropriately defined the system, incorrectly specified the problem of practice, or selected change ideas that are not likely to contribute to improvement. Indeed, this case demonstrates well how an error at any one of these steps can quickly lead users into a failed improvement effort. Further, the case demonstrates how poorly conceived change ideas and misaligned measures can prompt users of improvement science to reach inaccurate conclusions.

I have found that the case also prompts discussion of several other factors that are beneficial to novice users of improvement science. First, the case highlights the importance of using research and other types of professional guidance when constructing change ideas. In this case, teachers drew exclusively from their own limited knowledge of classroom instruction and did little research to understand what alternative reading practices might be used. Studying successful reading practices with high-needs populations might have led them to identify different practices that might have been more impactful. Second, the teachers did not consider how


their own instructional practice, which constituted its own system, contributed to the students' performance. For example, the teachers might have increased the frequency of small groups provided each week as a possible change idea. Alternatively, they might have approached their lowest-performing students with a different set of practices. Either of these efforts might have produced more relevant data to guide further transformative adjustments in the teachers' instruction. Third, the effect of the anchor chart could not be measured directly. The anchor charts functioned as part of the teacher's overall instructional approach, but their effect on student learning was nested within other practices. Absent an intentional point of comparison (e.g., a control group), it was nearly impossible to discern whether the change was actually contributing to the gains. Fourth, the formative assessment was not the same from week to week given the changes in content covered. Thus, it was not truly measuring the effect of the change idea but rather students' understanding of the new material. Related to this, the repeated administration of the Nonsense Word Fluency Assessment is not recommended in the technical guidance. Finally, the case hinged on the adoption of change ideas that did not specifically address the problems that the teachers were actually trying to solve. Indeed, teachers did not fully specify what problem they were trying to solve. Rather, they began making changes without a clear plan. In effect, they adopted the first change that came to mind and hoped that it would yield a significant benefit.

Pedagogical Planning

Cases like the example provided offer many opportunities for graduate students and professionals engaged as learners of improvement science. Notably, cases of this kind allow instructors to provide their students with opportunities to learn how to specify a system, identify a leverage point, formulate a problem of practice, design iterative PDSA cycles, identify appropriate measures, and analyze outcomes. In my practice, I use cases like this to provide students with opportunities to unpack various aspects of the improvement


process as well as to practice using improvement tools. Indeed, I have found this case particularly helpful in demonstrating how to improve the design of a PDSA as well as to think through the assignment of practical measures.

Table 4.1 provides an overview of a prototypical lesson drawn from my introductory class on improvement science. As noted, the purpose of this lesson is to help students understand how to design an effective PDSA cycle. Prior to introducing this case, I often foreground the kinds of concepts presented previously. To begin this activity, I ask each student to read the case independently and to take particular note of the design issues that arise. Next, I assign the students to small groups and ask them to develop a fishbone diagram that articulates the causes of the failed design. To guide their formulation of this diagram, I follow Ahlstrom's (2014) advice and present this as a why question that asks, Why did the improvement cycle not produce the results we anticipated? This process provides the students with an opportunity to practice one of the core techniques of a root cause analysis (e.g., a fishbone diagram) as well as to become familiar with the causes of failure in the case situation. Next, I ask students to discuss each of the causes and assign them to categories that broadly articulate the reasons that the second-grade team's PDSA encountered difficulties. I stress that students must come to consensus about the composition of the categories. Finally, I ask each team to interrogate one of the causes further and to develop a set of proposals that could yield a different result.

Importantly, when teaching improvement science skills, I often provide students with opportunities to practice using various improvement tools. In this case, I encourage students to map their understanding of the errors presented in the case using a fishbone diagram. This practice encourages students to identify specific causes that contributed to the failure of the PDSA. Further, it gives students opportunities to use improvement tools in novel ways. I find that this strengthens the students' understanding of the improvement science process as well as shows how it can be situated within their broader leadership practice.
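To illustrate the kind of thinking this activity invites, consider one hypothetical chain of whys (our own construction, not drawn from the case itself): Why did the improvement cycle not produce the results we anticipated? Because the weekly measure changed along with the content. Why? Because the team had no common formative assessment. Why? Because they never specified precisely what they were trying to improve. Why? Because they selected a change idea before defining the problem. Chains like this help students see how design errors compound across a PDSA.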


Table 4.1. Illustrative Lesson Plan Using Case

Learning Outcomes
By the end of this lesson, students will:
• Recognize and identify the boundaries of an organizational system or work system
• Understand how to define a leverage point within an organizational system or work system
• Understand common issues in designing a plan-do-study-act (PDSA) cycle

Materials Needed
To complete this activity, participants will require the following materials:
• A copy of the case narrative
• A package of Post-It notes or 4×6 index cards
• Markers or pens

Participant Instructions
To complete this activity, participants will complete the following steps:
1. Distribute the case text and a copy of this instruction sheet. Ask students to individually read the text of the case. (5 minutes)
2. Before proceeding to small groups, clarify any points of confusion about the case and address any questions about the case. (2 minutes)
3. Assign participants to groups of three or four, depending on class enrollment. Ask each group to designate a timekeeper and nominate a spokesperson who will share the group's understanding. (1 minute)
4. Each group will develop a fishbone diagram to analyze the sources of failure in the PDSA. The instructor presents a prompt for the fishbone that asks: Why did the improvement cycle not produce the results we anticipated? Encourage students to generate as many explanations as possible in the limited time provided. Where necessary, encourage students to use the 5 Whys process to produce additional causes. Students use Post-It notes or 4×6 index cards to record the reasons. If using this protocol in an online environment, encourage students to use IdeaFlip or another online application that facilitates brainstorming. (10 minutes)
5. Ask students to discuss each of the causes they generated and then group them so that related causes are together. Groupings should be identified through discussion, and the final composition of each group should be based on consensus. Once each group is finalized, students should assign a categorical description to the group. The description should broadly describe all the causes presented in the group. (10 minutes)
6. Once all groups have completed their analysis, ask the spokesperson for each group to share the results of their analysis with the entire class.

Possible Discussion Questions
In small groups, please discuss the following questions. Please be prepared to share your thinking with the class.
• What was the problem that the second-grade team was trying to solve?
• What assumptions did the second-grade teachers make about the work process or organizational system that influenced the formulation of the problem they were trying to solve?
• How did the assumptions bear on the selection of change ideas included in each PDSA cycle?
• How would you have defined the system and selected change ideas?


Conclusion

Throughout this chapter, I have focused on concepts that must be foregrounded in order for students to understand and be successful with improvement science, particularly with PDSA cycles. Additionally, I have provided an illustrative case drawn from my own practice to demonstrate how teaching cases might be structured to support student learning. This chapter reflects my belief that teaching improvement science requires that instructors situate their pedagogy within realistic problems that compel novice users to identify both the failures in an improvement process and the skills necessary to facilitate it. Problem-based learning facilitated by rich and highly detailed cases enables users to acquire these skills within the safety of the classroom setting. In doing so, cases allow users to acquire the decision-making and analytic skills necessary to become effective users of improvement science.

References

Ahlstrom, J. (2014). How to succeed with continuous improvement: A primer for becoming the best in the world. McGraw-Hill.
Anderson, V., & Johnson, L. (1994). Systems thinking basics: From concepts to causal loops. Pegasus Communications.
Barrows, H. S. (1986). A taxonomy of problem-based learning methods. Medical Education, 20, 481–486.
Barrows, H. S. (1994). Practice-based learning: Problem-based learning applied to medical education. Southern Illinois University School of Medicine.
Bolzan, N., & Heycox, K. (1998). Use of an issue-based approach in social work education. In D. Boud & G. Feletti (Eds.), The challenge of problem-based learning (2nd ed., pp. 194–202). Kogan Page.
Boud, D., & Feletti, G. (Eds.). (1998). The challenge of problem-based learning. Kogan Page.
Bridges, E. M. (1992). Problem based learning for administrators. ERIC Clearinghouse on Educational Management.
Bridges, E. M., & Hallinger, P. (1995). Implementing problem-based learning in leadership development. ERIC Clearinghouse on Educational Management.
Bridges, E. M., & Hallinger, P. (1996). Problem-based learning in leadership education. New Directions for Teaching and Learning, 68, 53–61.
Bryk, A. S., Gomez, L. M., Grunow, A., & LeMahieu, P. G. (2015). Learning to improve: How America's schools can get better at getting better. Harvard Education Press.
Downey, C. J., Steffy, B., English, F. W., Frase, L. E., & Poston, W. K. (2004). The three-minute classroom walk-through: Changing school supervisory practice one teacher at a time. Corwin Press.
DuFour, R., & Eaker, R. (2009). Professional learning communities at work: Best practices for enhancing students' achievement. Solution Tree Press.
Fixsen, D. L., Naoom, S. F., Blase, K. A., Friedman, R. M., & Wallace, F. (2005). Implementation research: A synthesis of the literature. University of South Florida.
Fountas, I., & Pinnell, G. S. (2016). Guided reading: Responsive teaching across the grades (2nd ed.). Heinemann.
Higgins, L. (1994). Integrating background nursing experience and study at the postgraduate level: An application of problem based learning. Higher Education Research and Development, 13(1), 23–34.
Kim, S., Phillips, W. R., Pinsky, L., Brock, D., Phillips, K., & Keary, J. (2006). A conceptual framework for developing teaching cases: A review and synthesis of the literature across disciplines. Medical Education, 40, 867–876.
Kurtz, S., Wylie, M., & Gold, N. (1990). Problem-based learning: An alternative approach to legal education. Dalhousie Law Journal, 13(2), 797–816.
Langley, G. J., Moen, R. D., Nolan, K. M., Nolan, T. W., Norman, C. L., & Provost, L. P. (2009). The improvement guide: A practical approach to enhancing organizational performance. John Wiley & Sons.
Meadows, D. H. (2008). Thinking in systems: A primer. Chelsea Green Publishing.
Meltzer, D., Chung, J., Khalili, P., Marlow, E., Arora, V., Schumock, G., & Burt, R. (2010). Exploring the use of social network methods in designing healthcare quality improvement. Social Science & Medicine, 71(6), 1119–1130.
Merchand, J. E. (1995). Problem-based learning in the business curriculum: An alternative to traditional approaches. In W. Gijselaers, D. Tempelaar, P. Keizer, E. Bernard, & H. Kasper (Eds.), Educational innovation in economics and business administration: The case of problem-based learning (pp. 261–267). Kluwer.
Metz, A., & Bartley, L. (2012). Active implementation frameworks for program success. Zero to Three, 32(4), 11–18.
Senge, P. M. (2006). The fifth discipline: The art and practice of the learning organization. Currency.
Tyack, D. B., & Cuban, L. (1995). Tinkering toward utopia. Harvard University Press.


Appendix: Illustrative Teaching Case

Assume you are working with a team of second-grade teachers. The teachers have various levels of experience, and their classrooms include students at varying reading levels. One classroom serves mostly high-performing (i.e., gifted) students, and one classroom predominantly serves students who qualify for special education or who receive support as English-language learners. The remaining classrooms serve a variety of students with different reading levels. Prior to working with you, the teachers had no previous experience with improvement science and were largely unfamiliar with the concept of designing a plan-do-study-act (PDSA) cycle. You have been hired to help them improve their reading achievement, which is one of the school's improvement goals for the current academic year.

In your first meeting with the teachers, they explain that reading achievement has declined in recent years as measured by the state's annual assessment of student learning. The teachers note that there is considerable variation in student outcomes, and wide achievement gaps have emerged in their recent assessments. The teachers do not believe it is an instructional issue, as the instructional process is relatively similar across classrooms, and the district has recently required teachers to follow common practices in reading. Per the district's guidelines, all teachers must use the approved reading curriculum, adopt assessments from the curriculum, and provide classroom instruction that is differentiated by three student performance levels (i.e., low, mid, and high). All teachers participate in a weekly 90-minute professional learning community meeting, which allows them to work with an on-site instructional coach. The coach supports instructional decision-making and provides professional development. The district encourages teachers to use Fountas and Pinnell's (2016) Guided Reading Program, which includes assessments at the beginning, middle, and end of the year to determine each student's reading level.

Teachers express a strong desire to use improvement science to understand and respond to the cause of their students' poor reading
performance. To develop such an understanding, the teachers complete a root cause analysis (RCA), which includes the completion of a fishbone diagram. Through this activity, the teachers generate a series of possible hypotheses that focus mostly on the child's reading behavior and, to a lesser extent, on their own instructional practice. After completing their fishbone diagram, the teachers observe that students have difficulty in reading because they cannot read for a sustained period of time with progressively more complicated texts. Teachers observe that some of their lowest-performing students struggle with basic phonics. Finally, teachers perceive that classroom behavior issues and off-task behaviors also contribute to the students' reading difficulties. In particular, they note that many of their higher-performing students are distracted by the off-task behaviors of students who are struggling with reading. Table 4.2 presents the results of their RCA, with categories listed as a header and causes listed below.

Table 4.2. Illustrative Root Cause Analysis

Rigor
• Work is too difficult, above reading level
• Kids are expected to do more than ever before
• Rigor within lessons is lacking
• Too rigorous for students

Personnel Support
• Students are not being challenged at their just right spot
• Longer reading instruction for students in Title 1
• Lack of support in classrooms during reading
• Instructional times build around support (Title, HOSTS, LAP, etc.)

Scaffolds & Curriculum Supports (What?)
• Our scaffolds/supports are not accessible to students
• Including the use of visuals in all areas of instruction
• Plan scaffolding as a team to roll out for all levels
• Students need supports with an expiration date
• Lack of supports

Lessons (How?)
• A new reading curriculum was adopted
• We are teaching with materials that are new to us (teachers), so our comfort level may still be low/growing, which may be causing us to be less engaged or motivated with using them
• Our new reading materials do not offer much in terms of anticipatory sets or connection to prior knowledge (or previous learning)—so lack of a "hook"
• Students are not engaged in lessons
• Lessons are not engaging

Instructional Strategies
• Not sure how to address gaps in instruction
• Lack of background knowledge
• Tailored instruction (connections) missing
• Students may not have sufficient background knowledge to aid them in both interest and comprehension of the shared text
• Understanding how to answer questions in book; test-taking strategies
• Lack of experiences—prior knowledge—hard for them to understand
• Not sure how to navigate a textbook or follow directions
• For some students, writing is already tedious, so the task of answering comprehension questions in complete sentences (orchestrating writing with skill application) seems daunting
• Academic vocabulary is low
• Lack of strong phonics instruction
• Students aren't coming in with enough background knowledge in reading

Time
• Time needed for actual reading and writing instruction
• More 1:1 time with each student

Articulating a Theory of Improvement

Given their initial understanding of the problem, the teachers formulate an aim statement using Fountas and Pinnell as their outcome measure. This statement serves as the focus of their theory of improvement (i.e., driver diagram), and it states: By spring 2022, 80% of second-grade students will read at grade level as measured by Fountas and Pinnell. To this, teachers identify four primary drivers that they assume will increase their students' performance. The drivers relate to whole group instruction, small group instruction, phonics instruction, and teacher learning. In addition, they determine that secondary drivers relate to instructional scaffolds (e.g., supports for students), the student's reading stamina, and intentional use of the new reading curriculum. Related to this, the teachers generate a series of change ideas. These change ideas include anchor charts, visual cues, use of a reading timer, incentives for reading growth, peer-to-peer observations, as well as opportunities to look through the reading curriculum. Figure 4.2 presents their driver diagram.

[Figure 4.2. Illustrative driver diagram developed by second-grade team. The diagram links the aim statement (By Spring 2022, 80% of second-grade students will read at grade level as measured by Fountas & Pinnell) to the primary driver Classroom Instruction; to the secondary drivers whole group instruction, small group instruction, phonics instruction, and scaffolds and supports; and to the change ideas reading timer, rewards system, book selection quick checks, anchor charts, and phonics review lessons.]
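For instructors who want students to manipulate a theory of improvement directly, the structure of Figure 4.2 can be captured as plain data. The sketch below is my own illustration, not part of the chapter; it simply encodes the aim, primary driver, secondary drivers, and change ideas shown in the figure and prints them as an outline that students might extend as they revise the team's theory.

# A minimal sketch (not from the chapter) that encodes the content of
# Figure 4.2 as nested data and prints it as an outline.
driver_diagram = {
    "aim": ("By Spring 2022, 80% of second-grade students will read at "
            "grade level as measured by Fountas & Pinnell."),
    "primary_drivers": {
        "Classroom Instruction": [
            "Whole group instruction",
            "Small group instruction",
            "Phonics instruction",
            "Scaffolds and Supports",
        ],
    },
    "change_ideas": [
        "Reading Timer",
        "Rewards System",
        "Book Selection Quick Checks",
        "Anchor Charts",
        "Phonics Review Lessons",
    ],
}

# Walk the diagram and print a simple outline.
print("AIM:", driver_diagram["aim"])
for primary, secondaries in driver_diagram["primary_drivers"].items():
    print("  Primary driver:", primary)
    for secondary in secondaries:
        print("    Secondary driver:", secondary)
for idea in driver_diagram["change_ideas"]:
    print("  Change idea:", idea)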

Teachers’ First Plan-Do-Study-Act (PDSA) Cycle Eager to begin their work, teachers decide to introduce anchor charts with visual queues aligned to key reading concepts associated with fiction text. The teachers perceive that this idea is small and easy to introduce as part of their routine practice. Moreover, they feel it could be tested across a 3-week period using the formative assessments they have recently adopted from their district’s required curriculum. They predict that the anchor charts will increase student performance on their formative assessments and that this will lead to an improvement on the Fountas and Pinnell assessment. They introduce the PDSA to all students in all classrooms without the

86

Teaching Improvement Science in Educational Leadership

use of a control group. One teacher develops the anchor charts for the entire team. Each teacher tapes the chart to the students’ desks. Importantly, the concepts presented on the anchor chart align with the material presented in the curriculum for each week. In the first week of their PDSA, they introduce the anchor charts to the students and monitor the students’ performance. The teachers administer a formative assessment at the end of the week and record their scores in a Google Spreadsheet after each administration. At the end of the 3-week period, they compare the results of their formative assessments over time and to those achieved before the anchor chart was introduced. They conclude that the anchor chart has improved their students’ performance on the formative assessments as scores have steadily increased. Anecdotally, they also perceive that students are excited by the charts and more attentive in their whole group reading lessons. They think they’re onto something!
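Because the teachers' conclusion rests on comparing weekly scores with what came before, it can help students to see that comparison made explicit. The following sketch is my own illustration, not part of the case, and the numbers are hypothetical placeholders. It also makes the confound discussed earlier easy to demonstrate: because each week's formative assessment covered new content, a rising trend here could reflect the new material rather than the anchor charts.

# Illustrative sketch with hypothetical placeholder scores (not data
# from the case): compare weekly formative-assessment medians against
# a pre-change baseline, in the spirit of a simple run chart.
from statistics import median

baseline_scores = [62, 58, 65, 60]   # hypothetical pre-change scores
weekly_scores = {
    "Week 1": [63, 61, 66, 70],      # hypothetical post-change scores
    "Week 2": [68, 64, 71, 73],
    "Week 3": [70, 69, 74, 76],
}

baseline = median(baseline_scores)
print(f"Baseline median: {baseline}")
for week, scores in weekly_scores.items():
    m = median(scores)
    status = "above" if m > baseline else "at or below"
    print(f"{week}: median {m} ({status} baseline)")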

Teachers’ Second Plan-Do-Study-Act (PDSA) Cycle Given their success, teachers were eager to introduce another change in their practice. They decide to add the anchor charts to their small group instruction. They hypothesize that the additional exposure will help students make connections between whole group and small group instructional activities. Further, they determine that teaching the concepts on the anchor chart a second time will likely help students who did not show improvement in the previous PDSA cycle. Again, the teachers intend to use their formative assessments to monitor student performance. Unlike the first time, however, the results on their formative assessment did not improve dramatically. Many students performed at or near the same level from week to week. A few students actually performed worse! This led the teachers to conclude their change was not an improvement.


Teachers’ Third Plan-Do-Study-Act (PDSA) Cycle Frustrated with their results, the teachers grudgingly decide to add further changes to their small group activities. This time, they focus specifically on phonics instruction and use a different assessment from their weekly formative assessments. They also decide to accentuate phonics instruction only for their lowest-performing students in their small groups. This decision was informed by an analysis of midyear scores reported on the district’s Fountas and Pinnell (2016) reading assessment as well as a recent administration of the Dynamic Indicators of Basic Early Literacy Skills (DIBELS) Nonsense Word Fluency Assessment. Results from both assessments led teachers to conclude that limited understanding of phonics could possibly explain their students’ poor performance. In their third PDSA, teachers introduce phonics instruction in their small groups for their lowest-performing students in short, 5-minute lessons. The teachers decide to abandon their use of formative assessments and instead measure their students’ performance using a DIBELS Nonsense Word Fluency Assessment. Over a 3-week period, teachers work with students to identify letter sounds with written words and emphasize blends as part of their short small group lessons. They administer the DIBELS Nonsense Word Fluency Assessment at the end of each week. At the 3-week’s end, they note that student performance has not improved and that their efforts appear to be losing traction.

chapter five

Embedding Improvement Science in One Principal Licensure Course
Principal Leadership for Equity and Inclusion

SUSAN P. CARLILE

Portland State University

DEBORAH S. PETERSON
Portland State University

Abstract

Improvement science provides a comprehensive change process that increases the likelihood educators will reduce disparities for English learners (ELs) in classrooms. To address persistent disparities, mandates for continuous improvement in educational outcomes for EL students have been embedded in Oregon's newly redesigned state standards for principal preparation. These new standards require an enhanced focus on equity, cultural leadership, equity-focused instruction, equitable distribution of resources, partnerships, advocacy, welcoming environments for families, and a relentless focus on clinical practice that supports the success and well-being of each student. These new standards align with the six principles of improvement science:

1. See the system.
2. Embrace measurement.
3. Learn through disciplined inquiry.
4. Organize as networks.
5. Be problem-focused and user-centered.
6. Attend to variability.

Focusing on this alignment, instructors in the principal licensure program at Portland State University (PSU) created a new course—Principal Leadership: High Leverage Practices for Linguistically and Culturally Diverse Students and Families. This new course reflects the new standards and embeds improvement science throughout. In addition, the course embeds the "high leverage practices" of the Collaboration for Effective Educator Development, Accountability, and Reform (CEEDAR) to ensure dually identified students are supported in schools. This chapter examines the collaborative revision process developed by the administrator program leadership team as we increased our capacity to deepen and broaden use of improvement science to prepare effective, equity-focused principals. We describe how the course reflects the goals of the program and where it is situated in the curriculum map of the full program redesign. We share the EL curriculum we designed, including the focus on high leverage instructional EL practices.

Background

In fall 2019, four members of the Education Leadership Policy Program (ELP) team met in a small office with poster board, brightly colored sticky notes, and fragrant markers in hand. Our mission was to begin the complex and lengthy process of redesigning the principal licensure program at PSU. Longtime team members experienced in school change, equity-based reform, policy, curriculum mapping, assessment, and improvement science were joined in this effort by representatives from the special education and curriculum and instruction departments, local superintendents, policy leaders, and influential leaders of color. Realizing we were not alone in looking for ways to improve principal education, our goal was to borrow from our experienced and successful colleagues (Haas & Brown, 2019) and include their work in our planning.

By the end of the first month, the team had reviewed the newly developed Teacher Standards and Practices Commission (TSPC) Principal Licensure Program Standards, the TSPC School Leadership Standards, the TSPC English Language Learner Standards, and the CEEDAR High Leverage Best Practices. The team had also created a program of study and a curriculum map. From the initial conversation, our problem of practice was that a continuous change process which addresses the needs of students and families and which meets the 2022 TSPC Principal Licensure Standards and EL Standards is not currently embedded throughout the PSU principal preparation program. This problem of practice centered all efforts on improvement science as the continuous change methodology that would guide both the process and content of our work. Critical to this foundation was the creation of a new prerequisite course, Improvement Science for School Leaders, which would increase the likelihood that our students could immediately apply improvement science methodology in their schools and make a stronger connection to it in each subsequent course in the program. We recognized that the growing body of research shows that curricular redesign improves academic outcomes (Yamada & Bryk, 2016) and that curriculum in gateway courses, aligned with subsequent expectations, significantly influences achievement throughout a program (Matthews & Newman, 2017).

Our voice for educational equity and our focus on improvement science were never stronger than in the creation of a new principal licensure course that focused specifically on EL proficiency. This course, created to increase understanding of the current state of a district's EL program and to identify the knowledge, skills, and abilities in language, culture, planning, implementing and managing instruction, assessment, professionalism, and technology needed by EL students, including dually identified ELs, was significantly influenced by the history and context of the university where we work.


Our Working Theory: The Redesign Process

From our initial discussions, the group developed a root cause analysis and a theory of improvement anchored by the aim statement: By June 2023, PSU principal candidates will be capable of leading continuous improvement efforts that strengthen the ability of schools and districts to provide EL-identified students an education that enables them to succeed. Our aim statement, drivers, and strategic actions are described in Table 5.1.

Table 5.1. Aim Statement, Key Drivers, and Strategic Actions

AIM: By June 2023, PSU principal candidates will be capable of leading continuous improvement efforts that strengthen the ability of schools and districts to provide EL-identified students an education that enables them to succeed.

Key driver: Integrate the discipline of improvement science tools and methods into the preparation and ongoing development of school and district leaders.
Strategic action: Focus attention on the six high leverage elements of the developmental progression parameters.

Key driver: Target high leverage problems of practice.
Strategic action: Build and share knowledge on common "weighty, wicked" problems of practice.

Key driver: Develop and maintain mutually beneficial district/university partnerships.
Strategic action: Seek input from PSU departments within the College of Education (e.g., special education, curriculum and instruction) and local districts.

Key driver: Anchor clinical practice as the foundation of the course, with content as a support for the practical learning experience.
Strategic action: Develop relationships with schools and district personnel to support authentic, field-based experiences.

Key driver: Analyze the state of the district's/school's current practice and use the data for improvement.
Strategic action: Probe for EL program strengths and weaknesses with equity audits and empathy interviews extended specifically to collect data on EL programs and students.

Development Progression of Improvement Science

Planning was anchored by the six high leverage elements of the developmental progression of improvement science (LeMahieu et al., 2017). We mapped the Principal Licensure Program Standards, School Leadership Standards, English Learner Standards, and CEEDAR High Leverage Best Practices to the course activities and goals described in Table 5.2.

Table 5.2. Six High Leverage Elements of the Developmental Progression of Improvement Science Mapped to the Principal License Standards, School Leadership Standards, EL Standards, and CEEDAR Best Practices

See the system
Mandates in Standards: Require that administrators examine the school environment and their own cultural biases, determining if exclusionary factors for ELs exist and if the school professionals possess the skills and dispositions to make culturally and linguistically appropriate decisions about curriculum, instruction, and assessment. Keep current with new instructional techniques, research results, advances in the EL field, and public policy issues; candidates use such information to reflect upon and improve instruction. Demonstrate knowledge of the history of EL teaching as well as policies and laws regarding EL teaching practices.
EL Course Goals/Activities: Theory of improvement (root cause analysis; equity audit and empathy interview; problem of practice; aim statement; driver diagram with primary and secondary drivers; PDSA cycle). EL Historical Timeline project. Change ideas. Action plan for distributing resources and managing school operations, in the equity audit action plan. Platform paper synthesizing leadership to provide systems that support EL learners.

Embrace measurement
Mandates in Standards: Use data that are timely, fine-grained, and focused on EL subgroup performance. Implement multiple and varied assessments that allow learners to demonstrate knowledge of content regardless of language proficiency level.
EL Course Goals/Activities: Equity audit of EL programs in their district and school culture. EL and bilingual data analysis and school plan of action. Use data from the equity audit, the empathy interview, and/or book study assignment to collaborate with teachers, families, and/or students to design and implement a plan for increasing culturally responsive practices that enhance the educational experience of families and students receiving EL services. Use principles of improvement science when measuring the effectiveness of your change idea.

Learn through disciplined inquiry
Mandates in Standards: Rapid, disciplined cycles of testing in the field-based environment through clinical practice.
EL Course Goals/Activities: Theory of improvement: PDSA cycles.

Organize as networks
Mandates in Standards: Collaborate with the university and local districts.
EL Course Goals/Activities: Participate in a school- or district-level field-based experience that is extensive and focused on equity, diversity, and inclusion and that shows significant internal growth during the clinical experience.

Be problem-focused and user-centered
Mandates in Standards: Examine root causes through the eyes and voices of EL students, teachers, and families and in collaboration with each specific group.
EL Course Goals/Activities: Root cause analysis. Empathy interview of an EL student, family member, or colleague and a reflection paper regarding a possible problem of practice in their clinical practice and implications for school leadership. Theory of improvement.

Attend to variability
Mandates in Standards: Improve reliability for specific subgroups of EL students and families and in many different organizational and community contexts.
EL Course Goals/Activities: Identify alignment among three important documents: High-Leverage Practices, High-Leverage Practices in Special Education, and Promoting Principal Leadership for the Success of Students with Disabilities. Teacher observation practice using CEEDAR High Leverage Practices and culturally responsive instructional strategies.

Lesson Design

Following the guidelines in the syllabus template adopted by the PSU College of Education and the information in Table 5.2, the team developed a course catalogue description and a graph describing the course learning outcomes, standards, and assessments; identified required texts; developed a course schedule; and wrote directions for each of the six assignments, including purpose, procedure, resources, and an assessment rubric. Assignments included the following:

1. English Language Learner Literature Review
2. Key Events in the History of English Language Learners
3. Instructional Leadership for Equity: English Language Learner Classroom Observations
4. English Language Learner Equity Audit Infographic
5. English Language Learner Empathy Interview
6. English Language Learner Platform

New Learning

Current redesign efforts were significantly influenced by our curriculum development over the past 8 years, which focused directly on reducing educational disparities based on race, ethnicity, home language, and socioeconomic status. We learned that our assets included collaboration within the department and in local districts, professional development of faculty and candidates, culturally responsive recruitment, and curriculum revision that centered on the needs of culturally and linguistically diverse learners. We also learned that our past success using improvement science as a model for meaningful, sustainable, democratic curriculum that included teacher and student voice and enhanced the dignity of students and teachers in both the content and the field-based experience (Carlile & Peterson, 2019) served us well. In addition, the viability of past successes in other areas, including grants and ongoing efforts to increase the percentage of principal licensure graduates of color, increased our efficacy. For example, when we increased the number of principal licensure graduates of color from 14% to 46% over 5 years, we enhanced the voices for underrepresented students (Oregon Educator Equity Report, 2019). These initial stages, called the chartering and network phases (LeMahieu et al., 2017), included identifying root causes, collecting equity data, writing a problem of practice, identifying change ideas, and seeking input from our networks. The six principles of improvement science helped us to map the Principal Licensure Program Standards, School Leadership Standards, English Language Learner Standards, and CEEDAR High Leverage Best Practices to the course activities.


Critical new learning emerged in the spreading phase (LeMahieu et al., 2017). It was during this time that our course redesign efforts were forwarded to the College of Education faculty for vetting. By university policy, colleagues evaluated the curriculum in this course and subsequently required or requested changes. All but one of the reviewers had never been K–12 educators, had never served as practitioners or as professors in the licensure programs, and were not familiar with or did not recognize the efficacy of improvement science as an improvement methodology. We learned that we could not assume support for or understanding of improvement science tools, such as the PDSA tool, or other data-gathering tools, such as the equity audit. Additionally, considerable debate ensued between professors who believed school administrators should excel in traditional research and demonstrate skills in conducting research and professors who believed school leaders should be able to use data and research to lead improvement efforts. Based on professional respect for our colleagues, we embraced the opportunity to improve our syllabus but did not embrace the critique to teach traditional research methodology in our principal licensure program. Upon receiving written feedback on this syllabus, we incorporated small group discussions, one-on-one phone calls, Zoom conferences, and frequent check-ins with our department chair, and we displayed our willingness to respond positively to critiques. This process demanded considerably more clarification and collaboration with our colleagues than we had anticipated. After making all department-required revisions, the course was approved, and we then submitted it to the program and policy committee. This committee required additional changes in the structure of the syllabus, the point values for each assignment, and the language regarding our commitment to equity. In the 4 months since our initial presentation of this syllabus to faculty, we have collaborated with reviewers, made required changes, and recently received approval to advance the course to the entire College of Education faculty for input and revision prior to approval for advancement to the Graduate Studies Office.

In retrospect, it might have been possible to predict that scaling up our improvement—that is, "addressing the conditions that make improvements permanent, thus institutionalizing them" (LeMahieu et al., 2017, p. 19)—was itself a process worthy of consideration. In fact, Hayagreeva "Huggy" Rao, the Athol McBean Professor of Organizational Behavior and Human Resources in the Graduate School of Business at Stanford University, advises that it is precisely in the scaling up of promising interventions that many of them fail (Rao, 2016). Rao poses a question about how a team spreads an innovation without screwing it up, surmising that overconfidence is related to how smart you think you are (Rao, 2016). Our team was confident when we finished our redesign. We had designed a course that included stakeholders, research, and experts, and we were done. Things looked calm. However, when we sought approval from our departmental colleagues, we experienced significant pushback on both the process and the design. Looking back, we might have used a "premortem" process (Rao, 2016), which includes randomly dividing a group of collaborators into two groups, one of which imagines the scaling initiative failing and the other imagining it succeeding. To inform our next move, we might have developed a flowchart to describe how success and failure might occur and how we might facilitate our next move. We could learn (Rao, 2016) to spend as much time planning the marriage as we spent planning the wedding. Fortunately, we did begin our redesign work 3 years in advance of the state-required program changes. While many believed we were working too early on the changes, those of us who know that higher education faculty can move at the speed of shifting plate tectonics (Cassuto, 2016) created a timeline and completed our work such that the university faculty and committees would have 2 years for the syllabus approval process.

Recommendations for Faculty

For other institutions considering or being required to make program or syllabus changes, we make the following recommendations:

• Address state and federal mandates for enhanced use of data and evidence by incorporating improvement science technologies that are intentionally and relentlessly focused on equity and culturally relevant practices.


• Think of improvement science as a multifaceted "technical enterprise" (Biag, 2019, p. 111) with the capacity to address systemic oppression when its tools are centered on equitable outcomes for historically underserved children and families.
• Promote inclusion and collaboration by actively and consistently seeking input from representatives of key constituents, including university colleagues and those in local educational districts.
• Recognize that the curriculum in gateway courses, such as a prerequisite course in the basics of improvement science, will provide a foundation for the integration of improvement science in all subsequent learning: course learning outcomes, assessments, texts, and equity-based activities in both the initial and advanced administrative licensure programs.
• Include in the chartering phase of the redesign a review of program, department, and university course revision guidelines, factoring in an examination of timelines, expectations, communication, and the discrete and often conflicting, vague, and out-of-date requirements. While we created a 3-year plan and solicited this information prior to beginning our process, we noticed that some of our colleagues were unavailable, not responsive, reluctant to share this information, or lacking complete information, which is understandable in a large, complex university. Our goal was to work with compassion and understanding in all of these scenarios and to move forward with the best information available at the time we made decisions. Each of us on the program redesign team has led complex schools and departments within educational organizations and was able to nimbly and quickly adjust to a changing landscape.
• Anticipate challenges related to the sociopolitical aspects of designing curriculum based on improvement science methodology. Members of the university and program teams responsible for vetting the course redesign may be unfamiliar with IS or may have an uneven or incomplete understanding of it, and they will require enhanced communication and more collaboration than was experienced in past redesign efforts.


• Recognize in the chartering phase that colleagues in different departments may be wedded to traditional research methods and may have significant difficulty releasing themselves from the methods with which they have become most comfortable. Even though those research methods have failed to reduce educational disparities among the most vulnerable in our schools, changing the perspective and mindset of our esteemed colleagues presented more challenges than we anticipated.
• In the spreading phase, consider adopting a "prediction" process that helps the team identify the ways the new curriculum might not be ready for prime time; that is, there may be unforeseen areas of concern that could be addressed before the launch.
• Know that the revision will take longer, be more complicated, and test the best communication skills, patience, and goodwill of everyone on the team. On our team, our mutual support of one another, our internal recognition of the professional sacrifices our team members made to complete our work, and our belief that the resulting changes would significantly improve principal preparation in our state, an area of our considerable expertise, made our internal process much smoother.

Conclusion

Though the new principal preparation redesign, including the new course focused on EL learners, will not be implemented until fall 2022, PSU is currently piloting portions of the curriculum and is committed throughout the intervening months to strengthening university–district partnerships, advancing improvement science as an equity-based change process, forming networked improvement communities, and preparing our graduates to meet the needs of English learners. These are key considerations for meeting the new TSPC and EL Standards, reflecting the CEEDAR High Leverage Practices, and embedding best instructional practices into clinical practice. Most importantly, this work advances social justice initiatives and interrupts educational disparities in our English learner communities.

A final and important consideration is how we implement these improvements effectively, reliably, and to scale. We know that improvement science helps us prepare future leaders to embrace their own expertise, fosters their understanding of systemic oppression, and provides tools for continuous improvement. We also know that curriculum revision requires building that same knowledge among our university colleagues, who may not yet know how improvement science can contribute to or replace traditional, perhaps repressive, research methods.

References

Biag, M. (2019). Improvement science in equity-based administrative practicum redesign. In R. Crow, B. N. Hinnant-Crawford, & D. T. Spaulding (Eds.), The educational leader's guide to improvement science: Data, design and cases for reflection (pp. 91–125). Myers Education Press.
Carlile, S. P., & Peterson, D. S. (2019). Improvement science in equity-based administrative practicum redesign. In R. Crow, B. N. Hinnant-Crawford, & D. T. Spaulding (Eds.), The educational leader's guide to improvement science: Data, design and cases for reflection (pp. 197–216). Myers Education Press.
Cassuto, L. (2016). What will doctoral education look like in 2025? The Chronicle of Higher Education. http://chronicle.com/article/What-Will-Doctoral-Education/234666
Haas, E. M., & Brown, E. (2019). Supporting English learners in the classroom: Best practices for distinguishing language acquisition from learning disabilities. Teachers College Press.
LeMahieu, P., Grunow, A., Baker, L., Nordstrum, L., & Gomez, L. (2017). Networked improvement communities: The discipline of improvement science meets the power of networks. Quality Assurance in Education, 25(1), 5–25.
Matthews, R. S., & Newman, S. (2017). Chief academic officers and gateway courses: Keys to institutional retention and persistence agendas. New Directions for Higher Education, 180, 63–75.
Oregon Educator Equity Report. (2019). https://www.oregon.gov/eac/Documents/2019%20Educator%20Equity%20Report.pdf


Rao, H. (2016, March 23). Scaling excellence [Conference session]. Carnegie Summit on Improvement in Education, San Francisco, CA.
Yamada, H., & Bryk, A. S. (2016). Assessing the first two years of effectiveness of Statway: A multi-level model with propensity score matching. Carnegie Foundation for the Advancement of Teaching.

chapter six

Embedding Improvement Science in Principal Leadership Licensure Courses
Program Designs

DEBORAH S. PETERSON
Portland State University

SUSAN P. CARLILE

Portland State University

MARIA EUGENIA OLIVAR
Hillsboro School District, Oregon

CASSANDRA THONSTAD
Newberg School District, Oregon

Abstract

In this chapter we will share how the faculty in the principal licensure program at Portland State University integrated improvement science (IS) into its principal licensure program courses in response to the state's new school administrator licensure standards that emphasize improvement. We will share the timeline and our successful strategy for collaborating among the program faculty to complete the redesign work. Of course, any change process includes frustrations and barriers, and we will also share these in the hope that they will inform future improvement processes. One of our goals while revising our programs was to ensure that all improvement efforts focused on equity, culturally responsive leadership, and inclusion of students receiving special education services. We close the chapter with the experiences of seated administrators who learned IS in our licensure programs and find IS to be an empowering and effective change strategy.

Background

How best to achieve racial equity in education is the subject of intense national, state, and local discussion. Oregon's Department of Education (ODE) has implemented several strategies to overcome its racist history and educational disparities, with specific initiatives aimed at supporting racially and linguistically diverse K–12 students. State initiatives include a focus on migrant education, English learners, youth with immigrant history, Native American education, and African American/Black student education (ODE). Oregon officially endorses several strategies for eliminating educational disparities: high expectations, leadership and focus, accountability, professional development, and family and community engagement. Related to leadership, ODE (2020) notes:

A successful school leader is a strong educator and communicator with a powerful, clear focus on achieving academic success. Leadership begins with a principal, but it is not limited to the person at the top. At the best schools, leadership is systemically shared by all educators and stakeholders. (para. 3)

The PSU principal preparation faculty believe that IS is the best strategy for sharing leadership among all the stakeholders in a school or district and for improving educational outcomes for children of all cultural and linguistic backgrounds. Further, we believe IS is not new or a "fad"; rather, IS "build[s] upon foundational concepts of esteemed educational philosophers John Dewey (1990), Paolo Freire (1993) and Michael Fullan (2011, 2013)" (Peterson & Carlile, 2019, p. 167). We believe that authoritarian solutions such as directives to "increase family engagement by having more families come to math nights" or "the only data you may consider during
improvement efforts is attendance" from central office leaders or principals stall or stop improvement efforts by disregarding the "expertise of our families, students, and teachers and the funds of knowledge they bring to our schools" (p. 169).

Context of Using IS at PSU

In our effort to understand the potential use of IS in principal preparation programs, our small team of professors engaged in a 2-year learning process with the encouragement of our dean and chair. The process included identifying numerous professional development activities that each faculty member chose whether or not to engage in; faculty also had the right to decline using IS. Some did, in fact, decline. Other faculty members who chose to explore IS used their university-allotted professional development funds and, in some cases, the dean's discretionary funds. A full description of this exploration phase is found in The Educational Leader's Guide to Improvement Science (Peterson & Carlile, 2019, pp. 168–175).

Next, we piloted teaching improvement through a year-long IS project with interns in our principal preparation program. In this 3-year pilot, interns learned about IS, engaged in personal improvement projects, and reflected on how IS could increase equity in their schools. Interns then worked within their schools and with their stakeholders to identify a problem of practice and then complete fishbone analyses, driver diagrams, and root cause analyses. Interns next led an equity improvement effort with several plan-do-study-act (PDSA) cycles and concluded the year with a presentation on their work and the implications for leading for equity. This 3-year pilot gave us information about how to adjust our teaching to support intern use of IS. This learning phase cemented our belief that IS is an empowering improvement strategy for school leaders.

New Principal Licensure Standards

In the winter of 2019, the state licensing agency completed new rules regarding principal preparation program and school administrator standards, with implementation required within 3 years. The budget crisis common among institutions of higher education in this nation was also present at our university. Two colleagues received release time equivalent to 150 hours of work; two other colleagues volunteered their service, research, publication, and previously earned doctoral release time to the project. What began in the summer with two professors and our program coordinator meeting for 10 hours to craft a plan and process for improving our program became an 8-month redesign process with four team members. Our redesign efforts resulted in increasing and revising the course requirements in the principal licensure program, adding additional clinical practice experiences, creating three new courses, and significantly redesigning the remaining courses to reflect the new standards.

After agreeing on our process and work plan, we examined the new standards. We were curious whether we would find reference to IS as an improvement strategy. We did not; however, throughout the program and administrator standards we did find terms and concepts of IS. For example, the scope and responsibilities of school administrators include "Support the continuous improvement and capacity of the school administrator profession" (Scope and Responsibilities of School Administrators, 2019). The school administrator standards include references to "improvement" in five of the ten standards, and one standard focuses solely on improvement (Oregon School Administrator and English Learner License Standards, 2019). The School Administrator Standards in the State of Oregon then take one additional step to ensure it is clear that school leaders must have the skills and mindset to focus on improvement. Standard 10 reflects the basic principles of IS and also includes a social justice orientation. Peterson (2017) defines social justice as

an orientation that includes both a goal and a process (Adams et al., 2016) in which the dignity of each person's unique identity—including the intersections of their race/ethnicity, socioeconomic status, home language, national origin, sexual orientation, gender identification, ability, religion, or any other manifestation of diversity that advantages or disadvantages a person by virtue of membership in that group (Gay, 2010)—is respected and enhanced. (p. 34)


The Process of Revising Our Program

We used the Oregon licensure standards to guide us as we determined what current curriculum must be protected and what we must change in order to prepare future leaders to reduce educational disparities. We identified a problem of practice: "Current PSU principal licensure curriculum does not consistently lead to equity-based administrator practice and does not align with the new state administrator licensure standards." Next, we analyzed root causes. As a result, we identified four drivers:

• Mapping curriculum to embed the standards within each course, ensuring all standards are met over the course of the program and that courses are developmentally sequenced
• Anchoring IS throughout the program with a gateway course in IS
• Redefining the role of the supervised clinical practice experience to enhance equity-based experiences
• Adding two new courses on dually identified English learners and students with special needs

With our program coordinator facilitating our collaborative work, we next identified the current and new courses for which each of us would create or redesign the curriculum. We collaboratively identified a process for determining content for the new courses and for creating curriculum alignment. This curriculum map included the course title in the vertical column and, in the horizontal row, the course outcomes tied to licensure standards, English learner standards, and high leverage practices identified by the Collaboration for Effective Educator Development, Accountability and Reform (CEEDAR). We reviewed our work to ensure that sequential courses in the program and the standards addressed in each were developmentally appropriate (Allen, 2004; Dottin & Voegele, 2020) and reflected our program's social justice orientation.

What became clear to us in those meetings was that once the curriculum map was in place, we would continue the work in small teams on all drivers simultaneously. Our team had experience as school leaders and professors in leading collaborative school improvement processes, centering our work on leadership for equity, curriculum planning, assessment for improvement, and working independently. To ensure a cohesive redesigned program, college leaders supported our strategy of reaching out to colleagues in curriculum and instruction and special education, working independently in small teams, and holding twice-monthly meetings to check in and share progress. We worked within our university's existing policies and guidance regarding equity and inclusion and then reached consensus as a team on what we would include and how we would communicate our beliefs. We agreed to reach consensus on every aspect of our redesign, including concepts that would be common across 22 courses in both the principal and district leadership programs:

• Culturally responsive methods of instruction overview
• Program philosophy and orientation to social justice
• Honoring and acknowledging the land and its peoples
• Culturally responsive student activities
• Diversity and inclusion perspective

The discussions and consensus were important foundations for later discussions when faculty and committees questioned our choice of words, and even that we included so many references to equity. Our team's strong consensus and deep beliefs in communicating our social justice orientation prevailed. During independent work time, it became clear that one prerequisite, or gateway, course would be significantly enhanced if we included explicit projects that gave interns the knowledge, tools, and experiences to lead improvement efforts by engaging stakeholders from diverse racial, ethnic, linguistic, and socioeconomic backgrounds, using data and evidence, and conducting improvement cycles. As Carlile and Peterson (2020) note in chapter 5:

The growing body of research indicates that curricular redesign improves academic outcomes (Yamada & Bryk, 2016) and that curriculum in gateway courses, aligned with subsequent expectations, significantly influences achievement throughout a program (Matthews & Newman, 2017).

Our new state standards required school leaders to use research in their context, with their stakeholders, by collecting meaningful data and engaging in improvement cycles to lead improvement efforts. This focus contrasted with our currently required course, "Principles of Educational Research and Data Analysis I," which includes the following course description:

Research paradigm; measurement and test characteristics; planning and evaluation; library resources; identifying research problems; planning research; types of research; research designs, central tendency, variability and relationships; sampling, sampling error, and hypothesis testing; crossbreaks; one, two, and multiple group, and multiple independent variable designs; computer applications; information systems.

Our new state standards required school leaders to use research in their context, with their stakeholders, by collecting meaningful data and engaging in improvement cycles to lead improvement efforts. This focus contrasted with our currently required course, “Principles of Educational Research and Data Analysis I,” which includes the following course description: Research paradigm; measurement and test characteristics; planning and evaluation; library resources; identifying research problems; planning research; types of research; research designs, central tendency, variability and relationships; sampling, sampling error, and hypothesis testing; crossbreaks; one, two, and multiple group, and multiple independent variable designs; computer applications; information systems.

The existing course description and curriculum do not reflect the new standards, nor does the curriculum prepare school leaders to effectively lead improvement efforts. Thus our team boldly recommended that the traditional course remain in our MA program but be removed from the principal licensure program. We created a new course focusing on improvement. Further, the team decided to embed IS in every course, requiring 360 hours of clinical practice (60 hours beyond the state requirement) and reflecting the new standards that explicitly stated school leaders should use research for two purposes: evaluation of teaching and developing the professional capacity of school personnel (Standard 6) and developing a professional community for teachers and staff (Standard 7) (Scope and Responsibilities of School Administrators, 2019). At a department meeting, we shared our team’s bold assertion that we needed to remove the traditional research course and replace it with a course reflecting new standards. This assertion was met with professional critique and healthy skepticism by several department faculty. While our team members were very experienced and skilled in navigating the sociopolitical context in our practitioner

110

Teaching Improvement Science in Educational Leadership

roles in K–12 public education, we were less experienced navigating within the context of higher education. Our inexperience contributed to obstacles shared later in this chapter, in “Lessons Learned.” The new course focused on continuous improvement processes that reflected new licensure standards and IS methodology. This course would be the first course in the principal licensure program, keeping clinical practice at the heart of our program as well as our mission of preparing effective, equity-focused leaders (Carlile & Peterson, 2019). We had experience providing field-based practicums where interns could directly apply new knowledge to their practice and increase student achievement. Now, interns would be prepared, in the initial phases of their licensure program, to facilitate yearlong, equity-focused school-based team projects that included the voices of students, teachers, and the community at their practicum sites. Another valuable aspect of the redesign project was that while our current program was very strong and equity-focused, we needed to improve how we prepared future principals to support the growth and development of teachers serving English learners and providing special education services. Our colleagues in curriculum and instruction and special education guided us as we created two new courses: (a) Principal Leadership: High Leverage Practices to Promote Inclusion and Equity and (b) Principal Leadership: Linguistically and Culturally Diverse Students and Families. In a time when most of our school leaders are White, middle class, and native English speaking and our K–12 student population is increasingly diverse, we must prepare leaders who can successfully lead and implement effective services for English learners and students receiving special education services (Haas & Brown, 2019), and who can appropriately evaluate special education teachers and English learner specialists (Holdheide et al., 2010). These two new courses thus embedded research on leadership for diversity and inclusion as well as IS tools and principles: collaborating with stakeholders to identify problems of practice; using data on EL and special education programs and student outcomes to identify a problem of practice; conducting an empathy interview with a student, family member, or colleague receiving or providing EL and special education services regarding a problem of

Embedding Improvement Science in Principal Courses

111

practice; and reviewing research identifying potential changes that could be discussed and tested in short PDSA cycles. While our redesigned program will not be implemented for 2 more years, we will continue to collect weekly and quarterly feedback from our students, alumni survey data, and informal feedback from former interns now serving as school leaders. We believe that while our current redesigned program reflects our best thinking and best work as of today, we are committed to continuous improvement and will not wait until the next state-mandated redesign to ensure our program effectively prepares future leaders for equity. In the next section, we share the experiences of two leaders for equity, Cassandra Thonstad and Maria Eugenia Olivar.

Principal Interns Learning How to Use IS in Schools

Cassandra Thonstad's Story

As an instructional coach in the Newberg School District in the 2015–2016 school year, I was part of the district-wide Teaching and Learning Council tasked with overseeing many aspects of a recently awarded grant through a statewide education advocacy organization. Together we learned about a way of thinking called "IS." We learned how to identify a true problem of practice, establish many complex contributing factors through a fishbone diagram, implement the PDSA cycle, and recognize the cyclical nature of refining the driver diagram based on what we learned. As a trained math teacher, I was drawn to this process. I could immediately see the strategy leading to lasting improvements and began utilizing the learning in my work as an instructional coach. Teachers appreciated the strategic adjustments and the methodical changes, and we began to see that our work was improving practices in the classroom.

When I entered the principal preparation program at PSU, the professors were just beginning to integrate IS practices into their courses. Professor Peterson made it clear that she was new to IS and would learn by doing, as suggested in the methodology. By integrating IS in one cohort, the change would start small, where we could learn, study, and adjust, with an option to scale up in future cohorts or in other programs in the department. Modeling this "learning by doing" mentality and "failing forward" allowed those in the program to join in wherever they were with their knowledge of IS and encouraged implementation of IS in the way it was intended: to learn by doing.

Throughout the term, students and staff collaborated on IS projects, sharing findings and challenging the assumptions and biases that emerged. Together, we continued to utilize PDSA cycles with research-based improvement ideas to see what would improve our systems as they currently existed. We built on each other's learning and were able to lead improvements in several school district systems. When our cohort graduated, we were well equipped to continue learning and to strategically improve systems that had not yet produced improvement. Based on the principal license pilot with IS, the Continuing Administrative Licensure Program now has an elective course on IS, improving learning in numerous school districts and for thousands of students in Oregon.

Maria Eugenia Olivar's Story

I, too, learned IS in my principal preparation program, where the professors were piloting the curriculum. When they first introduced IS to me, I was filled with excitement, knowing that I had joined administration and leadership at the right time. I would be learning strategies oriented toward social justice, data-based thinking, and a focus on practitioners' experiences. This lens matched my philosophy and vision for improvement. Instead of the traditionally oppressive, subtraction/substitution-based approach to change leadership, we were using social justice principles and processes.

We designed a personal trial first, using IS to develop an attainable goal with at least four drivers. I decided on a goal and shared it with my husband, and we each collected separate data for comparison on approach and outcomes. We used the same drivers to guide our PDSAs and collected data. The results were unique to each of us and were very different. They revealed the areas where one of us needed to adjust their practice. My partner followed the PDSA as designed and reached his goals. I did not come even close to realizing my personal goal. I learned: Context matters.

This experience illustrated how we as practitioners often, unintentionally, derail efforts when one small portion of a strategy or system fails, without noticing the depth of the impact on the results we later obtain. Our misperceptions are at the very root of the disillusionment we often feel after investing time, effort, and hard work with disheartening results. This revelation about systems and structures convinced me to adopt IS as the improvement strategy in my shared leadership vision, working collectively to improve not only our individual practice but our systems at the school, district, and state levels. This experience strengthened my conviction that radical changes fragment relational trust and negatively impact the ability of leaders to lead. Instead of implementing fast and learning slowly, we learned to implement well by starting small (Bryk et al., 2015). Investing in improving our practices in schools through the use of IS tools and processes is consistent, sustainable, and realistic. IS is tied to students' needs, and the data from PDSA cycles inform our practices as we learn how to improve.

Using professional learning communities (PLCs), I learned how to incorporate design-based thinking in my grade-level teams, analyzing student data and adjusting our pedagogy in PDSA cycles to improve student engagement, student oral participation in the partner language (Spanish), consistency in implementation of classroom rituals and routines, family engagement, and program redesign. The power of those closest to the change engaging in the PDSA cycles was evident in many grade-level meetings, but one particular example stands out. In a second PDSA cycle with a science teacher, we heard about his "aha" moment when the data revealed, for the second time, that it was inconsistencies with the classroom routines that threw students off. The inconsistencies created disengagement, resulting in behavior issues that kept students from learning. Something as seemingly small and inconsequential as discussing the data sufficed for our colleague to move from being okay with participating in the PDSA cycles, to leading our PLCs and PDSA cycles, to requesting additional collaboration time with our building coach, and ultimately to starting a book study on IS with his grade-level peers. The idea of improvement allows for vulnerability and safety, letting each member of the team contribute to creating the right conditions to reach a common goal. IS has been a powerful and effective strategy that has improved academic outcomes for all students in our K–12 Dual Language Program, and particularly for our students of color.

Lessons Learned

Now that our program redesign work has been completed, we reflect on several lessons about budgets, timelines, processes, communication, and influencing our colleagues. Working within a large, complex public university has its strengths and its challenges. One of PSU's strengths is its commitment to shared governance. Shared governance has many benefits; in particular, the wisdom of many content-area experts contributes to stronger university outcomes. However, one drawback is the time it takes to discuss, reach consensus on, and receive approval for changes prior to implementing them. When our state determined that 3 years would be sufficient time for universities to change their principal licensure programs, they either knew of the glacial speed of change in universities (Cassuto, 2016) or they did not care about the time constraints.

We designed a process that included consensus decision-making within our program and a timeline in which our program and syllabi revisions would be submitted for discussion and revision within 1 year of the new rules being approved. We expected that we would need the second year to obtain university approval, which in our shared governance process includes approval from the chair, the department faculty, the faculty representatives on our college's program and policy committee, the entire faculty of the college, the dean, the university's graduate council, the faculty senate, and the president's office. Four months after presenting our materials at the college level, our materials are now under consideration by the full faculty. The third year of our redesign process will be used to submit our materials to the state for approval to continue as a principal preparation program in our state.

Next, we learned that strong communication is critical during a change process. Internally, we communicated weekly via email and twice monthly in meetings in which we shared our progress and thinking on our assigned work. These meetings were documented in meeting notes and emails to the Dean's Office, the chair, and the chairs of the shared governance committees. In this way, we had the support and guidance of those we respected and whose guidance would be of assistance. Where we fell short, however, was in communicating informally with those within the system who had more traditional concepts of school leadership and the culture of academia. We never reached out to those who teach traditional research courses to share with them the new state standards, which did not include research. We should have. We were not able to get meetings with the leadership of approving committees. We should have tried harder. Although we made several attempts to schedule a meeting, we should have found other avenues to secure one, such as a personal phone call, dropping by the leader's office to request a quick meeting, or asking our chair to help facilitate a meeting. These small steps would have helped us proactively address concerns colleagues raised later. Discussing issues informally on an ongoing basis could have given us the chance to influence our colleagues' understanding of the state's focus and also to improve our equity focus, our equity terms, how we use data and research, and even smaller issues such as why we included the names of syllabus authors on official documents. These informal discussions could have given our colleagues the opportunity to respond professionally to these and other issues, and our team would have been able to refine our work while we were in the process of the work rather than at its conclusion.

A final lesson learned is about humility and continuous improvement. While our team comprises state and nationally known experts on culturally responsive school leadership, our program and courses reflect our best thinking only as of today. We must continue to critically examine our program and syllabi. We need to collect data on whether our changes have resulted in improved outcomes for school leaders and ask questions: Can interns effectively lead complex improvement processes in their schools? Are they leading efforts to reduce or eliminate racial disparities? Are they known as culturally responsive leaders whose schools enhance the dignity of every child, family member, and staff member in their care? The improvement process must continue—even after we have "official" approval.

References

Adams, M., Bell, L., & Griffin, P. (2016). Teaching for diversity and social justice. Routledge.
Allen, M. J. (2004). Assessing academic programs in higher education. Anker.
Bryk, A. S., Gomez, L. M., Grunow, A., & LeMahieu, P. G. (2015). Learning to improve: How America's schools can get better at getting better. Harvard Education Press.
Carlile, S. P., & Peterson, D. S. (2019). Improvement science in equity-based administrative practicum redesign. In R. Crow, B. N. Hinnant-Crawford, & D. T. Spaulding (Eds.), The educational leader's guide to improvement science: Data, design and cases for reflection (pp. 197–216). Myers Education Press.
Carlile, S., & Peterson, D. S. (2020). Embedding improvement science in one course: Principal leadership for equity and inclusion. In R. Crow & D. Spaulding (Eds.), Teaching improvement science in educational leadership: A pedagogical guide. Myers Education Press.
Cassuto, L. (2016). What will doctoral education look like in 2025? The Chronicle of Higher Education. http://chronicle.com/article/What-WillDoctoral-Education/234666
Dewey, J. (1990). The school and society and the child and the curriculum. University of Chicago Press. (Original work published 1956)
Dottin, R. L., & Voegele, J. D. (2020, April 9). Creating alignment in your program assessment work: The academic innovation mini-fund program assessment planning and reporting program at Portland State University [PowerPoint presentation].
Freire, P. (1993). Pedagogy of the oppressed. Continuum International Publishing.
Fullan, M. (2011). Motion leadership: The skinny on being change savvy. Corwin Press.
Fullan, M. (2013). Stratosphere: Integrating technology, pedagogy, and change knowledge. Pearson Canada.
Gay, G. (2010). Culturally responsive teaching: Theory, research and practice (2nd ed.). Teachers College Press.
Haas, E. M., & Brown, J. E. (2019). Supporting English learners in the classroom: Best practices for distinguishing language acquisition from learning disabilities. Teachers College Press.
Holdheide, L. R., Goe, L., Croft, A., & Reschly, R. (2010). Challenges in evaluating special education teachers and English language learner specialists. https://files.eric.ed.gov/fulltext/ED520726.pdf
Matthews, R. S., & Newman, S. (2017). Chief academic officers and gateway courses: Keys to institutional retention and persistence agendas. New Directions for Higher Education, 2017(180), 63–73.
Oregon Department of Education. (2020). Education Investment Board: Oregon Equity Lens. https://www.oregon.gov/ode/students-and-family/equity/equityinitiatives/Pages/default.aspx
Oregon School Administrator and English Learner License Standards, Rule 584-235-0010 (filed 2/25/2019, effective 2/25/2019). https://secure.sos.state.or.us/oard/viewSingleRule.action?ruleVrsnRsn=255682
Peterson, D. S. (2017). Preparing scholarly practitioners: Redesigning the EdD to reflect CPED principles. Impacting Education: Journal on Transforming Professional Practice, 2017(2), 33–40.
Peterson, D. S., & Carlile, S. (2019). Preparing school leaders to effectively lead school improvement efforts: Improvement science. In R. Crow, B. N. Hinnant-Crawford, & D. T. Spaulding (Eds.), The educational leader's guide to improvement science: Data, design and cases for reflection (pp. 167–182). Myers Education Press.
Scope and responsibilities of school administrators, Rule 584-235-0010 (filed 2/25/2019, effective 2/25/2019). https://oregon.public.law/rules/oar_584-235-0010
Yamada, H., & Bryk, A. S. (2016). Assessing the first two years' effectiveness of Statway®: A multilevel model with propensity score matching. Community College Review, 44, 179–204.

chapter seven

The Essential Role of Context in Learning to Launch an Improvement Network

EMMA PARKERSON

Carnegie Foundation for the Advancement of Teaching

KELLY MCMAHON

Carnegie Foundation for the Advancement of Teaching

BARBARA SHREVE

Carnegie Foundation for the Advancement of Teaching

Abstract

Since 2010 the Carnegie Foundation for the Advancement of Teaching has been bringing together researchers, K–12 teachers, school and district leaders, and university faculty members to work on accelerating improvement in education. Initial efforts were focused on developing a coherent set of principles and methods for using improvement science in education in order to help educators transform their systems. After launching two successful networks that used improvement science to make headway on longstanding inequities in education, Carnegie shifted its focus to making emerging know-how about networked improvement science accessible to the field. Toward this aim, Carnegie began offering intensive support to teams of educators who were interested in learning to launch networked improvement communities (NICs). In this chapter, we draw on lessons that we have learned as instructors of programs that guided groups of educators in developing the necessary components to form new NICs.

Our account highlights a point that many educators learn early in their careers: Instructors have to be ready to learn with their students and to adapt based on the unique conditions presented by each group of learners. As instructors practice, they develop a nuanced understanding of essential contexts that influence their adaptations, such as learners' prior knowledge, their identities, and the extant cultural conditions in which learners work. In this chapter, we focus on the role of essential contexts as groups form NICs. More specifically, we focus on how instructors attend to essential contexts in the instructional choices they make.

Background

Networked improvement communities, or NICs, are a distinct form of collaboration, bringing the social learning that networks afford together with the rigor and discipline of improvement science. NICs are scientific-professional learning communities distinguished by four characteristics: (a) they are focused on a well-specified common aim; (b) they are guided by a deep understanding of the problem, the system that produces it, and a shared theory of improvement; (c) they are disciplined by the rigor of improvement science; and (d) they are coordinated to accelerate the refinement of interventions along with their more rapid diffusion out into the field (Bryk et al., 2015). These attributes differentiate NICs from other common forms of collaboration in education in that NICs are communities of accomplishment. When individuals come together to form a NIC, they seek measurable improvement on a wicked problem they experience in their context, or across multiple contexts (Gomez et al., 2016). As well, thanks to the rigor of improvement science, NICs produce specified changes that lead to improvement, along with knowledge about when and how to adapt those changes to varied contexts. As a result of individuals working in NICs, the systems in which they are actors develop organizational capacity and improvement capability to apply to future improvement work.

Typically, a small initiation team stewards the formation of a NIC, and that leadership team eventually forms a centralized network hub. The work of the hub is to connect network members and foster community, to support learning in local sites, and to engineer social learning, keeping the network learning in the same direction. The hub's effort to coordinate action and learning across multiple improvement teams makes the work of NICs distinct from that of a single improvement team.

While NICs represent one specific approach to problem-solving in education, elements of this approach are often taken up by educators doing collaborative improvement work. For example, while all improvers seek to understand what works for whom and under what conditions, the imperative for doing so is heightened in a NIC, given that, by design, NICs bridge multiple contexts so that the network learns from and attends to the varying conditions that lead to variation in performance. We believe that the lessons learned from instructing educators in the formation of NICs are translatable to many contexts where improvement science is enacted in education, whether within a single school, across schools in a district, or in a university–district partnership.

Guiding the Emergence of Networked Improvement Communities

With proof of concept and much learning from its early efforts, the Carnegie Foundation has shifted focus from operating networks to designing programs to help others who want to do so. These offerings teach learners how to launch a NIC through an eclectic set of methods drawn from continuous improvement/improvement science, design thinking, and systems thinking, among other sources. Carnegie's network initiation programs are characterized by experiential learning (Kolb, 1984), where learners have practical, hands-on experience with guidance from Carnegie instructors. Instructors design activities to support learners with appropriate scaffolds, guidance, and feedback throughout the multiple phases of NIC initiation so that they are equipped to serve as hubs that lead others in testing, social learning, and improvement. As a result of their participation, learners do not merely learn about launching but actually launch their networks while building skills they will need as network leaders. We call this a doing to learn approach that privileges doing improvement work for the purpose of learning transferable skills that can be applied to work beyond the immediate NIC effort.

Carnegie's approach to NIC initiation is anchored in theory about how these networks develop over time, articulated by Jennifer Russell and colleagues (2017) in "A Framework for the Initiation of Networked Improvement Communities." In alignment with this framework, Carnegie instructors support groups to move through activities and experiences that build a foundation for their shared improvement efforts (the NIC core technology) and to form as NICs (the NIC's social organization). Learning designs around the development of core technology are well proven in other industries (e.g., the Institute for Healthcare Improvement's Breakthrough Series Collaborative and Cincinnati Children's Hospital's Intermediate Improvement Science Series—I2S2). The Carnegie Foundation has built from these tried and tested approaches, deploying them in a networked context in education. This approach has most recently been codified in a program called the NIC Design Learning Lab. The Learning Lab is a 7-month program for teams who seek to initiate a NIC. The program includes four instructional workshops punctuated by "action periods" during which participants use the knowledge and skills gained to further the establishment of a NIC (see Figure 7.1). In the spirit of doing to learn, participating teams are expected to complete work in their own contexts before returning to the next workshop.

Problem Identified

Commission Network

Pre-Work

Workshop 1

ACTION PERIOD

Workshop 2

ACTION PERIOD

Workshop 3

ACTION PERIOD

Workshop 4

Figure 7.1. Learning lab program structure. Problem Identified

As participating groups engage in workshop activities and continue that work in their home context between sessions, they build knowledge of improvement principles, methods, and mindsets situated in the social organization of their network. The Learning Lab program is shaped by a syllabus that describes how participants are introduced to foundational concepts of improvement science and the power of networks to accelerate learning. The syllabus also outlines a sequence of experiences that are important steps in a network's formation (see Figure 7.2).

Month 1

Month 3

Month 5

Month 7

Commission the Network

Understand the Problem

Focus Collective Effort

Develop a Theory of Change

Prepare for Launch of Testing

0

1

2

3

4

Core Principles of Improvement

Process Mapping

NIC Basics

Cause & Effect Analysis

Problem Identification & Scoping

Data Investigation

User Experience Investigation

Consolidating Learning from Investigations Crafting an Improvement Aim Developing a Theory of Improvement

Revising Aim & Theory Consulting Expert Knowledge Developing a Measurement System Managing Network Knowledge

Inquiry Methods to Test Changes Articulating a Learning Strategy Planning and Managing Learning Routines

Figure 7.2. Learning Lab design.

As described in Figure 7.2, teams move through four key stages in their development toward launching a NIC: analyzing the problem to be solved, building the theory, operationalizing the theory, and preparing for launch. Each of these phases is marked by technical milestones that an initiation team reaches before proceeding to the next phase (e.g., consolidating and codifying learning from an analysis of the causal system). Leading up to each of these milestones, there is individual learning around both technical improvement skills (e.g., how to create a process map) and social learning (e.g., how to engage diverse perspectives in creating said process map). Individual learners also develop improvement dispositions as they are introduced to systems thinking, have opportunities to view a problem from different vantage points, and learn ways to lead others in improvement. In this way, any team's ability to achieve the milestones, such as setting a specific goal for the network to achieve, will depend on the individuals and the team acquiring the requisite knowledge.

In addition to the technical milestones that an emerging network achieves, readiness for launch is also signaled by a shared systems orientation to problem-solving and a group's habits of drawing on evidence to guide its next actions. Throughout the Learning Lab, teams are supported to develop social connections through the establishment of community agreements, which are then strengthened throughout the program as individuals and teams reflect on those agreements and how they manifest through the learning process. These practices and dispositions are indicators that a group is beginning to function as a scientific professional learning community.

Instruction in the Learning Lab—and in any NIC initiation effort—is both complicated and complex due to the need to attend to both individual and group development. Instructors design and monitor learning conditions to foster improvement capabilities among individuals and as a collective. While there is a familiar learning journey that each newly forming network travels, described in Figure 7.1, the instructional strategies that instructors use to foster the development of improvement mindsets and skills within newly forming networks vary in order to accommodate the essential contexts that groups bring to their work.

Instructional Cases and Methods of Analysis

Over the course of 2 years, we facilitated four separate teams that were seeking to launch networks focused on different educational problems. These problems ranged from student literacy proficiency rates to outcomes of clinical student teaching. While not all of these teams participated in the formal NIC Design Learning Lab program, the instruction for each team was guided by the Learning Lab syllabus. In all instances, NIC initiation took place through four multiday workshops over several months. Between each of these workshop sessions there was an action period during which teams were expected to engage in inquiry and activity in their contexts.

For this chapter, we drew on our collective experiences as designers and instructors in these workshops. As well, we conducted a systematic review of facilitator agendas, instructional materials like slides and handouts, and after-action reviews for the first two workshops in each series, which focused on the analysis of the network's problem and the system that produces it. Through this review, we sought to identify patterns in instructors' decisions to tailor instruction or to design alternative activity structures not drawn from the planned curriculum. In the sections that follow, we explain the essential contexts we attended to as instructors and offer illustrative examples of how we responded to those contexts by adapting instructional activities. We conclude by discussing the implications of these adaptations and offer suggestions for instructors of improvement science.

Four Essential Contexts Related to Initiating NICs

Teaching improvement science to a group is quite complex. A script or set curriculum would surely simplify instructors' work; however, our task as instructors is to make Bryk and colleagues' (2015) learning imperative understood. Teaching to accomplish this is not simply a matter of telling or showing; rather, there will be fits and starts as participants learn through efforts to apply improvement science in their local classrooms, schools, or districts. Over the course of this learning, participants will need to see and respond to problems differently, to apply new tools, to relate differently to their own identities, and to relate to one another in new ways. Our reflective analysis highlights instructors' intentional variation from the standard protocol as we designed learning conditions to these ends. While there may be others, we identified four essential contexts that are central to learning to launch networks: participants' relationships to the problem they seek to solve, participants' extant improvement capabilities, participants' professional identities, and group dynamics.

Participants' Proximity to the Problem They Seek to Solve

As we have described, NICs may take many forms and are characterized by diverse membership that fosters a wisdom of crowds. As such, it is possible that learners in any given network initiation program have varied relationships to the problem the network seeks to solve. These relationships influence learners' abilities to acquire a new stance toward their problems, as is likely to happen when using improvement science to attend to a problem. For instance, if participants are too proximal to their problem, they may find it challenging to approach the problem as a learner. Engaging with the problem with curiosity is an essential stance for developing a holistic understanding of the problem and the system that produces it. According to Peter Senge (1990), when we work in systems, as educators do, each of us tends to focus on the part of the system where we work; that one part is very compelling. As a result, it is often true that when individuals work to improve their system, they believe they have a complete understanding of a problem when, in fact, their understanding is incomplete because they have not considered the entire system. This is why complex system problems require a disciplined, rigorous investigation to uncover root causes.

Proximity to a problem is also an important dynamic when learners have been working on the same problem for some time. In this case, they may hold tight to beliefs about what it will take to solve the problem that are grounded in meaningful prior experience. Yet often these experiences are anecdotal. Using improvement science requires improvers to engage in disciplined inquiry and draw conclusions based on evidence. This requires learners to remain open to the principled practice of investigating the problem, which can be challenging when learners bring with them significant expertise in the problem area.

By contrast, if participants are too distal from the problem, they may underestimate the challenge of gaining a holistic and nuanced understanding of the system. In so doing, these learners are often challenged to engage deeply with improvement science. Being a novice is hard. If learners are not fully committed to persisting through the discomfort of learning new ways to attend to problem-solving, then instructors will have to design activities that promote more practice with understanding this new approach so that learners discover the value of improvement science through "early wins" of learning about their problem.

Participants' Extant Improvement Capability

Another essential context that instructors must attend to is the extant improvement capabilities that participants possess. In order to help elucidate the multiple perspectives of stakeholders who work in systems, improvers must be able to gather and interpret multiple forms of data, including conventional outcome data as well as qualitative sources like empathy interviews and journey maps. Improvers need to be able to interpret these different types of data and make inferences with an inquiry mindset. As well, improvers must be able to interpret and apply evidence-based research to their local contexts.

But working with data and evidence in order to learn has not always been promoted by policies and other practices in education. The implementation of accountability policies has created, in some cases, cultures that situate data activities as tasks of enforcement or compliance. As a result, for some educators, these cultural ties between data and practice may need to be re-envisioned so that educators are able to embrace data and evidence as resources for advancing learning to get better. Depending on participants' previous training and experiences, they may require opportunities to learn or relearn the scientific methodologies that are part of improvement science. There is often variation among individuals within initiation teams in this regard. This presents a social learning challenge for instructors in supporting teams to leverage extant expertise to move forward, while also building the expertise of individuals in the team.

Participants' Professional Identities

Participants bring professional identities with them to workshops that bear on their ability to work with and learn to collaborate with one another. We know that in education, professionals possess specialized knowledge associated with their roles (Cohen, 2011; Darling-Hammond & Bransford, 2005). For example, English teachers have expertise in how to write, whereas a faculty researcher may be an expert in qualitative methods. The extent to which participants have worked together previously and understand one another's professional identities may influence their expectations for how they relate to one another. Even teachers working within the same school may not have had previous opportunities to collaborate. The ways in which individuals relate to one another, as well as the organizational contexts where they work, influence how they think about changing practices, systems, and collective efforts to improve. Becoming a NIC, for most educators and researchers, demands a transformation of their professional identities and how they relate to one another and their organizational roles. One of the features of networked improvement science is that all members of a network are seen as improvers working together in a community of common accomplishment, but behaving in this way is likely to require a transformation of the identities of individuals, as well as of the group.

Group Dynamics

The nature of interactions among individuals who are charged with accomplishing a task together can influence the degree of challenge the group will face. For example, in a team sport like basketball, where five players need to score more points than their opponents to win, the degree to which the players are willing to develop their individual talents to contribute to the team's chances of winning will influence their chances of succeeding. Furthermore, the degree to which team members cooperate with teammates in applying those talents will have an even greater influence on their chances of success. Should individual players on a team compete with one another, the team must solve internal disputes as well as beat their opponents in order to win.

Becoming a network requires individuals to learn how to cooperate with one another in order to become an adaptive group. Cooperative, trusting, safe team dynamics are needed to learn from failure, which is a core activity in improvement. Team dynamics must therefore be accounted for as an important context within which individuals learn improvement science and how to become a network. Instructors must work with teams to develop dynamics supportive of improvement and to practice them with guidance and feedback along the way.

Attending to Essential Contexts While Guiding NIC Initiation

Instructors supporting teams through the initiation of a NIC must consider how the essential contexts of a team shape how activities are taken up. Considering essential contexts leads to myriad decisions when enacting the syllabus activities that are part of network chartering and initiation. With the syllabus in mind, an instructor must regularly consider two questions: Who is in the room? and What relationships and understandings am I trying to foster? With this understanding, which often emerges only through interaction with the team, instructors must adjust their instructional plan to attend to the twin goals of building individual learners' understanding of improvement science methods at work in a healthy NIC and building the group's ability to apply that knowledge productively toward a shared goal.

In this section, we offer three instances from our practice that demonstrate the ways in which the essential contexts influenced our instruction, as well as the ways in which the essential contexts interact with one another in practice. Each of these examples is drawn from a real NIC that has successfully completed the network initiation program and is now testing changes to system structures, processes, and norms to advance its goal. Examples of instruction are all situated in the early stages of the NIC initiation process, focused primarily on causal system analysis, or understanding the problem and the system that produces it.

Example 1: Relying on Extant Research Skills to Advance Understanding of the Problem

Our first example is of a NIC located in higher education. In this case, Carnegie instructors supported a group of faculty from across different universities to initiate a NIC to improve student outcomes in a particular type of program. The convening organization for this NIC was a third-party intermediary. Instructors sought to foster an understanding of multiple analytical tools that, when put to use in local systems, would reveal the problem that an initiating team was seeking to solve and its root causes.

When instructors considered who was in the room, they quickly realized that participants were extremely proximal to the problem, bearing responsibility for improving it and having been actively working on it for some time. As well, learners were well read in the related research and versed in traditional data analysis and research practices. Recognizing those strengths in the group, instructors opted to spend the majority of instructional time on hands-on activities with more novel improvement tools that could expand participants' views of their own system. For example, instructors guided participants to develop maps of core processes within their programs, making tacitly held processes explicit so they might be improved. While instructors still explicitly identified research literature as one of four important sources of knowledge that improvers draw upon in understanding their problem, they did not guide the network through an analysis of the literature. Instead, they explicitly encouraged participants to draw on their existing knowledge of the research literature as they consolidated learning about the problem from newer tools such as process maps (described above) and empathy research techniques that helped faculty understand students' perspectives on and experiences with the programs they sought to improve.

Example 2: Centering on Group Dynamics Before Improvement Methods

An additional example of attending to essential contexts emerged from supporting a NIC bringing together a group of schools in a public school district focused on improving student outcomes within a particular content area. Although the schools exist within the same organization, this group of educators from schools and the district central office had not previously worked together as a collective on this kind of endeavor. While there were some existing relationships, there was little existing organizational structure to build on to support cross-role teams to collaborate between school sites and with the district team. Initiation team leaders and instructors recognized that building these collaborative conditions would be necessary for the group to develop as a NIC.

As discussed earlier, a NIC leverages the power of the collective to accelerate learning. This kind of collective learning requires the establishment of trust and habits for sharing among network members that value learning from failure as well as success. These are the conditions that instructors sought to foster within this emerging network during an activity to consolidate learning from distributed investigation activities. When instructors centered on the relationships and understandings they sought to foster, they made a conscious decision to have each site team showcase a key learning from its inquiry. Instructors took an expansive view of learning from inquiry in pursuit of fostering safety among those in the room, and participants shared new understandings of the system in which the problem was situated that were grounded in evidence, insights into the use of a particular inquiry tool, and learnings about the process of documenting learning. Instructors' decision to focus on the act of learning with an appreciative tone at this moment in time helped to foster community and begin to grow relationships where people could share the success of how they built understanding of their context, even when the evidence they were drawing on revealed challenging outcomes. Building on this experience of having their own learning valued by network members from varied roles within the district while also seeing the value in the experiences of others, teams were then able to turn with greater confidence and a sense of safety to surfacing strengths and gaps that their separate investigations revealed related to their problem of focus.

Example 3: Creating Opportunities for Deliberate Practice by NIC Initiation Leaders

A final example also emerged from a NIC formed within a single public school district seeking to improve outcomes for a particular focal population of students. In this instance, instructors primarily worked with a small initiation team, most of whom shared a similar vantage point on the system from the central office. While more distal from the day-to-day work of student learning in classrooms, this group intended to engage professionals in other parts of the system who were more proximate to students and their work through the causal system analysis process. Instructors quickly realized that there was a need to foster trust between the initiation team and those whom the team was hoping to engage; they worked in different parts of their system, and the groups held different kinds of authority in that system. In considering who made up the initiation team that would be working with Carnegie, the instructors decided to guide the initiation team in testing each improvement tool themselves before deploying it with others in their system.

For example, during one workshop, the initiation team brought data from a root cause analysis process that surfaced many potential causes of the problem the network was hoping to solve. The initiation team thought that their next step would be to support network members at the school level to discern which of the hypothesized causes was supported by evidence. To prepare the initiation team for this work, instructors suggested an activity that the initiation team might use to guide network members in this process. Rather than coplanning with the initiation team for how to facilitate the activity, the instructors urged the initiation team to engage in the activity themselves first. This decision helped the initiation team understand more fully the complexity of the task they would be facilitating with others. As a result of their own direct practice, the team was able to move forward with additional context for how they might engage others with the tools. Doing the work together before taking it out to others also fostered empathy for those who would be working with the tools for the first time, which supported initiation team members in the development of a supportive, appreciative stance when facilitating these activities with network members. This, in turn, garnered more trust between network members and the initiation team, trust which would likely not have emerged organically given the authority dynamics. With increased trust among members, the group was well positioned to make progress with their root cause analyses.

In each of these examples, as instructors we made decisions about how to proceed with learning after giving careful thought to who the learners were and what learning we were trying to foster. These instructional shifts did not drastically change the content with which the individuals engaged, the stated objectives of the sessions within the syllabus, or the milestones that the network had to reach to be ready to launch. By recognizing the role of essential contexts that bear on participants' readiness to take up a networked improvement approach, instructors adapted their plans for activities and changed the learning conditions so that they were better suited to participants' specific circumstances.

Discussion

The fundamentals of improvement science are well known and draw on decades of application and research in other industries. There are texts available to advance individuals' learning of improvement skills (e.g., Langley et al., 1996), and these understandings provide a strong foundation for the practice of improvement science in networked communities. Yet the development of the social conditions for doing improvement work in a network is less well articulated (Khachatryan & Parkerson, 2020). Through our practice, we have found that teaching improvement science tools while attending to the formation of community and social learning in groups is complex. This can be seen clearly in each of the examples described in this chapter. This kind of instruction requires the creation of conditions for both individual and group mastery of improvement content and dispositions.

Our analysis of facilitator materials affirms our belief that a syllabus is necessary when facilitating the initiation of NICs. The development of NICs occurs across a particular, standard arc, from understanding the problem and the system that produces it, to focusing collective efforts within the network, to generating ideas for change (see Figure 7.2). This is content readily codified in a standardized syllabus. As well, we have evidence that NICs are best supported in the context of doing to learn through application in local contexts, social learning, and collective sensemaking. These principles are also readily adapted into a standardized syllabus, including constructivist session designs.

Our analysis also affirms a second hypothesis: that a syllabus alone is insufficient. An instructor will not experience success in guiding networked improvement by using a standardized approach, because the contexts in which networked improvement takes place are highly varied. Leading instructional experiences such as the Learning Lab requires instructors to actually practice improvement science within their teaching. Instructors must embrace the six core principles of improvement in relation to their own practice as they plan, enact, reflect on, and modify their instruction. In particular, the second core principle is central to such facilitation: "The critical issue is not what works, but rather what works, for whom and under what set of conditions" (Carnegie Foundation, n.d.).

Recommendations for Instructors

Attending to variation in the context of NIC initiation calls for the practice of adaptive integration. As Bryk and colleagues (2015) discuss in Learning to Improve, improvement science is about making standardized tools or interventions—such as a syllabus—work under a variety of conditions. Adaptive integration is a practice that calls for improvers to make "adaptations to the intervention itself" while also "addressing some site-specific problems, necessary to solve, for the intervention to be integrated well" (p. 80). This brings us to recommendations for facilitators supporting groups or teams who are learning improvement methods by engaging in collaborative improvement activity.

Know Your Learners

Put concerted effort into understanding both the capabilities of individual learners and the capacity of the learners as a group to act on their learning. As we have engaged with more groups in the activities of initiation, we have begun to formalize this as an individual preparation activity prior to the first workshop. In more recent instructional experiences, we have asked participants to individually complete an improvement self-assessment that gives facilitators a window into each learner's current improvement knowledge and relationship to the problem and system. We now know that it is essential for participants to come together early, with guidance from the instructor and prior to the first workshop, to articulate the problem, the team, executive sponsorship, and organizational expectations for the NIC. These activities provide the instructor with a deeper understanding of learners' prior knowledge and their ability to act on their learning.

Clearly Define Benchmarks of Success

Drawing on your understanding of the group and the individuals within it, ensure that you have a clear, nuanced understanding of the benchmarks or milestones that are indicators of success for the group you are teaching. Learning improvement is iterative and complex; you will likely have to abandon plans that you had, and this is made easier by having clarity about your outcomes for learners in the short and long term. Facilitators may find it beneficial to prioritize learner outcomes, indicating which outcomes are must-haves and which are nice-to-haves for a network to make progress toward its next milestone.

Understand Interconnections

Ensure that you understand the interconnections among activities in your facilitation plan. Instructors inevitably have to adapt instruction in the moment, and it is important at those times to know how modifications will affect what is planned next and where you want learners to end up. Facilitators may find it beneficial to spend time clarifying interconnections among sessions (Is there prior knowledge that must be put to use in this session? Are there knowledge or skills to be re-examined in subsequent sessions?).

Attend to Both Technical and Social Objectives in Your Design

Review every agenda through two lenses. First, focus on the technical instruction around improvement methods. Then, review with an explicit eye toward group dynamics, social learning, and the psychology of change. In our practice, we have found it useful to engage different facilitators, each with expertise in one of these areas, to do separate reviews and then come together to discuss.

Practice Improvement Science in Your Teaching

Make predictions about what will happen during instruction and the degree to which objectives will be met. Reflect individually on your own practice, which may include reviewing and analyzing videos of your instruction and coaching. Instructors should also engage in collective reflection by using tools such as an after-action review. These opportunities to reflect on teaching, adaptations to teaching, participant learning, and the instructional design process create space for continual improvement.

The analysis we engaged in to write this chapter revealed that we held tacit social objectives, ones we know emerging NICs need to learn, that to date have not been made explicit in our standard syllabus. We discovered through conversation about facilitation artifacts that, as experienced instructors, we each held expectations for the development of relationships and interpersonal dynamics within the teams that we support. As we prepared for each workshop, we would add these objectives as we developed customized, NIC-specific facilitator materials. One of our key learnings is that social objectives, as well as technical ones, warrant inclusion and deeper attention within the syllabus, along with recommendations for how they might be adapted across essential contexts.

Conclusion

The essential contexts, examples, and recommendations in this chapter represent learnings from our practical experiences facilitating the initiation of NICs. As networks continue to emerge, especially in varied contexts and sizes, the field has an opportunity to develop more robust knowledge about best practices in networked improvement. Collectively, we have much to learn about how to advance both individual and group practice of improvement. As introduced in the discussion, impactful facilitation requires that facilitators practice improvement in their own work. To allow for collective improvement in teaching networked improvement to groups, we need to understand with greater nuance how to attend to essential contexts. For example, does one essential context more often lead to changes in particular instructional exercises or around particular content? By studying these moments, we will strengthen our ability to be explicit about what we are doing as facilitators, and why. Such study may be accomplished through three complementary means: a community of practice for instructors guiding networked improvement, developmental evaluation and analytic partnership for the initiation of NICs, and research on the methods of NIC initiation and their relative impact.

Community of Practice

Instructors of networked improvement across organizations should join together to begin building a more robust knowledge base about the instruction and coaching of NICs. For example, a community of facilitators could form to reflect upon and study their instructional adaptations across various essential contexts, together building capacity in the field around networked improvement and problem-solving.

Developmental Evaluation and Analytic Partnership

Instructors need not act alone in the work of initiating NICs. There is growing momentum around developmental evaluation that supports NICs to continually improve, such as Carnegie's Evidence for Improvement initiative. Analytic partners in improvement work can and should play a significant role in documenting instructional approaches and reflecting with improvers about what worked for whom under what conditions.

Research

The field would benefit from more research that draws connections between the success of NICs in their improvement work and the methods deployed in the initiation process. We recommend here an expansive view of success that includes both technical and social objectives (Has a network achieved its aim? Has a social infrastructure developed that persists over time?).

Together, these recommendations advance a networked approach to guiding networked improvement. Instructors of networked improvement science writ large must embrace the improvement ethos of "possibly wrong, definitely incomplete." In so doing, and through continuing to make public our instructional practice around networked improvement, we expand the collective opportunity to get better at getting better. As the field around these methods continues to develop, we have an imperative to join in community and learn more, together, faster, in pursuit of making headway on longstanding inequities in education.


References

Bryk, A. S., Gomez, L. M., Grunow, A., & LeMahieu, P. G. (2015). Learning to improve: How America's schools can get better at getting better. Harvard Education Press.

Carnegie Foundation for the Advancement of Teaching. (n.d.). The six core principles of improvement. https://www.carnegiefoundation.org/our-ideas/six-core-principles-improvement/

Cohen, D. K. (2011). Teaching and its predicaments. Harvard University Press.

Darling-Hammond, L., & Bransford, J. (2005). Preparing teachers for a changing world: What teachers should learn and be able to do. John Wiley & Sons.

Gomez, L. M., Russell, J. L., Bryk, A. S., Mejia, E. M., & LeMahieu, P. G. (2016). The right network for the right problem. Phi Delta Kappan, 98(3), 8–15.

Khachatryan, E., & Parkerson, E. (2020). Moving teachers to the center of school improvement. Phi Delta Kappan, 101(6), 29–34.

Kolb, D. A. (1984). Experiential learning: Experience as the source of learning and development (Vol. 1). Prentice-Hall.

Langley, G. J., Moen, R. D., Nolan, K., Nolan, T., Norman, C., & Provost, L. (1996). The improvement guide: A practical approach to enhancing organizational performance. Jossey-Bass.

Russell, J. L., Bryk, A. S., Dolle, J., Gomez, L. M., LeMahieu, P., & Grunow, A. (2017). A framework for initiation of networked improvement communities. Teachers College Record, 119(5), 1–36.

Senge, P. M. (1990). The fifth discipline: The art and practice of the learning organization. Currency.

chapter eight

From Learning to Leading: Teaching Leaders to Apply Improvement Science Through a School–University Partnership

SEGUN C. EUBANKS

University of Maryland College Park

MARGARET MCLAUGHLIN

University of Maryland College Park

JEAN L. SNELL

University of Maryland College Park

CHAROSCAR COLEMAN

Prince George’s County Public Schools

Abstract

The University of Maryland (UMD) College of Education began offering an EdD in supervision and administration in the early 1950s, and the program curriculum remained relatively consistent over the next 5 or 6 decades. However, the program as it exists today has changed substantially over the last 8 years, from what was a traditional EdD structured like an abbreviated PhD to a fully recognized professional practice doctorate that is clearly distinguished from the PhD offered by the college. Today, our 60-credit, 36-month doctoral program annually enrolls a new cohort of 15–20 school leaders from surrounding local school systems in Maryland who embark on a program of study that embeds the principles of improvement science within the core leadership skills defined by the National Educational Leadership Preparation (NELP) standards. In this chapter, we will explore the impetus for the evolution of our practitioner doctorate program, and the influence and impact of improvement science in this program transformation.

Background

The UMD EdD program was designed to recruit individuals employed as leaders in local school systems. Over time, the program began to focus on enrolling students in cohorts from specific school systems. Maryland's school systems are countywide, with the exception of Baltimore City, and several proximate systems, notably Montgomery and Prince George's Counties, are large and able to support a cohort of district leaders.

While this cohort model was successful when it was originally introduced, the program began to struggle for a variety of reasons, not dissimilar from other universities. Perhaps the most crucial factor was a lack of commitment among the college faculty to teach and advise in the EdD program, and a strong preference to be affiliated with the on-campus PhD instead. This fact, coupled with the traditional curriculum and expectations for the dissertation, created a large backlog of EdD students who were considered "ABD," or "all but dissertation." The dean at that time made a commitment to reducing the backlog of students, and over the next few years a number of students completed their programs. However, with only a few champions among the faculty, the EdD program became dormant except for one small and innovative cohort: in 2007, two faculty members, Willis Hawley (the former dean of the college) and David Imig, in collaboration with the National Education Association (NEA), launched a small cohort of about 20 students with the idea of developing a practitioner-based doctoral program for educational leaders working in the policy sector, focused on education policy and its impact on equity.


The UMD EdD Gets a Jump Start

In 2012, the superintendent of Prince George's County Public Schools (PGCPS), William Hite, approached the dean of the College of Education, seeking a collaboration with the university to offer its EdD to selected system administrators. The University of Maryland, which is located in Prince George's County, has a unique commitment to supporting development in the county. For this reason, the superintendent's invitation aligned with the dean's priorities. However, there were no dedicated tenure-track faculty, and the existing program curriculum and dissertation requirements needed an updated approach. The new approach came from the Carnegie Project on the Education Doctorate (CPED), which launched in 2007 and was directed by David Imig, a professor of practice in the UMD College of Education and one of the champions of the EdD. Thus began the first major changes not only to the EdD curriculum but also to its underlying purpose and philosophy, guided by the CPED framework and its definition of the "scholarly practitioner" (CPED, n.d.).

A small group of tenured and nontenured faculty, led by the former associate dean, began to plan an entirely new executive leadership–style EdD program that would be cohort-based, offered through professional weekend seminars and hybrid delivery courses, and, most importantly, organized around a dissertation investigating an authentic problem of practice. A key aspect of the curriculum required students to complete, as part of certain courses, components of their culminating "dissertation in practice," which was to be based on a comprehensive analysis of an identified problem of practice in the students' school system (this process will be explained in more detail later).

As the first cohort of 18 PGCPS administrators enrolled in the approved program in 2012, the key faculty involved in the EdD began to realize that more changes, specifically in the curriculum, needed to be made. One big change was the compact schedule, wherein the core leadership seminars were scheduled over 5 intensive weekends. This required a leap into more online delivery, but more important for the program was what became known as "ensemble teaching": having several faculty share teaching, including developing a syllabus, online modules, and a block of time during a Saturday session. This opened up the courses to a wider range of expertise and also provided a way to engage nonprogram faculty in a limited way.

Two additional cohorts of PGCPS administrators were enrolled in subsequent years, as well as three other cohorts of administrators from adjacent county systems. The program has enrolled 84 system administrators from six school systems. Of that number, 12 students are completing coursework and 49 have graduated, representing a 59% graduation rate within 36–40 months. Another five students are currently collecting their dissertation research data.

While the revised EdD curriculum and expectations clearly moved the degree away from a traditional PhD model, a number of challenges persisted. Among the most immediate were the policies of the university's Graduate School, which recognized the EdD but, except for the credit hour requirements, imposed the same dissertation requirements as the research doctorate. As such, our dissertation framework was revised with each successive cohort based on prior experiences and understanding. Furthermore, in 2017, the College of Education joined the iLEAD (Improvement Leadership Education and Development) network sponsored by the Carnegie Foundation for the Advancement of Teaching. Through engagement with iLEAD, the principles of improvement science began to be more explicitly embedded into coursework and expectations within the dissertation in practice and became the signature pedagogy of the program.

The New "Improved" EdD Program

In spring 2019, the University of Maryland's Graduate School finally recognized a new category of doctoral degrees, the professional practice doctorate:

    The Professional Practice Doctorate is a rigorous and adaptable graduate degree that meets the evolving professional needs of strategically identified target audiences. The Professional Practice Doctoral degree is granted only upon sufficient evidence of high attainment in professional practice. It is not awarded for the completion of course and seminar requirements no matter how successfully completed. These degrees differ from the research and scholarship Doctor of Philosophy (Ph.D.) degree. The following degree programs are recognized as Professional Practice Doctoral Degrees at the University of Maryland: Doctor of Education (Ed.D.), Doctor of Musical Arts (D.M.A.), and Doctor of Audiology (Au.D.). (University of Maryland Graduate School, Professional Practice Doctoral Policies, https://academiccatalog.umd.edu/graduate/policies/professional-practice-doctoral-degrees-policies/, retrieved November 29, 2020)

The dissertation requirement was replaced with a required 6 semester hours of doctoral capstone credits. A five-member Doctoral Capstone Examining Committee is still required, and at least three members must be appointed to the graduate faculty, but the committee can include professional-track faculty as long as their primary academic affiliation is with the university.

A second prompt for change in the program came from a request from the Maryland State Department of Education that the college submit its EdD program for approval for Superintendent 2 certification. This would be the first such approved program in the state and was intended to serve as a model for other programs. In order to obtain approval, the program had to be fully aligned and consistent with the 2018 NELP standards for district leaders. So, beginning in 2019, the EdD program again went through a yearlong major revision process to integrate the NELP district-level standards, improvement science principles, and the new Graduate School policy. The newly revised program maintains many of the former features, including the same coursework, albeit with major revisions to sequence and syllabi, a new investment in a leadership apprenticeship/mentoring component, and of course a capstone project. All changes have been approved, and the first cohort for the revised EdD program is being recruited to enroll in fall 2020.


Teaching Improvement Science Principles

The teaching of improvement science principles was introduced early in the EdD and has been increasingly integrated into the program curriculum, including the dissertation in practice, as the program has evolved. The program relaunch in 2012 reoriented the coursework and dissertation to center on problems of practice, and the newest and current program iteration for the superintendent certification weaves NELP standards and the pedagogy of improvement science throughout the coursework, culminating in an improvement-centered doctoral capstone. The following sections share in more detail the strategies employed in teaching improvement science principles, specifically user-centered problems of practice, seeing the system that is producing the problem, and networked improvement communities (NICs). We will describe how we teach the principles of user-centered problems of practice and seeing the system that is producing the problem through coursework and the doctoral capstone. We then describe a unique partnership with program graduates that serves to accelerate learning through networked improvement communities. The chapter concludes with a first-person narrative by a program graduate of how the learning in the EdD program, and engagement with improvement science principles after graduation, have impacted his work and the district he serves.

Introducing the Principles of Improvement Science

On the very first day of the very first class of the UMD EdD program (Seminar on School District Leadership), we introduce the principles of improvement science as the guiding framework for students' leadership and for their doctoral studies. We introduce students to improvement principles and the problem of practice through both Bryk et al.'s Learning to Improve (2015) and Mintrop's Design-Based School Improvement (2016). Learning to Improve, as the seminal work of improvement science in education, provides the theoretical framework for becoming user and problem focused and for understanding why so many well-meaning education reform efforts have failed to produce significant improvement in student outcomes. The educators we teach come from large and complex school districts that have long histories of failed or faltering attempts to achieve equity in student outcomes. Mintrop (2016) provides students with an accessible and direct application of the principles of improvement through case studies of four school leaders. The principles of improvement science, of defining and analyzing problems of practice and developing "design-based" strategies and solutions, extend from this first course through subsequent coursework. Below we discuss specifically how we teach two principles: user-centered problems of practice and seeing the system that is producing the problem.

User-Centered Problems of Practice

Perhaps one of the most difficult principles for newly enrolled EdD students to grasp is how to define a problem of practice (POP). Students begin the journey of identifying, articulating, and analyzing a problem of practice at the beginning of the first class, and they continue to refine and adjust this problem throughout this first course as well as subsequent courses. While each student develops an individual problem statement, there are many similarities, as might be expected given the commonality of problems faced by school systems across the state. Students work collaboratively and individually to refine their problem statements and analyses.

In working with students both collaboratively and individually, we have found some very common challenges and misperceptions in the problem development phase of learning. Perhaps the biggest challenge is shifting students from thinking about "topics" of interest (e.g., teacher efficacy, distributive leadership) to authentic and enduring "problems." They also engage in what Bryk et al. (2015) refer to as "solutionitis," whereby they frame a problem in terms of a presumed solution (e.g., we need a new teacher induction program; we lack the resources to hire more staff). Another typical start to problem identification is what we have come to call the problem of "outcome." As expected, students initially frame a problem that is associated with their system's strategic goals and stated in terms of a quantifiable state or system accountability indicator (e.g., 25% of fourth-grade English learners score below basic on the state reading assessment; Black males are twice as likely to be suspended compared to their White peers). While these are legitimate system problems for the school leaders enrolled in our program, stated in this way these problem statements do not necessarily indicate the practices that create the problem.

Therefore, we engage the students in the exercise of the "5 Whys." Through online and face-to-face small group work, we ask students to continuously question their initial assumptions about why a problem occurs and to discover the underlying causes of specific elements of the problem they are exploring. This process serves to move them to a deeper analysis of the underlying problem and to gain clarity on the real problem they are trying to address. From this exercise, students revise and develop an initial draft of a problem of practice statement. They must then begin the process of answering crucial questions: What is the scale of this problem in your school/district? Why is addressing this problem important? What are the equity implications of this problem? In addressing these questions, students must examine at least one data source in their system as well as conduct a preliminary review of relevant professional and scholarly sources. It often takes many sessions and layers of analysis to help students separate their problem of practice from a solution or topic that they were strongly committed to studying. The initial problem statement is then revised as students move through additional coursework to reflect their expanded depth and breadth of knowledge of the complexities of their problem.

One common example of an initial problem identified by our leaders has been the concern about disproportionate suspension rates for African American males in school systems across Maryland. Several administrators selected this as an urgent problem and concluded that the solution would be professional development workshops for all teachers in a school, focused on "equity." However, after careful review of the research and evidence regarding the suspension pipeline, our leaders identified several key causal factors, including principal actions and perceptions, past student performance, peer influences, school course structures (such as strict prerequisite grades or courses), and teacher perceptions and actions. Teaching our students to be more explicit and precise in the problem of practice means having them focus on existing actions, or the lack of school system actions and initiatives, that contribute to the problem, such as the patterns of discipline referrals for incidents of "disrespect" or "disruption," or the use of suspension as a consequence for such referrals.

Problem identification is a key component of our program and guides the learning of the student throughout the EdD. Focusing on a specific, well-articulated problem leads to the next phase of the improvement science journey: conducting a causal systems analysis to identify root causes of the problem. This, in turn, positions the problem as something within the sphere of influence of a school or school district to improve through systematic and disciplined inquiry, and leads to consideration of alternative solutions or change ideas that can promote sustained improvement.

Seeing the System That Is Producing the Problem

A second key principle of improvement science as defined by Bryk and his Carnegie colleagues (2015) recognizes today's public educational landscape as a highly complex array of interconnected parts and players. As such, no individual actors can be successful in achieving sustained results without confronting the system of which they are a part.

    In sum, a number of forces have made everything about schooling more complex—its goals, as well as the ideas, tools, and technologies that guide its work. As in health care, few [education] professionals can absorb and effectively respond to all of this complexity in their daily work. So it is not surprising that a chasm exists between what we seek to accomplish and what we actually achieve. (p. 63)

According to Bryk and colleagues, the way toward improvement starts with making these complex systems “visible” so that educators can identify the most promising strategies for change and improvement.


We have embraced Bryk's stance through adoption of a suite of tools championed by Bryk and his Carnegie colleagues to assist our students in "seeing" the public educational system in which they are immersed. During the same first course in which we lead students through problem identification, we also introduce our students to three core improvement science practices: causal systems analysis (the "fishbone" diagram), the driver diagram, and a personal theory of improvement. Students move through the sequence of problem identification and justification, followed by a causal systems analysis, the identification of key drivers of improvement, and finally their theory of improvement. Students apply these practices through in-class guided practice and specific assignments that are embedded across several courses.

Causal Systems Analysis

Developing a problem-specific "fishbone" diagram requires that students deeply interrogate a problem of practice. Part of this process is to engage in "empathy interviews" with those closest to the problem, including students. Our students work together to create open-ended interview questions. Then individual students conduct interviews with all relevant stakeholders. The interviews help our students consider the experiences of all those who live with the problem of practice, including students, teachers, and parents. These interviews inform and, in many cases, help shift our students' perspectives on the factors that contribute to the problem, moving them from blaming one part of the system to recognizing how all involved are implicated in a problem not of their own making. In addition to interviews, our students also begin to identify credible sources of evidence that further support the problem or challenge some of the causes that students may have hypothesized. As with problem identification, the initial fishbone is revisited through subsequent courses as students learn more about specific topics and have opportunities to do deep reading. We endeavor to move students from "one and done" to an understanding that working with a problem takes time, and solutions can evolve with more information.
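For readers who want to make the structure of the causal systems analysis concrete, the artifact itself is just a labeled tree: a problem statement at the "head," major causal categories along the "ribs," and specific contributing causes under each. The sketch below is our illustration, not part of the UMD curriculum; it is written in Python, and the entries loosely paraphrase the suspension-disproportionality example discussed above rather than representing a complete analysis.

```python
# A minimal sketch of a fishbone (causal systems analysis) as data.
# Illustrative only: the problem and causes paraphrase the suspension
# example in this chapter; they are not a complete or vetted analysis.

fishbone = {
    "problem": ("Black males are suspended at twice the rate "
                "of their White peers in our district"),
    "ribs": {
        "principal actions and perceptions": [
            "suspension used as the default consequence for 'disrespect' referrals",
        ],
        "teacher perceptions and actions": [
            "high rate of discipline referrals for 'disruption'",
        ],
        "school course structures": [
            "strict prerequisite grades or courses limit access",
        ],
        "student history and peers": [
            "past student performance",
            "peer influences",
        ],
    },
}

def print_fishbone(fb: dict) -> None:
    """Render the fishbone as an indented outline (head, ribs, bones)."""
    print(f"PROBLEM: {fb['problem']}")
    for rib, causes in fb["ribs"].items():
        print(f"  RIB: {rib}")
        for cause in causes:
            print(f"    - {cause}")

if __name__ == "__main__":
    print_fishbone(fishbone)
```

Seeing the diagram as data also underscores the evidentiary demand described above: every rib and every leaf is a claim the student must defend with interviews, local data, or the research literature.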


Driver Diagrams

The next step in the introduction of improvement science practices leads students through developing a diagram that identifies key drivers for improvement. This shifts students' thinking from studying the problem to studying how to address it. First, students must state a measurable aim or improvement goal, which may also require measurable interim improvements. Students then use the driver diagram as a tool for succinctly defining change areas, such as transforming system policy or improving teacher capacity. As with the fishbone exercise, students need to support their improvement drivers with a variety of credible sources, including descriptions of past attempts by the system to address the problem.

Theory of Improvement

Ultimately, students are expected to construct a personal theory of improvement that emerges from the driver diagram. The theory of improvement defines what an individual student will propose to develop in his or her dissertation. During the introductory course, students are not expected to have a defensible theory of improvement; the culmination of the initial course is a presentation by each student of their refined problem of practice, causal systems analysis, and an initial driver diagram. This framework is then carried forward through subsequent courses and through a series of individual writing assignments aligned with key sections of the dissertation.

Dissertation in Practice/Capstone Project

As our doctoral students progress through the 2-year leadership course sequence, they revisit and refine their problem of practice, fishbone diagram, driver diagram, and theory of improvement to reflect their expanded knowledge. After the completion of their coursework, these student leaders transform their problem investigation into their culminating capstone (formerly the dissertation in practice).
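As with the fishbone, the driver diagram described above has a simple skeleton: a measurable aim, the primary drivers hypothesized to move it, and candidate change ideas under each driver; the theory of improvement is essentially the if-then reading of that tree. The sketch below is our hypothetical illustration in Python, not a UMD course artifact, with an aim and drivers loosely based on the suspension example.

```python
# A minimal driver diagram as data: aim -> primary drivers -> change ideas.
# Hypothetical and illustrative; not a complete improvement plan.

driver_diagram = {
    "aim": ("reduce the suspension rate for Black males by half "
            "within two school years"),
    "drivers": {
        "discipline referral practices": [
            "restorative conversation before any 'disrespect' referral",
        ],
        "teacher capacity for classroom community": [
            "PD on de-escalation and relationship building",
        ],
        "administrative response options": [
            "non-exclusionary consequence menu for minor incidents",
        ],
    },
}

def theory_of_improvement(dd: dict) -> str:
    """Render the diagram as an if-then theory-of-improvement statement."""
    drivers = "; ".join(dd["drivers"])
    return f"If we improve {drivers}, then we will {dd['aim']}."

print(theory_of_improvement(driver_diagram))
```

The point of the exercise is the logical dependency it exposes: each change idea is only as credible as the driver it serves, and each driver is only as credible as the evidence linking it to the aim.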


Throughout this process, access to district data and resources can often be challenging. The program serves mainly school leaders and district administrators who have at least some access. At the same time, we focus our students' capstone work inside their sphere of influence so as not to be overreliant on broader district support.

The capstone is a three-section written document. Section 1 includes a succinct statement of the problem of practice, evidence of the scope of the problem in a school system, and consequences or impacts of the problem on the system. These components are followed by a thorough causal systems analysis, represented as a fishbone diagram. Students are required to develop a fishbone diagram representing three to five major causal factors associated with their problem. To be credible, students must develop a narrative to support each "rib" of their fishbone, with evidence from the research literature and relevant policy reports. Students are expected to extend their narrative discussion through an identification of at least three "primary drivers" for improvement. These drivers must be supported with credible evidence from the research literature showing how the specific interventions have supported the achievement of their aim. Following the causal systems analysis, students present their driver diagram, which illustrates the student's theory of action for how the proposed strategy/change/intervention will lead to improvement on the problem. Students are then expected to engage in specific improvement activities to test their theories of action.

Section 2 of the capstone provides details on how the student will implement the specific improvement activities that grew out of the driver diagram and theory of action. These activities could include implementing short-cycle improvement activities (PDSAs), or could focus on evaluating the impacts and outcomes of a specific ongoing improvement strategy adopted by the system in order to recommend revisions or other actions. Another improvement-oriented strategy could entail an in-depth investigation of a potential improvement initiative, such as the design of a specific professional development activity.

Finally, the capstone must conclude with Section 3, which not only describes the experiences and outcomes of the change initiative described in Section 2 but also presents a detailed plan for next steps to address the problem. These next steps are not standard recommendations for further research or consideration. Rather, they must be specific, time-bound, and grounded in a clear, explicit knowledge of the system in which the problem resides. Students are also required to present the capstone to a committee of faculty and relevant system-level administrators or other key stakeholders.

Accelerating Learning Through Networked Communities

As noted above, the UMD EdD program embraces the development of the scholarly practitioner (CPED, n.d.). The implication of the scholarly practitioner in the context of improvement science is that the practitioner learns from the EdD about both the scholarship of improvement and its direct application to real problems of practice in schools. The EdD dissertation gave students some opportunity to apply principles of improvement in their school systems, and the program encouraged graduates to continue applying these principles once they left the program. However, like most programs, UMD had no formal mechanism for tracking graduates or measuring how the program was impacting their practice. Yet what we did have was a growing partnership with PGCPS and a cohort of over 30 recent EdD graduates serving as principals, principal supervisors, associate superintendents, and in other positions of leadership within the school district. UMD wanted to continue to play a role in helping graduates bring their expertise and research experience to bear on improving student outcomes in PGCPS. Our goal was to advance the power of networks that move beyond the interests and pursuits of a single researcher and toward a collective and disciplined approach to solving problems that exist across the district (Bryk et al., 2015). We also aimed to practice the core principles of research–practice partnerships, which focus on developing shared goals and coequal partnership between university faculty and district leadership (Penuel & Gallagher, 2017).

Following a series of meetings between the Center for Educational Innovation and Improvement (CEii) at UMD and the Office of Talent Development at PGCPS, a postdoctoral network was established, called the Post-Doctoral Improvement Science Action Network (PDISAN). "Improvement Science" was an essential element of the name, in that a central goal of the network was to spread and scale the theoretical constructs and practical tools of the improvement science paradigm. "Action" was also essential in its implication that the network is committed to enacting real change and not simply engaging in the academic exercises of exploration and study. PDISAN invited graduates from the UMD/PGCPS EdD cohort program, along with PGCPS school leaders and UMD faculty, "to examine school improvement research, promote evidence-based practices, publish quality research, and lead improvement networks across the districts that aim to solve tough problems and improve student outcomes" (PDISAN, 2018, p. 1).

The network identified five core functions and goals:

1. To find common or related problems of practice and promising practices, stemming from the research topics, findings, and implications of the dissertations completed by UMD EdD graduates, that could be tested and/or spread in PGCPS schools
2. To work with PGCPS and UMD faculty to review data and research evidence related to the school challenges we address and to design intervention strategies based on the data and evidence
3. To collectively study improvement science and explore its use as the signature strategy and process for helping address problems of practice in PGCPS
4. For PGCPS leaders and EdD graduates to work with UMD faculty to document and publish research–practice partnership outcomes
5. For network members to lead school improvement efforts in partnership with UMD and PGCPS and to serve as ambassadors for improvement inside PGCPS and beyond

The network was launched with an inaugural meeting in the spring of 2018 attended by both the PGCPS superintendent (CEO) and the UMD College of Education dean. UMD faculty conducted a review of over 20 dissertations from UMD/PGCPS graduates and categorized them. Network participants then met in clustered groups and, over the course of several meetings, identified a number of core problems of practice representing the array of analysis and research conducted in their dissertations. Table 8.1 presents this analysis.

Table 8.1. PDISAN Problem of Practice Analysis (Theme: Problem of Practice)

Principal Support: We do not have enough effective school leaders who can consistently create conditions for high levels of student learning and achievement in their buildings throughout the district.

Technology/School Improvement: The district curriculum does not adequately integrate contemporary technology tools and resources in ways that advance student learning and growth.

Parent Engagement: Too many families opt out of our district's neighborhood and comprehensive schools, perpetuating both the perception and reality of neighborhood schools' performance.

Teacher Support: Too few of our teachers have the capacity to create classroom conditions that foster consistently high levels of student growth and learning across our district.

Student Support: There is significant variability in AP enrollment rates and student AP course outcomes across our district, and Black and Latino students are least likely to enroll in or successfully complete AP.

Following the analysis, PDISAN had a series of meetings with the new PGCPS superintendent, Monica Goldson, who became an active participant in PDISAN meetings. At the PDISAN 2018 summer retreat, the postdoctoral fellows engaged with Goldson to develop a systemic problem of practice that would become the first collective effort of PDISAN to design and implement a networked improvement community. Goldson expressed concern that while the district was focused on improving achievement in literacy, math achievement across the district was low, with Black and Latinx students scoring significantly lower. Network members invited UMD math faculty and PGCPS math curriculum staff to PDISAN meetings to study student test score results more deeply. This analysis led to a clear and focused problem of practice: Students of color in grades 3–8 are not demonstrating effective numeracy skills as measured by PARCC and PSAT. During quarterly convenings in the 2018–19 school year, PDISAN met with UMD faculty to do a causal systems analysis, to plan and conduct math learning walks in six schools, and to develop a driver diagram with several change ideas. The work of this math NIC remains active, though circumstances related to the COVID-19 pandemic have caused the team to pivot toward strategies that may focus on teaching mathematical discourse via distance learning.

PDISAN also focused on the problem of practice related to principal support: We do not have enough effective school leaders who can consistently create conditions for high levels of student learning and achievement in their buildings throughout the district. PDISAN members looked to address this problem by building principal capacity to lead school improvement via the design and implementation of the PGCPS School Performance Plan. Every school in PGCPS is required to develop a comprehensive School Performance Plan with clear and ambitious student learning goals. However, for many principals the process was more closely aligned to compliance than to improvement, and rarely were learning goals being met. Fortunately, Goldson and members of the PGCPS Office of Accountability joined PDISAN to engage in planning and dialogue around these issues. Perhaps the most significant step in the process was Goldson's revision of the PGCPS Coherence Framework, which "provides a common visual and language by which employees—and community partners and other stakeholders—can articulate and/or carry out a consistent approach to academic and operational improvements" (PGCPS, 2015, p. 19). The outer ring of the framework represents the core strategic approach the system will use to plan for improvement. In 2019, Goldson's modification added the plan-do-study-act (PDSA) improvement cycle to the outer ring of the framework (see Figure 8.1).

Figure 8.1. PGCPS coherence framework.

In the spring of 2020, UMD faculty conducted about 10 hours of professional development training for about 35 instructional directors (principal supervisors), principal coaches, and other central office administrators on improvement science principles and particularly on planning and executing PDSA cycles. As a direct result of the training, PGCPS has revised its School Performance Plan to incorporate elements of PDSA and has planned training for principals. In addition, UMD and PGCPS will select several schools to participate in a School Performance Plan networked improvement community through the 2020–2021 school year, designed to enhance and scale principal capacity to use improvement science tools in the school improvement process. While the realities of the COVID-19 pandemic may modify specific plans, the commitment to learning, implementing, and scaling improvement science principles remains strong on the part of both the university and the school district.
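For readers new to the PDSA cycles referenced in this training, the cycle is a disciplined loop: plan a small test of change with an explicit prediction, do the test, study the results against the prediction, and act (adopt, adapt, or abandon the change). The sketch below shows one minimal way a team might record such a cycle; it is our hypothetical illustration in Python, not a PGCPS or UMD template, and the example entries are invented.

```python
# A minimal, hypothetical record of one plan-do-study-act (PDSA) cycle.
# Illustrative only; the field names are our own, not a district form.
from dataclasses import dataclass

@dataclass
class PDSACycle:
    change_idea: str      # the small change being tested
    prediction: str       # what we expect to happen (Plan)
    measure: str          # how we will know (Plan)
    observed: str = ""    # what actually happened (Do/Study)
    decision: str = ""    # adopt / adapt / abandon (Act)
    next_step: str = ""   # what the next cycle will test

cycle1 = PDSACycle(
    change_idea="Weekly math discourse prompt in two grade 5 classrooms",
    prediction="At least half of students contribute to the discussion",
    measure="Tally of unique student contributions per session",
)

# After running the test, the team records what it learned (Study/Act).
cycle1.observed = "About a third of students contributed in week 1"
cycle1.decision = "adapt"
cycle1.next_step = "Add turn-and-talk before whole-group share, then retest"

print(cycle1)
```

The discipline lies in the prediction: stating it before the test is what turns a routine planning activity into an inquiry cycle whose results can surprise, and therefore teach, the team.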


Reflections from an EdD Graduate: Impact Across the System

This section of the chapter offers first-person insight from a UMD EdD graduate, Charoscar Coleman. Coleman earned his doctorate in school system leadership as part of a cohort of leaders from the Prince George's County Public Schools system. He entered the program as a high school principal, and subsequent to graduation was promoted to instructional director, a position in which he supervises principals. Coleman's narrative is reproduced in the paragraphs that follow.

While a student at UMD, my problem of practice focused on the college readiness of minority high school students. A widely used measure of students' college readiness is Advanced Placement (AP) coursework. At the time, I was principal of a diverse high school with an enrollment of 2,255 students. From this total enrollment, 46.5% of students were identified as free and reduced-price lunch recipients, 12.6% received special education services, and 90.5% were African American (MSDE, 2015). The school posted an 87.41% four-year adjusted cohort graduation rate, and 59% of all graduates enrolled in a two- or four-year college or university within 12 months of graduation (MSDE, 2015). While the school offered an array of honors and AP classes, the gap in students' college readiness was compelling to me because my advanced-track students also demonstrated this readiness gap in my district. My problem of practice homed in on the enrollment of minority students in advanced placement courses and their achievement on AP end-of-year exams. More specifically, the middle-class students in the district where I was a principal performed well below state and national averages on AP exams. Ultimately, my dissertation, An Investigation of Site-Based Administrators' Perceptions of the School-Based Factors That Influence Students' AP Enrollment and Success, provides further research and findings on variables that impact AP performance in the context of my school district.

One of the key levers for change I was able to identify from research was the fact that students often lack the prerequisite skills to perform well in AP courses. Therefore, to address this gap at my school, our leadership team created a feeder/pathway program called Puma Pride for incoming ninth-grade students specifically identified for AP eligibility. Our goal was to prepare these students for the rigor of AP and ultimately increase student enrollment in AP courses while simultaneously improving their success rate, which the College Board defines as earning a 3, 4, or 5 on the culminating AP exams. Our Puma Pride program established an expectation for students to complete eight AP courses by the end of their senior year and to achieve a score of 3 or better on all of their AP subject exams.

After serving as a high school principal for almost ten years, I became an instructional director (ID) in the same urban/suburban district. In this central administrator role, my primary responsibility is to coach and supervise principals with a focus on continuous school improvement. In my role as an ID, I have become committed to finding ways to infuse the principles of improvement science to benefit our work as leaders across the district. Given that improvement science is a way of thinking that empowers those closest to a problem to have agency in helping to directly solve their problems, I have become invested in helping my principals and district colleagues understand the connections between their toughest problems and the school systems and structures in which these problems are embedded. I have come to believe in the value of learning to "see the system" more fully, and have become convinced that by engendering a greater level of understanding by all members of the organization about the context in which our problems live, we can spur more authentic engagement around addressing a problem.

My exposure to the tools and framework of improvement science was deepened through my direct involvement in the Carnegie iLEAD network and built directly on my experiences in the UMD doctoral program. Through my participation in the UMD-PGCPS partnership, which is a member of the iLEAD network, I have had the opportunity to attend several of the Carnegie-hosted convenings of iLEAD teams from across the country. During these conferences I have learned about improvement science both from the literature and from practical application in schools through site visits and collaboration with other school district teams and university partners. Within the first year of my participation in the network, it became clear to me that improvement science could serve as the conceptual anchor for our school district's work with school performance planning.

As I was becoming involved in the iLEAD network, my district was rolling out a new school performance planning (SPP) template. Each of our schools was being held accountable to a new, comprehensive set of improvement metrics that aligned to each of the components of ESSA (the Every Student Succeeds Act, the 2015 reauthorization of the federal Elementary and Secondary Education Act). The school district mandated that all principals complete an SPP to ensure a plan existed across schools to address the requirements of ESSA. Additionally, our new chief executive officer shifted the school district improvement planning approach from using Datawise to plan-do-study-act (PDSA). This change was symbolized through the district Coherence Framework, which was formerly bound on the outside by Datawise and now is bound by PDSA. This shift from Datawise to PDSA-driven SPPs created some significant gaps in knowledge and skill for central office and school-based leaders who were now responsible for utilizing a new SPP format and implementing PDSAs. The shift occurred without substantive district-wide training on how to engage principals or their supervisors in effective improvement planning, short-cycle testing, or the broader concepts of improvement science.

Given my exposure to the tools and framework of improvement science through both my doctoral program and my participation in the iLEAD network, it made good sense to me to advocate for the integration of the principles of improvement science as the foundation for PDSA implementation across my district. As such, it is my hope that we can better organize ourselves to help develop our principals into effective, adaptive leaders who can elevate a technical improvement planning document (the SPP) into collective action that creates lasting change and impact. This opportunity to integrate the principles of improvement science as a guiding framework for our schools' improvement work is unfolding presently. PGCPS, in partnership with UMD, is in the process of training central office leaders on the principles of improvement science. Additionally, a training plan is under development for school principals and school teams. Furthermore, our School Performance Planning document has been revised to more closely align to improvement science principles.

As a leader in PGCPS, as a member of both the iLEAD and PDISAN communities, and as a doctoral graduate of UMD, I am excited for my district's future. My district leader colleagues in PGCPS have begun to embrace the principles of improvement science in a meaningful way as a pathway toward developing the capacity of our principals as effective adaptive leaders who can address the entrenched problems of practice in their schools and lead continuous improvement. Our school-based improvement process is now being informed by the principles of improvement science, a robust training program is being developed, key data sources to measure improvement are being identified, and opportunities for additional NICs are being developed.

Conclusion

The UMD EdD program has evolved over the last decade into a robust and rigorous professional practice doctorate with the principles of improvement science embedded in our pedagogy. As this chapter has demonstrated, this evolution has been challenging. However, due to the persistence of a small group of committed faculty and college leaders who have championed our cause, the program has become both a powerful leadership development opportunity for local educational leaders and an invaluable lever in helping the university to strengthen and deepen its partnerships with Maryland public school districts.

What we've learned through this journey is that teaching school administrators (or even other faculty) to think and act like "improvement scientists" requires them to unlearn very ingrained habits of mind and systems of operation. One of our favorite teaching tools is a YouTube video called The Backwards Brain Bicycle by a group called Smarter Every Day (2015), about an engineer learning how to ride a "backwards" bike. The lessons in the video—that we all have deeply ingrained cognitive biases and that knowledge does not equal understanding—provide powerful analogies for learning about and applying the principles of improvement science to our work as educational leaders. After all, in order to stay committed to the work of improvement, we as educators can find hope in our collective and ongoing learning.

References

Bryk, A., Gomez, L., Grunow, A., & LeMahieu, P. (2015). Learning to improve: How America's schools can get better at getting better. Harvard Education Press.

Carnegie Project on the Education Doctorate (CPED). (n.d.). The CPED framework. https://cped.memberclicks.net/the-framework

Maryland State Department of Education. (2015, November 4). Maryland report card. https://reportcard.msde.maryland.gov/Graphs/#/Graduation/GradRate/1/6/3/1/16/1519

Mintrop, R. (2016). Design-based school improvement: A practical guide for education leaders. Harvard Education Press.

Penuel, W., & Gallagher, D. (2017). Creating research-practice partnerships in education. Harvard Education Press.

Post-Doctoral Improvement Science Action Network. (2018). Letter of invitation [Letter]. University of Maryland, College of Education.

Prince George's County Public Schools. (2015). SY 2016–2020 strategic plan. https://www.pgcps.org/strategic-plan/

Smarter Every Day. (2015). The backwards brain bicycle [Video]. YouTube. https://www.youtube.com/watch?v=MFzDaBzBlL0

chapter nine

Empowering Incremental Change Within a Complex System: How to Support Educators to Integrate Improvement Science Principles Across Organizational Levels

JACQUELINE HAWKINS
University of Houston

MONICA MARTENS
University of Houston

Abstract

The purpose of this chapter is to provide practical activities that illustrate the connections between the political landscapes within which people function, change management within systems, and the principles of improvement science. It joins the theoretical study of incremental change with the application of pedagogical tools to assist faculty with their teaching, curriculum development, and ongoing improvement of EdD programs. The material presented here has been used as part of a research course sequence for 1st- and 2nd-year students and in courses situated closer to graduation. It is also useful when working with partner organizations on regional improvement initiatives. The learning objectives address:

• how to assess a political landscape for change within the context of professional employment;
• how to implement change management within an educational system;
• how to use the principles of improvement science to understand cycles of improvement within complex systems;
• how to focus all members on the process at hand as they become change agents; and
• how to help leaders, change agents, and stakeholders (categories may overlap) to understand each other.

The activities and tools are intended to help emerging leaders in doctoral programs to enhance their self-confidence, develop their creative abilities, and hone their interpersonal relations skills. By extension, they arrive at manageable doctoral research plans.

Background

Decades of accountability research show that educational approaches and interventions work for some students in some contexts while failing a disproportionate number of students who are representative of historically marginalized groups. Our students arrive in the EdD program fully aware of this predicament. Dissatisfaction with educational outcomes (in general) and equitable educational opportunities (in particular) has prompted leaders to try a variety of approaches that are evidence-based (e.g., Institute of Education Sciences, n.d.). However, positive outcomes are often undetectable or erratic at best. Sometimes the fault is due to a siloed approach that lacks a plan to integrate efforts within a system. In other cases, change agents at multiple system levels do not take into account differing focal points and perspectives. Change in one area might produce a domino effect with unintended negative consequences. Additionally, a promising tool borrowed from one setting may not be generalizable to another (Bryk et al., 2015; Mintrop, 2016).

Students learn to consider this background and history in relation to three practical aspects of problem-solving from the learning science perspective emphasized in the program:




Credo 1. Start with your proximal knowledge—your informal theories and intuition.



Credo 2. Evaluate your perspective in relation to a larger body of knowledge (the distal). Knowledge is like a vast river with energy, undercurrents, boulders, and side shoots. It is salient in paradigms and theories, research, laws, and policies.



Credo 3. Balance systematic replication of what works in your environment with equal attention to understanding why it worked. Relate context to action.

Students who are studying change management in both K–12 and higher education settings will commonly run across parallel conversations about these “compass points” as educators and leaders search for ways to link theory/intuitive thinking with practice (e.g., Bryk et al., 2015; Love, 2012; Mintrop, 2016).

The Nexus of People, Change, and Approach

Doctoral students and educational leaders (often one and the same) necessarily expend a great deal of energy on planning how to guide improvement. They share that the way people work together is a particular challenge. From their sphere of influence they learn to be attentive to the uniqueness of their environment, the composition of stakeholders, and the historical institutional background. With this understanding, while developing this chapter we decided to aggregate these ideas under the concepts of political landscape (the context of change), change management (determining if movement occurred and the degree of change), and improvement science principles (the cornerstones for actualizing change). In Figure 9.1 we represent that these notions are not mutually exclusive but are interwoven. Failure to engage any of the three may result in failure of the whole. In offering this graphic we also allow for the fact that one idea may seem to predominate over the others, based on particular circumstances and professional backgrounds. Doctoral students do not necessarily develop a full understanding of one area and then move logically to the next. Instead, they independently bring these themes into focus over time as they find relevance, interweave new knowledge, and experience the juncture of their university studies and their professional work.

Figure 9.1. Context matters—Three interrelated issues. (The figure depicts three interwoven elements: political landscapes (proximal—distal), change management, and principles of improvement science.)

Political landscape refers to the levels at which change can occur—within a small workgroup, a single campus, a district, an institution of higher education, a consortium of institutions, nonprofits, regions, states, tribes, the United States, and other countries. Most often our students are engaged in change focused at a small group or local level but are likewise aware of how the next level affects their actions.

Change management refers to the various manners of organizing and leading people, both in terms of strategy for building relationships and tools to aid the group's work. We recognize that management can involve numerous roles—top-down or prescriptive, consultative, collaborative (Knackendoffel et al., 2018). In this chapter we presume a management strategy that is in harmony with the ideals of shared governance and stakeholder involvement to the greatest extent possible. This term also includes the tools that accompany the management of change, such as effective and purposeful measurement. Change management necessarily involves consideration of the timing of change, the logical steps for guiding change, and assessment of the change process.

Improvement science principles refer to the six interconnected aspects articulated by Bryk et al. (2015). As a reminder, they are to (a) make the work problem-specific and user-centered; (b) focus on variation in performance; (c) see the system that produces the current outcomes; (d) understand that we cannot improve at scale what we cannot measure; (e) use disciplined inquiry to drive improvement; and (f) accelerate learning through networked communities (pp. 12–17). We especially explore the networked improvement community (NIC) (Bryk et al., 2015), because it is a topic of interest to graduate students as they progress through the program. Because the notion of NICs speaks to relationships between people as opposed to steps and tools, it is a less tangible principle to grasp, particularly if students have few examples from their work background of being involved in a NIC.

Indeed, all three notions involve people and, by extension, social relationships, in which critical thinking can be a harmonious experience when trying to foster growth and change. Thus, political landscape, change management, and the IS principles all address the working relationships among people and how adults react to change in the workplace. When we discuss the topic of group dynamics, we often tie it to the notion of measurable outcomes because (a) effective group work is aided by attention to robust measurement for many reasons and (b) measurement is a primary focus of the IS approach, which our students practice to prepare for doctoral research.
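Principle (d), that we cannot improve at scale what we cannot measure, has a practical corollary in our coursework: students need a simple, defensible way to decide whether a change signals improvement rather than noise. One widely used device in improvement work is a run chart with a "shift" rule, under which a run of several consecutive points on one side of the baseline median is treated as a signal. The sketch below is our illustration in Python with invented weekly data, not a tool from our program.

```python
# Minimal run-chart "shift" check: flag a run of consecutive points
# above the baseline median. All data are invented for illustration.
from statistics import median

baseline = [12, 15, 11, 14, 13, 16, 12, 14]   # weekly measure before the change
post_change = [17, 18, 16, 19, 17, 20, 18]    # weekly measure after the change

center = median(baseline)

def longest_run_above(points, center):
    """Length of the longest run of consecutive points above center."""
    best = run = 0
    for p in points:
        run = run + 1 if p > center else 0
        best = max(best, run)
    return best

run_len = longest_run_above(post_change, center)
# A common rule of thumb treats a run of 6+ points on one side of the
# baseline median as a shift rather than random variation (here we
# assume that a higher value of the measure is the desired direction).
print(f"baseline median = {center}, longest run above = {run_len}")
print("signal of improvement" if run_len >= 6 else "no clear signal yet")
```

Even this small exercise reinforces the social point made above: a group that agrees in advance on its measure and its decision rule has far less to argue about after the data arrive.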

Chapter Organization and Context for Development

This chapter is organized into five learning goals. Each one relies on certain aspects of the three interrelated topics. An example of an assignment or pedagogical tool is offered for each of the five; certainly, they are interchangeable across goals. Some have been used for group work within the classroom and others for individual assignment work. In many cases, these tools and activities could be used in workshops with adults in a professional development or consulting situation. We offer them as pedagogical examples to stimulate further thinking (see Table 9.1).

Table 9.1. Pedagogical Examples

How To | Tool/Activity
1. Assess a political landscape for change | Guided conversation with critical friends
2. Implement change management | Note-taking guide for NICs; data inventory project
3. Apply IS principles to improvement cycles and change outcomes | Nesting of cycles; local examples
4. Focus all members of the group on the change process | Pilot a mini SoC & LoU protocol; development of a PD program using CBAM; innovation configuration map
5. Help leaders and change agents understand each other | 7/14/28; graphic representation of ideas

The EdD Professional Leadership: Special Populations program at the University of Houston attracts student-practitioners from a variety of work settings. A large proportion of our students continue to represent the traditional focus of the education doctorate—teachers, specialists, and administrators in K–12 schools who manage special programs and the needs of an increasingly diverse student body. We also enroll students who work as staff, faculty, and administrators in higher education, at both 4-year and 2-year institutions. Some of our students are practicing clinicians (e.g., psychology and speech therapy). A small percentage teach the fine arts. We also enroll practitioners from nonschool settings, such as juvenile delinquency rehabilitation and public health. A few students have worked across these various settings. Therefore, each cohort can differ significantly from the prior one—in one year we might have great variation in new students, and in another year they may share more common ground in terms of career focus.

We likely attract such a diverse student body because we deliberately title our program "Special Populations." By this we mean to be inclusive of any group of learners that is at risk for not thriving in society, and we connect directly with CPED's first principle for a program, that it "is framed around questions of equity, ethics, and social justice to bring about solutions to complex problems of practice" (CPED, 2019, para. 7). Thus, special populations encompass children and adults, those with special needs designations and those without, and all learners who are at risk of being socially and economically marginalized due to inadequate opportunity. Within our metropolitan area, we have many challenges, not the least of which is the growing socioeconomic divide that affects many of those we would deem to be part of a special population (Kinder Institute for Urban Research, 2016).

The cohorts across our EdD program share a common interest in bridging the research-to-practice gap. Also, each of their problems of practice relates to one or more of the four areas in which our faculty specialize: early literacy, the transition pipeline, professional development, and systems change. During their time with us, students learn how to guide cyclical improvement in incremental stages, no matter the content area. This is facilitated because coursework focuses on skills and processes that can be applied to problems irrespective of content area or professional affiliation. From this perspective, students can generalize their knowledge to other content and to future professional activities that focus on leadership writ large. The following tools and activities are examples of how we prepare them to address inequity in education systems in our region.

Goal 1: How to Assess the Political Landscape for Change Within the Context of Professional Employment

Educational leaders necessarily decide how to approach a change project based on their sphere of influence—that is to say, their current level of power, autonomy, and group membership within which they work (Mintrop, 2016). We relate this to feasibility and interpersonal relations in order to determine how to study and work toward incremental and impactful change. For students, consideration of this idea is particularly critical when determining the range of doctoral research in which to engage.


The parallel topic of the networked improvement community (NIC), and the two ways of developing one—convening current partners in the workplace or building a NIC from scratch with a charter—form another layer of understanding for students trying to begin their study of a problem of practice (PoP) (Bryk et al., 2015, pp. 16–17). As we meld these notions with the planning phase of the plan-do-study-act (PDSA) cycle, we call attention to proximal experiences, institutional history, relationships to other organizations, and "the expertise in the room." Students learn what they know about their organization and how they know it; what is available for further analysis; what is unknown about the topic; and how to use their expertise to implement, study, and act upon what was planned. This planning process also serves to illuminate gaps in knowledge, services, and capacity that often emerge through critical conversations.

Activity: Guided Synchronous Conversations with Critical Friends

Oral conversation with fellow students is a pedagogical tool in use throughout the doctoral program, but it is especially critical in the two-course methods sequence when the entire cohort is together, even if working at various paces to earn credit hours. Grouped into pairs or trios of critical friends (Bambino, 2002), a practice the students also refer to as "speed dating," they discuss, in multiple course meetings throughout the semester, their current roles in relation to their problem of practice and ideas about doctoral research. Instructors use guiding questions to shape these conversations. Over many weeks, and with continual exposure to texts such as Bryk et al. (2015), students improve their articulation of fundamental concepts related to PoPs and interpersonal relations—their sphere of influence, their workgroups or NICs, and the steps involved in the planning phase of the PDSA cycle. They also learn to relate their local PoP to regional and federal contexts. This oral discussion practice can last for 15–30 minutes, sometimes with a change of partner.


Observed positive outcomes for students include improved articulation about their PoPs; recognition of gaps in knowledge; identification of sources of evidence, legal and policy supports, and like-minded organizations; critical thinking and evaluation of texts; preparation for discussions with advisors; cohesion of the cohort overall; and improved written communication.

Goal 2: How to Implement Change Management Within an Educational System

Depersonalizing the change process is a topic of interest for students and new leaders who are planning an improvement project. They are necessarily concerned with how a group will respond to a call to actionable change. Many tools for depersonalizing this process—such as driver diagrams—have been offered in books about improvement science (e.g., Crow et al., 2019). We suggest two additional activities here: (a) a group discussion facilitated by a worksheet about working groups and (b) a data inventory assignment. Together they further the goal of impartially examining the context of the problem and the knowledge base.

Activity: A Note-Taking Guide for NICs

This worksheet guides participants in thinking through the ways their organization has responded to the solving of a problem. The guide relies on the principles of IS and poses questions to stimulate conversation about defining problems and examining the history of the organization with this PoP. It also introduces participants to the importance of data. A brainstorming session to guide discussion about the meaning of context could occur prior to the activity, or a discussion could occur afterwards about what the participants learned in relation to context. The guide is provided in the appendix at the end of this chapter.

It is important to note that students may not complete the guide in the order listed. Rather, they are more likely to use the guide as a "parking lot" as they gather information, interview others in their organization, and begin to build a picture of what is happening within their context as it relates to their PoP. Additionally, this process may teach them that they must research a more focused PoP, since some of the "givens" that they understood to be known need to be explored before they move to a next step.

Activity: Data Inventory

Students in the first methods course are assigned the task of researching the available data on their PoP. They are encouraged to explore local (proximal) data as well as distal (regional/national) data. In a paper, they are required to discuss the following:

1. Data available that relate to your topic
2. Format of the data, location, and accessibility
3. How the data can help answer your questions
4. How the data fail to answer your questions

Since a primary value of improvement science (IS) is to work efficiently and to fully examine a problem of practice from all angles, this activity combines an emphasis on discovery of available data with a focus on the larger context within which the PoP is situated. As a result, some students also begin to conceptualize how cycles of improvement could be nested. They also discover who in the regional area is concerned about the same issue. Finally, a few discover that data needed for their doctoral thesis may already be available. Oftentimes, available data can provide evidence of an assessment of need, changes over time, demographic shifts over time and subgroup inequities at any point in time, and the lagged impact of a policy or an initiative. Databases, documents, websites, and policies can all contribute to understanding the history and context of the PoP and may provide a rich source of evidence for a variety of analyses.
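For cohorts comfortable with a little scripting, the inventory itself can be captured in a structured form. The following sketch is our illustration only (the entry and all field names are invented, not part of the course materials); it shows how the four required discussion points can map onto explicit fields of a record:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class DataInventoryEntry:
    """One source of evidence catalogued for a problem of practice (PoP)."""
    name: str                  # what the source is
    scope: str                 # "proximal" (local) or "distal" (regional/national)
    data_format: str           # spreadsheet, PDF report, web dashboard, ...
    location: str              # where the data live
    accessibility: str         # public, request-only, restricted, ...
    questions_answered: List[str] = field(default_factory=list)
    questions_unanswered: List[str] = field(default_factory=list)

# A hypothetical entry a student might record:
entry = DataInventoryEntry(
    name="State early-literacy benchmark results",
    scope="distal",
    data_format="downloadable spreadsheet",
    location="state education agency website",
    accessibility="public",
    questions_answered=["How do district literacy outcomes compare statewide?"],
    questions_unanswered=["Why do outcomes vary across campuses in the district?"],
)
print(f"{entry.name} ({entry.scope}): answers {len(entry.questions_answered)} "
      f"question(s), leaves {len(entry.questions_unanswered)} open")
```

Keeping the inventory in this shape makes it easy for a student to see, at a glance, which questions remain unanswered by any available source.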


Goal 3: How to Use the Principles of Improvement Science to Understand Cycles of Improvement to Change Outcomes Within Complex Systems

One of the IS metaphors we invoke often in class is the notion of continuous change, often envisioned as advancing spirals of upward momentum. Each cycle informs the next cycle of change. Another way to look at it is through nested levels of improvement. Seasoned leaders know how to engage the various components in IS processes and can prioritize activities that ensure that all individuals involved at the various levels work cohesively to become the agents of change that an organization needs. Doctoral students need an understanding of how these pieces fit together, especially if they have less lived experience in their profession.

Activity: Instructor-Led Discussion About IS Principles at Multiple Levels

This example (Figures 9.2–9.4) focuses on the integration of three levels (student, teacher, and campus) with the six IS principles (Bryk et al., 2015). Following the instructor's presentation, students could develop their own coordinated levels of improvement to identify places where good ideas could go awry. This could occur in small-group discussion with the help of critical friends or be presented as an independent assignment. It involves learning to present information in graphical form, which also informs a later activity in this chapter. Lecture notes are included with each graphic.

Lecture Notes

At the student level, the PoP is that student literacy skills are inadequate as demonstrated by literacy outcomes and that there is unacceptable variation in literacy outcomes. The current system links these student outcomes to teacher capacity—both for literacy instruction and for the ability to use data (student outcomes) to inform instruction. The NIC focuses on the support that will be necessary to drive improvements in student literacy outcomes. However, without attention to other levels within this system (teacher and campus), the focus may be on what is wrong with some or all of the students rather than the bigger picture of teacher capacity and campus support.

Figure 9.2. Improvement science model—student level—addressing literacy in a district. [The figure centers on "Student Literacy Level" and arrays the IS elements around it: problem of practice (inadequate literacy levels), variability (early literacy component outcomes), NIC (supports to drive improved student literacy outcomes), disciplined inquiry (evidence-based interventions matched to student assessed needs), current system (supports for instruction and assessment of early literacy; teacher capacity), and measures (early literacy scope, content, quality).]

Figure 9.3. Improvement science model—teacher level—addressing literacy in a district. [The figure centers on "Teacher Capacity for Literacy Instruction" and arrays the IS elements around it: problem of practice (inconsistent literacy instruction and data use capacity), variability (teacher instructional data use capacity), NIC (supports to build teacher capacity), disciplined inquiry (evidence-based interventions: literacy instruction, data use), current system (teacher supports, professional development), and measures (teacher knowledge, skills, and readiness for change).]


Lecture Notes

Student outcomes are nested within teacher capacity. Therefore, at the teacher level, the problem of practice is identified as inconsistent teacher capacity for literacy instruction and data use. The current system links outcomes to teacher professional development (PD). The NIC focuses on how to support and build teacher capacity for literacy and data use. Again, the focus may be on what is wrong with the teachers rather than what supports can be provided to realize system change.

Figure 9.4. Improvement science model—campus level—addressing literacy in a district. [The figure centers on "Campus Literacy Level" and arrays the IS elements around it: problem of practice (inadequate current and future literacy levels), variability (early literacy levels, student characteristics, teacher capacity), NIC (community context; supports to build teacher capacity and improve student outcomes), disciplined inquiry (change management; evidence-based interventions; data use, linking instruction to data), current system (campus supports, professional development, campus improvement plan), and measures (early literacy, teacher capacity, campus readiness for change).]

Lecture Notes

At the campus level, the problem of practice is inadequate literacy levels (current and future). The current system links these current and future outcomes to supports, PD, and the campus improvement plan (CIP). The NIC focuses on the community context and how to build teacher capacity to improve student outcomes. The knee-jerk reaction may be to clone a tried-and-true CIP rather than focus on the community context within which the campus sits.


Integrated approaches are essential to success. Recalling our three credos of learning science mentioned earlier—and our three concepts of political landscape, change management, and IS principles—this activity can be used to develop important capacities:

• The capacity to link a local problem of practice (e.g., support for English language learners) to the context of policy at the state, national, or international levels (e.g., the magnitude of the problem and for whom, legal mandates, and the Geneva Convention)
• The capacity to recognize the relevance of the experiences and knowledge bases of other disciplines (e.g., how environmental turbulence research in business can relate to the educational challenges at a border district), which can increase not only relevance to practice but also encourage leaders to develop a more robust set of skills
• The ability to visualize the principles of IS and see their interconnections

Activity: Relating IS Principles to Local Examples

We bring as many local examples into classes as possible, especially when the research of program graduates demonstrates how planning must adjust to unforeseen circumstances. This is especially enriching during cohort meetings when graduates present their research. Additionally, we assign students the task of engaging with written artifacts. We encourage students to find a document that relates to their PoP (e.g., a strategic plan, a technical report, a campus improvement plan, or a capstone project). As they read the document, they use the IS structures (in Figures 9.2–9.4) as a note-taking guide. We intend for them to use the document to isolate components of the six principles of IS (whether it explicitly addresses IS or not). This activity helps students to take the static blueprint of the six principles of IS and see a dynamic set of processes that show how the principles have been activated. We ask them to identify how the document responds to each of the six principles, which principles were omitted, and how they would change the content and/or the process.

Goal 4: How to Focus All Members on the Process at Hand as They Become Change Agents

Students explore the nuances of interpersonal relationships around a change event in two courses in particular—one about adult learning theory and another about consultation and coaching. Whether their professional role is advisory, directive, or other, we practice how to depersonalize the nature of change so that parties come willingly to the table and feel included and important in developing strategies. Students are rightly concerned about how to develop and sustain NICs, especially if they have little experience with the process beyond reading examples in textbooks. They need an assessment tool that helps them determine the degree to which others in their context share their level of concern for change, the degree to which colleagues use innovations in their practice, and what else is happening in a context that may detract from the mission of changing outcomes.

The concerns-based adoption model (CBAM) originated at the University of Texas Research and Development Center for Teacher Education during the early 1970s and continues to be relevant (Anderson, 1997; Hall & Hord, 2011). The model stipulates that classroom instructional change is a process: teachers vary their approaches, the improvement path is not linear, and the change process is personal. While these statements focus on the individual, the emphasis is not on "discrete innovations" but rather on "organization-focused initiatives" (Anderson, 1997, p. 332). To that effect, CBAM provides several tools for understanding how to support participants. Typically, this information is aggregated at the organizational level for a facilitator (Anderson, 1997). Naturally, these tools could extend to working groups guided by the PDSA cycle, so that stakeholders could review their process together. We have used CBAM as a tool to help educators identify innovations in which their organization engages, to take the temperature of their organization's stages of concern about the need for change, and to determine the degree to which their organization's members use the various materials available to them as a roadmap for change.

Change management is like an exploration of the physics of behavior and movement. CBAM helps leaders to determine whether their organization has a history of innovation and gives them a way to determine whether current or prior innovations have produced the desired results by exploring the logical pathways. It also helps them to identify where members' stages of concern lie. For example, in an organization where there is no sense that a problem exists, it may be insufficient to start with an intervention; interventions may be deemed unnecessary, so the first step is to help members realize the urgency of the problem and the importance of intervention. While CBAM helps a leader take the temperature of an organization, we suggest that a more granular approach could help a leader move to individual levels of support—that is, similar to professional learning communities (DuFour et al., 2010) or the IS focus on variation, rather than an aggregate approach that targets the midpoint. Although an organization's midpoint may look just fine, it may be a combination of two extremes. A granular approach that demonstrates the variation may help change agents avoid designing interventions for a midpoint that is not representative of either extreme.

Activity: Pilot a Mini SoC & LoU Protocol

Refer students to the American Institutes for Research website (www.sedl.org/cbam) and introduce them to CBAM's stages of concern (SoC) and levels of use (LoU) interview protocols and questionnaires. Encourage students to customize the tools to generate their own instruments and have them pilot the instruments with a small group at work. Have students use the visual organizer in Figure 9.5 to identify the stages of concern, levels of use, and outcomes related to their PoP at both the individual level and the group level. What would they decide to do based on the results? Would they do the same for everyone? Where would they start the change? Who would be in their NIC?


Concern | Use | Outcomes | Action
Low concern | Low use | Poor outcomes | Generate urgency, then custom PD
High concern | Low use | Poor outcomes | Deliver custom PD
High concern | High use | Poor outcomes | Observe & develop custom PD
Low concern | High use | Good outcomes | Monitor & custom PD

Figure 9.5. Differentiated action for individual or campus stages of concern, levels of use, and outcomes.
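For students who think computationally, the decision logic of Figure 9.5 can be restated as a simple lookup. The sketch below is our illustration only (it is not part of CBAM's materials); the category labels come straight from the figure, and combinations the figure does not show are flagged for individual judgment:

```python
# Differentiated actions keyed by (stage of concern, level of use, outcomes),
# following the four rows of Figure 9.5.
ACTIONS = {
    ("low concern",  "low use",  "poor outcomes"): "Generate urgency, then custom PD",
    ("high concern", "low use",  "poor outcomes"): "Deliver custom PD",
    ("high concern", "high use", "poor outcomes"): "Observe & develop custom PD",
    ("low concern",  "high use", "good outcomes"): "Monitor & custom PD",
}

def differentiated_action(concern: str, use: str, outcomes: str) -> str:
    """Return the figure's suggested action, or flag an unmapped combination."""
    key = (concern.lower(), use.lower(), outcomes.lower())
    return ACTIONS.get(key, "No rule in Figure 9.5; examine this case individually")

print(differentiated_action("High Concern", "Low Use", "Poor Outcomes"))
# -> Deliver custom PD
```

Restating the organizer this way also makes the figure's gaps visible: a case such as low concern, high use, and poor outcomes falls outside the chart and deserves its own conversation.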

Activity: Creation of a Professional Development Activity—Stakeholder-Focused

In preparation for devising an Action Plan (chapter 6 of their doctoral thesis), students work through a course about adult learning in which they create a PD opportunity that explores how to guide a group of adults through change. Working through the guidelines of CBAM and IS, the students focus on strategies that involve stakeholder input and the development of actionable goals. This involves working to connect the activities of the PD with the desired outcomes in order to drive the improvement project to the next stage. For some, the project is hypothetical at this point, and they must make certain assumptions about their sphere of influence and network to envision a successful PD experience. As a result, they consider deeply who is in the network for the improvement project. They are supported by faculty with current and frequent experience in leading change through PD. Through nested assignments, students explore this topic in oral communications with their instructor and critical friends, by writing about the activity, and by presenting their PD graphically.


Activity: Innovation Configuration Map

Educators often task themselves with many initiatives. These can co-occur, follow one from the other, or be additive. Prior initiatives are rarely evaluated in a systematic manner, and educators may continue to engage in activities because they are familiar (or understood to be expected) rather than adjusting them or jettisoning them altogether. Educators can create an innovation configuration map to help determine just how much is happening, how long it takes, what it costs, and what the outcomes are. Figure 9.6 provides a sample chart that can be completed to determine the status of innovations (i.e., everything that's going on) at a particular location. Time and costs associated with training, preparation, and direct interventions should be included. This activity helps to determine what is working, what is time- and cost-effective, and what needs to be stopped, changed, replaced, or adjusted. One caveat: educators should know in advance that this can become a never-ending process once they see the system components and their interconnections, and that they may find themselves slaying sacred cows.

Innovation: One-on-one literacy instruction
Time: Literacy specialist 1 hour per day outside of general class
Cost: High (one adult teaches six students per week)
Outcomes: Student outcomes are poor; limited transfer to general class
Monitor/Discontinue: Discontinue—transition to general class support, train teacher teams, provide additional support to small groups

Innovation: In-class support for literacy
Time: Literacy specialist 1 hour per day in a general class with a teacher
Cost: Lower (one adult supports general class instruction and the general class teacher)
Outcomes: Students transfer learning to the general education class; specialist coaches the teacher as PD
Monitor/Discontinue: Continue—provide customized PD to general class teachers in addition to in-class support to students

Figure 9.6. Mapping campus or individual innovations (example of one completed).
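An innovation configuration map is, at bottom, structured data, so it can also be kept in a form that is easy to sort, filter, and revisit each year. The following sketch is our illustration only; it records the two completed rows of Figure 9.6 as records:

```python
from dataclasses import dataclass

@dataclass
class Innovation:
    """One row of an innovation configuration map (cf. Figure 9.6)."""
    name: str
    time: str      # staff time the innovation consumes
    cost: str      # relative cost ("high", "lower", ...)
    outcomes: str  # observed results
    decision: str  # "Continue", "Discontinue", "Adjust", ...

# The two rows from Figure 9.6, recorded as data:
inventory = [
    Innovation("One-on-one literacy instruction",
               "Specialist 1 hr/day outside general class",
               "high", "Poor outcomes; limited transfer", "Discontinue"),
    Innovation("In-class support for literacy",
               "Specialist 1 hr/day in general class",
               "lower", "Learning transfers; coaching doubles as PD", "Continue"),
]

for row in inventory:
    print(f"{row.name}: {row.decision} ({row.outcomes})")
```

Kept in this form, the map can grow with each new initiative, and the decision column makes it harder for a quietly failing innovation to escape review.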


Goal 5: How to Help Leaders, Change Agents, and Stakeholders (Who May Overlap) Understand Each Other

Leaders, change agents, stakeholders, and members of the community use various lexicons to describe their relationship to an urgent educational problem. They also have different lived experiences with the problem. Trying to understand this complexity is challenging. Being able to adjust one's message to the particular audience and allotted timeframe is a helpful skill to develop. In addition to practicing elevator speeches (1- to 2-minute summaries of their PoP), our doctoral students also practice presentation skills with an emphasis on graphically representing their ideas.

Activity: 7/14/28 Slide Presentation

Checking for understanding is an important aspect of a leader's role, and helping students hone those skills is essential to their future success. They must learn how to present their point in a logical manner, do so in a relatively short period of time to ensure their audience stays alert, use visuals that audience members at the back of the room can see, and anchor their work in the slides (rather than reading from them). Have educators generate a slide presentation that lasts for 7 minutes, consists of 14 slides, and uses a 28-point font minimum. Include pictures and graphics—but do not use them to fudge on the font size. Assign practice in pairs and rinse and repeat this activity as a tuning device. In fact, we encourage this activity to be a culminating assignment in which the student repeats it in a different course each semester, progressively adjusting the PoP and the capstone. By the time students are at the capstone and presenting their work, they will be seasoned presenters. (In the capstone, faculty may want to adjust the time and number of slides depending on context.)

Activity: Graphic Representation of Ideas

A picture is said to be worth a thousand words. That is true of art appreciation, but it is also true of communicating for understanding.


Cartoons, graphs, charts, photographs, and so on all help an audience to understand the message. Repeating the 7/14/28 presentation activity as a 7/14/graphic (i.e., no text) activity helps educators to conceptualize their content in graphic form. The care with which they must select graphics deepens their understanding of their content and helps them to realize the different perspectives they will find in their audience. Educators find this a challenge—words, after all, are our life. Once they have adjusted to this format they consider it fun, and it works well with voice-over presentations that can be activated in kiosk format or on a website, and/or for presentations to individuals with lower literacy skills.

Conclusion

A wonderful aspect of the development of students who are also educator-practitioners is that the cocreation of knowledge is experienced both as a student and as a practitioner, both in the classroom and in the professional environment. As we enter this next decade and address challenges that sometimes feel insurmountable, our goal is to continually reinvent these activities in order to engage students in constantly evaluating the context of their work for both research and practice. If graduate students can learn to manifest their futures, they can surely impart this skill to their students in whatever age-appropriate ways they can imagine. Doing so requires deep questioning about the ways we work and engage with others. While the activities and tools above are meant to help people move practically through projects at work and the improvement of student outcomes, they offer an intangible quality when explored by the individual—that of opening up conversation in unforeseen ways. And this can happen as graduate students develop their confidence in collaborating with others. In particular, while we emphasize writing and creating presentations, we do so in the service of improving our oral communication skills, which form the backbone of the majority of our work.

While our students and cohorts are diverse in terms of professional experiences and goals, they all have one immediate need upon entering the program: to devise a manageable capstone (doctoral thesis) project that enables them to explore the topics mentioned above from their current sphere of influence. Since they are not resident researchers within the university, as is common in PhD programs, and rely on their professional experiences to supply ideas for an enriching research experience, they must strategize with their advisors about a plan that honors the expectations of the program within the boundaries of their working lives. In Figure 9.7, we show how the integration of the three learning science themes (the proximal/distal/action-oriented perspective) is situated within an assessment of context and history to help the student decide how to enter a PDSA cycle to conduct research. From there we develop a plan concerning method and theory.

Figure 9.7. Narrowing down the research project. [Flowchart: issues in education (our department foci: literacy, transition pipeline, professional development, systems change) → assessment of students' work environment (readiness for change, their sphere of influence, and their existing networks) → problems of practice ("What are we trying to solve specifically?") → research questions → thesis research areas that determine methods → the plan-do-study-act cycle → decision process for planning the thesis.]

The chapter has provided examples and illustrations of processes that support faculty, students, and administrators in focusing on an integrated systems approach that generates feasible problems of practice at various levels within organizations, develops and studies viable innovations (often as a dissertation in practice), and illuminates tools to realize action plans for future instruction and professional development. Leaders with more robust integration skills do not follow one standard process. Rather, these leaders engage a broader set of tools that lead to a broader mindset, an understanding of reciprocal determinism, and a respect for diverse views. Essentially, the hallmark of a seasoned professional (Credo 3) is the ability to recognize their sphere of influence; effect change across multiple levels; determine when and how to effect change at individual levels; tailor activities to contexts; help move systems (and the people who constitute those systems) from Point A to Point B; and engage in dynamic thinking that energizes blueprints for change through action and mindful decision-making.

We hope to encourage doctoral students to abandon the myth that only certain people, because of supposed inherent abilities, are born to lead and to drive change. People do need strategies, practice, trial and error, and the support of communities to feel comfortable in leading improvement projects. Leadership can become more natural with time as practice is gained, especially when work is coordinated and knitted together as a fabric. If emerging leaders can be helped to become more comfortable in taking on the grand challenges in education, we develop a larger cadre of leaders and support a more equitable future.

References

Anderson, S. E. (1997). Understanding teacher change: Revisiting the Concerns-Based Adoption Model. Curriculum Inquiry, 27(3), 331–367. https://doi.org/10.1111/0362-6784.00057

Bambino, D. (2002). Critical friends. Educational Leadership, 59(6), 25–27.

Bryk, A., Gomez, L., Grunow, A., & LeMahieu, P. (2015). Learning to improve: How America's schools can get better at getting better. Harvard Education Press.

Carnegie Project on the Education Doctorate. (2019). The CPED framework©. https://www.cpedinitiative.org/the-framework

Crow, R., Hinnant-Crawford, B. N., & Spaulding, D. T. (Eds.). (2019). The educational leader's guide to improvement science. Myers Education Press.

DuFour, R., DuFour, R., Eaker, R., & Karhanek, G. (2010). Raising the bar and closing the gap: Whatever it takes. Solution Tree Press.

Hall, G. E., & Hord, S. M. (2011). Implementation: Learning builds the bridge between research and practice. Journal of Staff Development, 32(4), 52–57. https://uh.primo.exlibrisgroup.com/permalink/01UHO_INST/1j911kt/gale_ofa276865700

Institute of Education Sciences. (n.d.). What Works Clearinghouse. https://ies.ed.gov/ncee/wwc/

Kinder Institute for Urban Research. (2016). Disparate city: Understanding rising levels of concentrated poverty and affluence in greater Houston. https://kinder.rice.edu/research/disparate-city-understanding-rising-levels-concentrated-poverty-and-affluence-greater

Knackendoffel, A., Dettmer, P., & Thurston, L. P. (2018). Collaborating, consulting, and working in teams for students with special needs. Pearson.

Love, P. (2012). Informal theory: The ignored link in theory-to-practice. Journal of College Student Development, 53(2), 177–191. https://doi.org/10.1353/csd.2012.0018

Mintrop, R. (2016). Design-based school improvement: A practical guide for education leaders. Harvard Education Press.

Appendix: Note-Taking Guide—Problem of Practice

(Each numbered prompt below is paired with space for notes on your campus/district.)

1. Work is problem-specific and user-centered.
• What needs to be solved?
• Who needs this solved?
• What's the user's input on this?

2. Performance will vary!
• One size does not fit all—context matters!
• Differentiated solutions are necessary if user-centered principles guide our work.
• Develop a variety of solutions to examine.
• Develop a process to determine "what works, for whom, and under what conditions"—group your efforts accordingly.
• Document what's working and share!

3. Get to know your organization.
• How does it function?
  a. What's working? Why?
  b. What's not working? Why?
• Dig into the unknowns—the black boxes in an organization.
  a. Who keeps the organization going?
  b. How? Why? What do they know?
  c. What do they do?
• Begin the process of developing a theory of practical improvement. Share it.

4. Measures
• What are we trying to change?
• What's the current status at different levels?
• How do we measure each level?
  a. Is it a "good" measure?
  b. Is it an "accurate" measure?
  c. Is it a "consistent" measure?
  d. Graph it!
• Are the interventions/strategies achieving their intended outcomes?

5. Drive improvement—Interventions + Inquiry
• Plan—problem-specific, user-centered
• Do—implement the intervention
• Study—measure the outcomes
• Act—adjust, rinse, and repeat until you're getting the intended outcomes

6. Networks—Teams—Working together
• Specific problems can be big
• User-centered solutions can be complex
• Scale starts with a network
• Identify and support networks
• How do you form a network?

Source: Adapted from Bryk et al. (2015).

Chapter Ten

Aligning Values, Goals, and Processes to Achieve Results

RYAN CARPENTER
Estacada School District

KATHLEEN OROPALLO
Studer Education

Abstract

In order for the educational community to face ever-changing external pressures and priorities, educators must find ways to become agile while not losing sight of each student's success. In this chapter we explain the journey of a school system as it embarked upon aligning its priorities through cycles of improvement while developing systems to achieve defined success. By doing so, the district strengthened its collective agility to meet the ongoing challenges and influences that can distract from and become barriers to success, particularly in K–12 settings. This case study examines the journey of one rural public K–12 school district in the Pacific Northwest and its implementation of continuous improvement cycles, beginning with its C-suite leaders and then cascading across all district, departmental, and building leaders using the plan-do-study-act (PDSA) cycle. This story of a district's commitment to transparently examining, aligning, and prioritizing its values, goals, and actions, while systematically addressing challenges and leaning into improvement tools (defining a problem of practice, root cause analysis, leading measures, survey results, and leader rounding) and processes, demonstrates a powerful example of the evolution of an organization committed to improvement. Continuous improvement (CI) projects follow unique selection, deployment, communication, and tracking processes. In this chapter we also examine the role of leadership engagement and the leveraging of powerful shared values such as passion, fortitude, and a willingness to learn. This intentional focus on shared values validated each leader's role in the improvement process and deepened their shared commitment to improvement.

Background

This case study describes work in the Cascades School District (CSD), a rural district in the northwest United States serving students in grades preK–12 in two preK–5 elementary schools, one 6–8 middle school, and one 9–12 high school. The student population consists of more than 1,800 students and is 80% White, 17% Hispanic/Latinx, 1% African American, 1% Asian, and 1% American Indian. Fifty percent of the students qualify for free or reduced-price lunch, 10% are classified as English language learners, and 18% receive special education services. The district's roughly 300 staff members consist of 12 administrators, 140 teachers, and almost 160 support staff.

This body of work draws upon research and frameworks from two main continuous improvement organizations: Studer Education and the Carnegie Foundation. It also cites work from researchers in the fields of improvement science methodology, quality improvement, change management, and organizational leadership. This collective body of research and the frameworks deepened CSD's learning and the processes that led to early results. In addition, research on systems alignment and employee engagement was utilized to generate assertions as CSD moved through the improvement process (Harter, 2020; Senge, 2012; Studer, 2004; Studer & Pilcher, 2015).

Superintendent Day has served as the district's improvement leader for 3 years and has been deeply committed to aligning the mission, vision, goals, and values of the organization. Before Day was hired as CSD's leader, the district had never developed a strategic plan to drive core decisions within the organizational structure. Departmental and building leaders operated in silos, making good-faith decisions that benefited their individual schools, with little awareness of the unintended consequences caused by cumulative misalignment within the organization. Different schools had different priorities, and the district was constantly chasing the next "new" framework and model with limited strategies to successfully implement and/or sustain these new change initiatives.

A consequence of system leaders being siloed was a deeply ingrained culture of "we/they"—a process by which individuals transfer accountability and responsibility to others (Studer & Pilcher, 2015). Anytime the district developed an initiative or established a set of decisions to be communicated to all employee stakeholders, leaders would transmit these communications first by stating that the decision came from the district office, then abdicating all collective ownership of the message and its intentions, creating mistrust and confusion. Over time, these actions developed into a clear division between district administration and building leaders. The organization had arrows pointing in all different directions.

As Superintendent Day began auditing the organization, he quickly realized that there were some extremely talented leaders within the district doing exceptional work; however, working in isolation would not make the system reliable. He also noticed a desire on the part of district and building leaders to work more closely as a team. This compelled him to eliminate old definitions of what district support meant. He sought new ways to improve the system's communication and standards of practice, knowing the old way had prevented the district from leveraging the experiences and expertise of these talented leaders. Soon after, as he began to probe these same district leaders, they collectively recognized that the district had not established any core values and was drifting without aligned strategic processes. Every building and district-level leader believed passionately in different paradigms and strategies, and they needed to find a way to collaboratively drive a new vision for their district.


During the second year of Superintendent Day's leadership, he gathered the Executive Board of Directors and the district leadership team to engage school and community stakeholders in developing a 5-year strategic plan. The leadership team, together with the community, formed a strategic planning committee that collaboratively designed a strategic plan and identified five core values and key areas of focus for Cascades School District:

• Student success
• People
• Quality service
• Finance and operations
• Growth and innovation

Upon completion of CSD's strategic plan, Superintendent Day, along with the leadership team, determined that the next challenge would be to operationalize the plan. In order to develop the organizational efficacy needed to successfully implement it, the Cascades School District partnered with Studer Education, a national company whose proven framework, Evidence Based LeadershipSM, drives successful improvement efforts that lead to sustainable results. The remainder of this chapter shares the Cascades School District's journey toward aligning the whole organization using this model of continuous improvement and describes how it began to transform district culture and achieve desired results.

Defining Improvement

The biggest and most profound challenge we've had to deal with—and the one that's requiring the greatest adjustments inside organizations—is the education sector's move from episodic change to continuous change. —Studer & Pilcher, 2015, p. 87

One of the most challenging questions that educators have grappled with is why some organizations achieve and sustain improvement while others fail. What makes one organization more successful at driving improvement and achieving results than another? How do organizations create systems around improvement that not only sustain change but create an ongoing process that makes them resilient and more agile at solving new problems as they emerge?

A problem is defined as the gap between where an organization is and where it wants to be. The work of improvement is to eliminate the barriers that create the gap, design processes to prioritize, and test change ideas that tackle these gaps (Ahlstrom, 2014). According to improvement expert Ahlstrom (2014), three actions define improvement: eliminate barriers or hassles, solve problems, and improve outcomes. Districts engaged in continuous improvement strive to answer three key questions (Bryk et al., 2015):

1. What are we trying to accomplish?
2. How will we know the change is an improvement?
3. What change will we make that is an improvement?

Change begins when organizations align priorities, define success, take action, and use iterative cycles like the PDSA to solve problems that eliminate barriers to success (Bryk et al., 2015; Carpenter & Peterson, 2019); however, cycles of improvement alone do not change a system. Developing leaders and a culture that engages employees are critical differentiators from other improvement methodologies that are only tactic-driven tools (Greco, 2019; Studer & Pilcher, 2015). Superintendent Day chose Evidence Based LeadershipSM because he knew he wanted an improvement process that put people first and would also align his system. The EBL Framework drives improvement by aligning goals, behaviors, and processes while building a culture around systems of improvement to solve problems, enable learning, and develop leaders (Studer & Pilcher, 2015) (see Figure 10.1).


Figure 10.1. Evidence Based Leadership FrameworkSM

Begin With People First: Developing and Engaging People

In order to fully realize the kind of improvement that leads to sustainable results, an organization must build its systems and processes around people first by hardwiring actions that create a readiness for continuous improvement (Studer, 2009). When organizations hardwire behavior, they practice and standardize behaviors until their employees are engaging in them 99% of the time (Studer & Pilcher, 2015). In The Improvement Guide, the authors describe this as "creating continuity" throughout the system (Langley et al., 1996). Alignment and consistency of behavior allow an organization to scale the desired improvement throughout the system.

Superintendent Day's earliest step in aligning district actions began when he invested in leadership development. This commitment came as an effort to live the district's newly defined core values and mission. He arranged for leaders to come together to learn about the principles, processes, and behaviors of continuous improvement and Evidence Based Leadership (EBL). An early priority was engaging employees in small behavioral changes that created the cultural shifts necessary to sustain continuous improvement. The EBL Framework drove the critical behavioral shifts and provided strategies and tools to combat behaviors that can undermine a system's effort to achieve desired results. CSD focused on three critical elements of the EBL Framework:

1. Developing and engaging people first around the Nine Principles approach for Organizational Excellence
2. Building a culture of service to engage employees and build collective efficacy
3. Using always actions, along with related tools, tactics, and strategies, to ensure alignment of the system

One of the initial steps in engaging leaders was to begin to hardwire the behaviors outlined in the Nine Principles approach for Organizational Excellence (Studer & Pilcher, 2015) (see Figure 10.2).

NINE PRINCIPLES® for Organizational Excellence
Principle 1: Commit to Excellence. Set high expectations to achieve results while living out mission and values.
Principle 2: Measure the Important Things. Continuously track progress to achieve results with an improvement mindset.
Principle 3: Build a Culture Around Service. Serve others with great care and concern.
Principle 4: Develop Leaders to Develop People. Coach people to be their best at work.
Principle 5: Focus on Employee Engagement. Attend to aspirations and desires in the workplace.
Principle 6: Be Accountable. Commit individual accountability to achieve organizational goals.
Principle 7: Align Behaviors with Goals and Values. Apply consistent practices to move the organization in a positive direction.
Principle 8: Communicate at All Levels. Build connections so that people know why what they do matters.
Principle 9: Recognize and Reward Success. Value and appreciate people working together to get results.

Figure 10.2. Nine Principles

The Nine Principles approach provided a roadmap of the guiding concepts and processes that are fundamental to EBL and known to achieve systemic results (Studer & Pilcher, 2015). Superintendent Day embedded these principles in every engagement and used them to standardize his leadership development, setting expectations for improvement. His leadership and commitment to modeling the Nine Principles himself, while engaging his executive-level leadership in understanding each principle, accelerated the necessary cultural shifts and demonstrated key performance expectations. These principles were instrumental in helping his leaders make the shift toward continuous improvement. By example, Superintendent Day provided an early understanding of "what right looks like." One way he did this was by intentionally developing focused meeting agendas around the Nine Principles and then tracking how these principles were being demonstrated across the system. As he recognized changes in CSD's processes, he called them out to his executive leadership team. As a result of these actions, CSD began to immediately experience behavioral and process alignment shifts that unified leaders as they began to live these principles.

Leader Huddles: Creating Routines to Learn and Improve

At CSD, one of the earliest processes Superintendent Day committed to was developing all district and school leaders. He hosted routine conversations the district called "leader huddles" that provided time to introduce key leadership expectations, actions, and processes that aligned with CSD's core values. Together, he and his leaders focused on solving real problems, eliminating barriers, and making adjustments, while using tools that kept them aligned (Bryk et al., 2015; Deming, 2013; Studer & Pilcher, 2015). Leaders set forth data-driven actions and used a scorecard and simple dashboards to measure and monitor their progress. These short cycles of improvement (PDSA cycles) produced evidence to determine if a change was an improvement or whether they needed to make adjustments, meanwhile helping them to harvest evidence-based successes that served to boost morale throughout the district.

CSD utilized these routinely scheduled leader huddles to drive small incremental improvements. District leaders, guided by Superintendent Day, met every 30, 60, and 90 days, learning and adjusting as they monitored evidence of their actions and slowly began to improve their processes. Overseeing their improvement in this way helped CSD learn and make progress toward their annual goals. Superintendent Day's commitment to routine and deliberate monitoring of these actions helped each leader become accountable for the district's overall progress and move closer toward results. From these activities, learning increased among the district and building leaders, and critical systemic processes began to improve.

The scorecard and dashboard built individual leadership accountability around shared goals and actions using a simple stoplight method. The stoplight process used three colors to indicate the status of each action: green for action completed, yellow for action in progress, and red for no action at all. During each huddle, leaders were accountable to the group to indicate whether they had been able to "do what they said they were going to do" and could provide evidence from leading measures to explain their results. Two key questions drove individual leadership accountability as each leader took a turn answering them:

• Did you do what you said you were going to do?
• What did you learn?

Close examination of their progress revealed insights about what they were learning from each action and determined if an improvement had occurred. As leaders gained the fundamentals of improvement, they quickly began to harvest successes. For example, the director of district operations and his maintenance team developed key systems for improving the safety and cleanliness of the schools. They created a benchmark checklist and 5-point scale that aligned with safety requirements, implemented a daily rating protocol, and recorded results on a dashboard to determine if these checks would improve quality. They improved their rating from a 2.75 baseline to a 3.76 in 6 months. By creating a practical leading measure and holding themselves accountable to the leadership team during leader huddles, they were able to improve their processes, creating efficiencies that would later prove to be crucial when COVID-19 occurred. These efficiencies also resulted in cost savings that were cycled back into other student programs and initiatives to improve student success. Small incremental changes like this example helped cascade communication and processes across the system and provided the superintendent and his leaders with important learning about their improvement process (Sternke, 2019).
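For readers who want to see the mechanics, a huddle scorecard of this kind can be kept in something as lightweight as a spreadsheet or a short script. The sketch below is our illustration only; the leaders, actions, and notes are invented, loosely echoing the safety-rating example above:

```python
from enum import Enum

class Stoplight(Enum):
    GREEN = "action completed"
    YELLOW = "action in progress"
    RED = "no action yet"

# Hypothetical scorecard rows for a 30-day leader huddle. Each row pairs a
# stoplight status with answers to the two accountability questions.
scorecard = [
    {"leader": "Director of Operations",
     "action": "Daily 5-point safety/cleanliness rating on all campuses",
     "status": Stoplight.GREEN,
     "did_what_we_said": True,
     "learned": "Daily ratings are rising from the 2.75 baseline toward the goal."},
    {"leader": "Campus Principal",
     "action": "Monthly leader rounding with staff",
     "status": Stoplight.YELLOW,
     "did_what_we_said": False,
     "learned": "Rounding surfaced a barrier to bring to the next huddle."},
]

for row in scorecard:
    print(f"{row['leader']}: {row['status'].name} -- {row['learned']}")
```

The value is less in the tooling than in the routine: every action has an owner, a visible status, and a recorded lesson that carries into the next 30–60–90-day cycle.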


In another instance, district and building leaders utilized data from leader rounding. Leader rounding is a process of simple check-ins with employees and staff that creates feedback loops, helping to identify key themes around successes, locate barriers, and surface problems to solve. These themes are shared during the 30–60–90-day leader huddles. As the curriculum department and building leaders rounded with teachers, one barrier brought to the leader huddle was a lack of teacher understanding of a new K–5 English language arts (ELA) curriculum tool. This lack of understanding was impeding the implementation of ELA standards instruction and stalling improvement efforts for students. Early identification of this barrier allowed the district to adjust and work with instructional coaches to scaffold teachers' understanding and accelerate the use of the ELA tool.

Utilizing Always Actions to Improve Employee Engagement and Build Trust

How people feel about where they work influences productivity (Gallup, 2020). People want to feel proud of where they work. Superintendent Day's goal of aligning behaviors and beginning to form standards of practice required faculty and staff input. Their involvement in defining the way people engage in the workplace environment was essential to creating a positive, satisfying, and productive work culture (Harter et al., 2002; Studer & Pilcher, 2015). A Gallup (2020) poll revealed that engaged employees are more likely to stay in their jobs, know their purpose, and feel like they're making a difference—key factors for achieving organizational results. While engaging their people, CSD also built routines and habits that helped them align their key priorities. The EBL Framework refers to these behavioral habits as "always actions"; they drive important cultural shifts that establish key behavioral improvement fundamentals and deepen the organization's improvement efforts (Studer & Pilcher, 2015).


CSD began to hardwire two employee engagement behaviors: recognition and gratitude. According to Studer and Pilcher (2015), recognizing and rewarding success is essential to the improvement process. These behaviors had an almost immediate effect in eradicating the we/they culture that had been a major contributor to the misalignment of district behaviors and communication (Studer, 2009). By recognizing positive behaviors, the district reinforced a clear expectation of "what right looked like." Superintendent Day embedded both recognition and gratitude in routine processes: each meeting agenda began by asking people to recognize others who had contributed positively to ongoing work or who had helped others. Attendees were then asked to spend 2 minutes writing handwritten thank-yous to individuals they appreciated. In the first 6 months, employees had sent more than 3,000 thank-yous, and gratitude had cascaded as a routine practice throughout the school system.

The CSD needed baseline data to obtain critical feedback from key stakeholders and to validate whether their actions had made an impact on employee engagement. In the fall of 2019, the district implemented an Employee Engagement Survey administered by Studer Education. The survey was administered to all employees and assessed three areas: (a) perceptions about immediate supervisors supporting a best place to work environment, (b) perceptions of executive leadership supporting a best place to work environment, and (c) perceptions about communication practices. The survey revealed employees' perceptions of their direct supervisors and was administered twice during the academic year so that CSD could formatively measure progress and improvement, including whether the two always actions, recognition and gratitude, had made a difference. Since faculty and staff engagement was critical to creating systems around improvement, the district also set annual employee engagement goals on its scorecard to measure long-term improvement.

The Cascades School District's baseline data revealed an overall mean score of 4.21 on a 5-point scale. On its face, this was a great win to celebrate with district and building leaders: in general, the employees of the district were engaged. However, as the team disaggregated the data, there were many individual areas that needed improvement. One specific leadership team learning about using measures that matter came from analyzing the district's top box percentage. The "top box percentage" is the percentage of employees who select the highest possible score option, indicating that they "Strongly Agree." Research suggests a difference in the loyalty of people who indicate that they are extremely satisfied (i.e., "Strongly Agree") compared to those who are merely satisfied (i.e., "Agree") when rating their experience or engagement. Examining the employee engagement measure with a focus on top box results gave leaders a more strategic view of how employees perceived their own engagement within the organization. The higher the top box percentage, the more likely employees would be to remain loyal to the organization and stay in the district (Gallup, 2020). Monitoring this metric made it possible for the district to determine whether it was at risk for employee turnover.

CSD's fall survey indicated that 44% of all employees rated the organization a 5, meaning that only about 4 in 10 employees "strongly agreed" that their district and school leaders created a work environment that supported their ability to perform at the highest level. These results indicated that if the district wanted to engage employees and provide an environment leading to high performance, they needed to look closely at the item analysis and determine specifically how to improve. Superintendent Day then applied top box analysis to each item as the team examined the overall survey closely. Using top box as a strategy to monitor employee engagement, the superintendent was able to target particular survey items that had a weak top box and work with leaders during rollout to find solutions. The team also defined what success
would look like if they were "best in class" in each of the highest and lowest rated areas. This gave individual leaders specific strategies for improvement that helped them meet the success criteria generated by employees and staff (see Table 10.1).

Table 10.1. Employee Engagement Survey Fall 2019

Employee Engagement Survey    Fall 2019
Participation                 185
Overall mean                  4.21
Top box percentage            44.06%
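To make the top box calculation concrete, here is a minimal sketch in Python. It is illustrative only and not part of CSD's process; the function and the sample responses are hypothetical, and a real analysis would compute these figures per survey item (as CSD did to arrive at the 4.21 mean and 44.06% top box in Table 10.1).

```python
def summarize_survey(ratings, top_score=5):
    """Return (mean, top box %) for a list of 1-5 survey ratings.

    The "top box" percentage is the share of respondents who chose
    the highest option on the scale, e.g., "Strongly Agree."
    """
    if not ratings:
        raise ValueError("no responses to summarize")
    mean = sum(ratings) / len(ratings)
    top_box = 100 * sum(1 for r in ratings if r == top_score) / len(ratings)
    return round(mean, 2), round(top_box, 2)

# Hypothetical responses from ten employees on one survey item.
mean, top_box = summarize_survey([5, 4, 5, 3, 4, 5, 2, 4, 5, 4])
print(f"Mean: {mean}, Top box: {top_box}%")  # Mean: 4.1, Top box: 40.0%
```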

The survey results also helped by identifying practices that district leadership was doing well and should continue. Table 10.2 gave the CSD the opportunity to recognize and celebrate the five highest-scoring categories ranked by employees in the organization. When leaders and individuals are celebrated not solely on the numbers from the survey but on their actions and behaviors, it creates new momentum toward improvement (Studer & Pilcher, 2015). One great win for Superintendent Day and his leadership team was establishing a "clear understanding of the mission and goals of the school district." This validated the intentional efforts of CSD's leadership as they strove to align their system. By the fall survey, it had already become evident that the district's mission, vision, and values were taking hold throughout the system. The areas they had worked on (recognition, gratitude, and system alignment) appeared among the five highest-rated items on the survey.

Table 10.2. Five Highest-Rated Employee Engagement Survey Items, Fall 2019

Item                                               Mean    Top Box
5: Concern for my welfare                          4.46    61.75%
4: Recognition of good performance                 4.38    56.83%
1. Provides good resources helpful for my job.     4.38    53.80%
3. Effective staff meetings                        4.36    51.65%
C4: Clear mission and goals of district            4.31    39.78%


Leader-Led Results Rollout

Another important always action that CSD implemented was a leader-led results rollout of their employee engagement survey. To gather more specific feedback, clarity, and perspective from employees, each district leader, including building principals, met with the employees they supervised and engaged them in a transparent conversation around the results, specific ways to improve, and what to keep doing that was working. The leader-led results rollout impacted the continuous improvement cycle by providing critical and actionable feedback for improving staff engagement. Inviting stakeholders to participate in a transparent discussion of the results, and actively listening to understand what needed to take place to improve each area, increased employees' ownership of the process. It also validated each district leader's successes in an important way, because the rollout process included hearing positive feedback from their own faculty and staff on what was already working. This commitment to a timely, regularly scheduled, transparent rollout of the results demonstrated the value of data to all stakeholders engaged in the process. The Cascades School District asked its leaders to roll out the top three and bottom three scoring items from the survey to all direct reports. There were two main purposes for sharing the top three results: (a) to celebrate wins and successes and (b) to gain clarity and specific feedback about the leader behaviors and actions that yielded these results. This information showed leaders the employees' perceptions of "what right looked like" and how to craft leader actions that would lead to improvements based upon this feedback. There is vulnerability in the results rollout process as leaders place themselves in a stance to listen and receive feedback. Each administrator's ability to listen to employee voices and learn how to improve helped them build authentic relationships with staff and conveyed a message that employees' voices and perceptions were valued. Consequently, when leaders committed to one or two strategic actions for improving those scores and followed through with
them, they began to build credibility through reliability, a behavior essential to executing improvement and building trust (Studer & Pilcher, 2015). Establishing this shared problem-solving and learning with departmental and school faculties and staff hardwired each level of the organization with a common strategy. These actions also began to break down silos as leaders and staff reached consensus. This consensus created a common understanding of which improvement priorities mattered to employees and how to identify specific actions in response. The lowest three scores for the CSD are presented in Table 10.3. Because "leaders go first" (Studer & Pilcher, 2015), Superintendent Day rolled out the district results to his employees and senior leadership team. His action plans, developed based on the feedback he received from these stakeholders, modeled for other leaders how to participate in the transparent sharing of data. He then set an expectation for his executive leaders to do the same and roll the results out to their staff.

Table 10.3. Employee Engagement Mean and Top Box Results

Employee Engagement Survey                                   Mean    Top Box
C1. Honest two-way communication                             3.94    29.83%
13. Decision-making in the best interest of the district     3.91    30%
11. Finance management                                       3.85    21.47%
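The chapter names no tooling for this step, but as a rough sketch of the rollout preparation, a leader might rank items and pull the extremes as follows. The helper function and the item means below are hypothetical, loosely shaped like the chapter's tables.

```python
def rollout_items(item_means, n=3):
    """Return the n highest- and n lowest-scoring survey items.

    item_means: mapping of survey item label -> mean score.
    """
    ranked = sorted(item_means.items(), key=lambda kv: kv[1], reverse=True)
    return ranked[:n], ranked[-n:]

# Hypothetical item means for a results rollout.
means = {
    "Concern for my welfare": 4.46,
    "Recognition of good performance": 4.38,
    "Effective staff meetings": 4.36,
    "Honest two-way communication": 3.94,
    "Decision-making in the district's best interest": 3.91,
    "Finance management": 3.85,
}
top_three, bottom_three = rollout_items(means)
```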

Using Leader Rounding to Check In With Stakeholders

As leaders received input from the rollout of data and other key metrics on their scorecard, they needed to continue checking in with faculty and staff to be sure improvements and communication were cascading throughout the system. They began to hardwire a process called leader rounding. Inspired by the health-care profession, leader rounding is a tool that enables a leader to build an ongoing relationship with employees and to receive direct, specific feedback on what is working well and where there are opportunities to help the system work better. By focusing on four main questions, the leader is able to check in with staff members and take the pulse of the organization from the perspective of the employee (Studer & Pilcher, 2015).

• What's working well?
• Is there anything I can help you with right now?
• Do you have what you need to do your job?
• Is there anyone I should recognize for doing great work?

CSD leaders engaged in leader rounding and tracked each check-in on a district-designed leader dashboard. The dashboard was used to identify key themes around successes, locate barriers, and solve problems raised by employees and staff. Implementing rounding and sharing themes across departments and buildings allowed CSD to monitor their execution and make needed adjustments as quickly as possible.
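The chapter does not describe how CSD's leader dashboard was implemented. Purely as an illustration, the sketch below (every name and field is hypothetical) shows one way check-ins organized around the four questions could be logged and rolled up into dashboard-style counts.

```python
from collections import Counter
from dataclasses import dataclass, field
from datetime import date

@dataclass
class RoundingCheckIn:
    """One leader-rounding conversation, organized by the four questions."""
    leader: str
    employee: str
    when: date
    working_well: str                              # What's working well?
    barrier: str                                   # Anything I can help with?
    has_what_they_need: bool                       # Resources to do the job?
    recognize: list = field(default_factory=list)  # Who deserves recognition?
    themes: list = field(default_factory=list)     # Tags applied by the leader

def dashboard_summary(check_ins):
    """Roll individual check-ins up into counts a dashboard could display."""
    return {
        "total_check_ins": len(check_ins),
        "top_themes": Counter(t for c in check_ins for t in c.themes).most_common(3),
        "recognitions": Counter(n for c in check_ins for n in c.recognize),
        "missing_resources": sum(1 for c in check_ins if not c.has_what_they_need),
    }
```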

Results That Matter

Six months into the improvement work, CSD saw significant results. One of the earliest shifts came from hardwiring two behaviors: results rollout and leader rounding. Results rollout provided important information about how to improve employee engagement. Leader rounding helped district and school-level directors and principals check in and hold themselves accountable to their employees as they followed through on the strategic actions they had committed to during their results rollouts. In the first 6 months, leaders had rounded with direct reports 138 times, greatly increasing communication and deepening relationships with employees. The rounding sessions helped identify key barriers to progress and where more direct communication was needed. Table 10.4 shows the significant improvement in employee engagement results from fall to spring.

Table 10.4. Employee Engagement 2019–2020 Comparison

Employee Engagement Survey Results    Fall 2019    Spring 2020    Change
Overall mean score                    4.21         4.40           +0.19
Top box                               44.06%       54.59%         +10.53
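A note on the Change column: both entries are simple subtraction, but the units differ. The mean change is in points on the 5-point scale, while the top box change is in percentage points (54.59% - 44.06% = +10.53), not a percent increase; relative to the fall baseline, the top box gain is roughly 24%.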

Table 10.5 illustrates the changes in the lowest three survey items for the district. These results were attributed to the hardwiring of key always actions that aligned behaviors to become a catalyst for improvement.

Table 10.5. Employee Engagement Survey Fall–Spring Comparisons

Employee Engagement Survey Item                               Mean Fall    Mean Spring    Change
C1. Honest two-way communication                              3.94         4.19           +0.25
13. Decision-making in the best interests of the district     3.91         4.26           +0.35
11. Finance management                                        3.85         4.06           +0.21

By focusing on people first, CSD as an organization had begun to shift from the we/they culture it once had to a culture that employees rated a "best place to work." Although their journey is far from finished, their readiness to face change, respond with resiliency, and operate as an aligned system has become a major step in sustaining improvement and achieving ongoing results. This case used several key tools from improvement science: PDSA (Plan-Do-Study-Act) cycles, leader rounding, and results rollouts were all used to gather data and to monitor and refine practices as they happened.
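For readers who want to see the shape of a PDSA cycle in executable form, here is a loose sketch. It is not code CSD used; every function name is hypothetical, and real cycles involve people, meetings, and practical measures rather than callbacks.

```python
def pdsa_cycle(change_idea, predict, run_small_test, study):
    """One Plan-Do-Study-Act pass over a single change idea.

    predict(change_idea)        -> the outcome the team expects (Plan)
    run_small_test(change_idea) -> data observed from a small-scale test (Do)
    study(predicted, observed)  -> "adopt", "adapt", or "abandon" (Study)

    The caller acts on the decision (Act), often by starting the next cycle.
    """
    predicted = predict(change_idea)
    observed = run_small_test(change_idea)
    decision = study(predicted, observed)
    return decision, observed
```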

Conclusion

Continuous improvement is more than tools and tactics; it is about creating a culture that values and engages people. To do so, the
Cascades School District learned that they had to have clearly defined core values and that every decision, change, strategy, and idea needed to align with those values. As disruptions such as COVID-19 occurred, knowing their identity through their values allowed CSD to put students, families, and employees at the forefront of critical decisions. They also learned that employee engagement begins with neutralizing toxic behaviors like we/they and replacing them with positive behaviors like recognition and gratitude. This cultural shift only comes when these behaviors are hardwired and embedded in a district's system of improvement (Studer, 2004). CSD learned that to get results, engaging with employees and staff had to happen first. Second, CSD needed all leaders and employees to participate in shared ownership of the improvement process. Critical actions such as the leader-led results rollout and the creation of forums for two-way communication gave all stakeholders a voice that provided valuable information for prioritizing which improvements needed to take place first. CSD's superintendent modeled and encouraged others to live the phrase "Leaders go first," and in doing so, built trust, reliability, and consistency into their improvement process. The cascading of communication and the voice of the customer (students, parents, employees) helped develop district and school leaders who learned to value feedback and to use it to harvest and celebrate wins across the system. To identify these wins, the CSD had to create a paradigm shift in thinking about data and measurement. They needed to understand practical measures that would inform their ability to overcome barriers and create improvements in the system. This at first presented a dramatic shift from the traditional "measurement for accountability." Like most K–12 systems in the United States, they had spent decades grappling with that approach, but it had led to poor or no results (Bryk et al., 2015). To learn and practice the cycles of improvement, they had to trust practical measures that would help them achieve results. The district developed a comfortable cadence "to go slow to go fast" to allow individuals to learn and digest this new
shift (Bryk et al., 2015). Once the emphasis moved to learning to improve rather than measuring for accountability, system stakeholders embraced the learning and understood a new type of accountability to mean "You can count on me." In an unprecedented year of disruption, this new definition of accountability was a powerful shift that allowed CSD to become one of the earliest and most agile districts in their state to respond and adjust. They did so without losing sight of their values, goals, and processes. In fact, the second round of their employee engagement results showed another increase, despite being administered in the middle of the COVID-19 crisis. CSD continues to improve and achieve results. To keep their aims in sight, they have used a scorecard to align goals, measures, and strategic actions, and have designed a dashboard to monitor progress, sometimes daily. Incorporating stoplight reporting, each leader could visually represent the degree of implementation and signal when an adjustment was necessary to overcome barriers to improvement. When CSD built systems around improvement, when they valued people first, and when they made learning the outcome, they began to see improvements that led to results. So began an improvement journey that Cascades School District will continue.

References

Ahlstrom, J. (2014). How to succeed with continuous improvement: A primer for becoming the best in the world. McGraw-Hill Education.

Bryk, A. S., Gomez, L. M., Grunow, A., & LeMahieu, P. G. (2015). Learning to improve: How America's schools can get better at getting better. Harvard Education Press.

Carpenter, R., & Peterson, D. S. (2019). Using improvement science in professional learning communities: From theory to practice. In R. Crow, B. N. Hinnant-Crawford, & D. T. Spaulding (Eds.), The educational leader's guide to improvement science: Data, design and cases for reflection (pp. 275–294). Myers Education Press.

Deming, W. E. (2013). The essential Deming: Leadership principles from the father of quality. McGraw-Hill.

Gallup. (2020). Employee engagement and performance: Latest insights from the world's largest study. Gallup%202020%20Meta-Analysis%20Brief.pdf

Greco, P. (2019). Healing our systems and making improvement stick. AASA School Administrator. http://my.aasa.org/AASA/Resources/SAMag/2019/Mar19/Greco.aspx

Harter, J. (2020). 4 factors driving record-high employee engagement. Gallup. https://www.gallup.com/workplace/284180/factors-driving-record-high-employee-engagement.aspx

Harter, J. K., Schmidt, F. L., & Hayes, T. L. (2002). Business-unit-level relationship between employee satisfaction, employee engagement, and business outcomes: A meta-analysis. Journal of Applied Psychology, 87(2), 268–279. https://doi.org/10.1037/0021-9010.87.2.268

Langley, G. J., Nolan, T. W., Provost, L. P., & Norman, C. L. (1996). The improvement guide: A practical approach to enhancing organizational performance. Jossey-Bass.

Senge, P. M. (Ed.). (2012). Schools that learn: A fifth discipline fieldbook for educators, parents, and everyone who cares about education (1st rev. ed.). Crown Business.

Sternke, J. (2019). Cascading communication: The intentionality with which leaders share information contributes mightily to the performance excellence of the organization. AASA School Administrator. http://my.aasa.org/AASA/Resources/SAMag/2019/Mar19/Sternke.aspx

Studer, Q. (2004). Hardwiring excellence: Purpose, worthwhile work, making a difference. Fire Starter.

Studer, Q. (2009). Straight A leadership: Alignment, action, accountability. Fire Starter.

Studer, Q., & Pilcher, J. (2015). Maximize performance: Creating a culture for educational excellence. Fire Starter.

chapter eleven

Toward a Scholarship of Teaching Improvement
Five Considerations to Advance Pedagogy

LARENA HEATH

Carnegie Foundation for the Advancement of Teaching

BARBARA SHREVE

Carnegie Foundation for the Advancement of Teaching

LOUIS M. GOMEZ

University of California, Los Angeles & Carnegie Foundation for the Advancement of Teaching

PAUL G. LEMAHIEU

University of Hawai’i, Mānoa & Carnegie Foundation for the Advancement of Teaching

Teaching is a scholarly enterprise (Boyer, 1997; Shulman, 2004). This volume firmly places the teaching of improvement within this realm of scholarship. In general, two motives animate the desire to conceive of teaching as a scholarly enterprise. First is expanding the voices respected in efforts to improve the rigor, quality, and impact of teaching. Moving beyond teaching as an object of study (engaged in exclusively by academics) to a scholarly undertaking in which those who engage in the practice are directly involved in its scholarship provides the opportunity for deeper, more nuanced knowledge of the activity of teaching itself. A second motivation is the aspiration to elevate the practice of teaching to a position of greater respect within the academy and elsewhere by engendering and respecting its intellectual character as one that embodies the
best of scholarship and inquiry, thereby making visible how teaching might be made better. This view of a scholarship of teaching embraces certain essential attributes: access to relevant and maturing bodies of knowledge, critical review of the merits and contributions of work by established scholar-practitioners, and so on. It is readily apparent that the work shared in this volume partakes of these attributes. There are many demonstrations of the application of state-of-the-art thinking about instructional practice. Moreover, each of the several offerings contained herein underwent critical peer review as part of the process of selection and inclusion. Individually and collectively they represent an important addition to the body of knowledge that constitutes our scholarship of teaching as it connects to continuous improvement.

By way of postscript, we identify five dimensions of improvement pedagogy that emerge when considering this collected work as a whole: capacity, content, context, community, and compassion. For us, it is evident that they are prevalent throughout the chapters. Yet these dimensions may go unnoticed by the reader simply because they are sub rosa in the individual chapters; that is, they are tacitly intertwined with other concerns within each chapter's discussions. Below, we elevate them for explicit consideration and offer brief comments on each to provide future teachers of improvement science with a set of binding pedagogical themes to spur the instructional planning that will follow from this book.

Capacity

The primary goal of many of the improvement science pedagogical applications described in this book is to move beyond simply imparting improvement knowledge to students to equipping them to use that knowledge well. As several authors in this volume assert, many educational leaders come into professional development opportunities or graduate programs with the explicit purpose of improving their practice in order to spur organizations and their constituent members forward. To this end, educational leaders need to be
prepared to apply improvement science tools and methods to address complex problems in their organizations. Two approaches to building capacity include creating opportunities for students to apply learning in personal and in organizational contexts. Personal improvement projects allow students to deepen their understanding of improvement concepts as they address a problem that they find personally meaningful. This learner-centered approach encourages students to connect new learning with existing knowledge (Chapter 1) and is easily adapted for learners based on their experience in improvement and the contexts in which they serve (Chapter 6). Additionally, part of building capacity requires that instructors create opportunities for students to move from theory to practice by actually doing improvement work in their sites as they are learning (Chapter 7). If instructors encourage learners to see these as learning-by-doing opportunities, students may deepen their understanding of the specific improvement skills and concepts by engaging in critical reflection within a learning community (Chapter 2). This volume highlights that capacity development is bound to a “doing to learn” orientation (Chapter 7), one that is focused on the “how” of improvement instead of just the “what.”

Content

Among other noteworthy characteristics, this volume offers those who plan to teach continuous improvement a critical perspective on the discipline's content. By extension, it also affords us an essential view on what it means to help students understand, foundationally, what the content of improvement is all about. Continuous improvement content is a system-centered compendium of knowledge. We suspect that the systems nature of improvement might lead the reader to perceive this as a certain form of redundancy and miss an important nuance. For example, most of the book chapters refer to some or all of the six principles of improvement as articulated by Bryk and colleagues (2015). Some readers will notice this and take it to be merely a redundancy that is a byproduct of a tightly crafted
edited volume where multiple authors treat the same content. Rather than just repeating a "list" of six vital principles, the book, taken as a whole, underscores that the six are not a simple list but a connected system. Readers should see the six principles as a collection that works in concert. One can't simply check the user-centered box without understanding variation or context as a system, and one cannot engage in disciplined inquiry without appropriate practical measurement. There is a sense in which each principle begs the inclusion of the rest to support holistic understanding. For example, Chapter 2 accentuates those principles that respect individual and collective voice to achieve social justice. Therefore, social justice becomes a product of enacting all the principles. Chapter 8 reprises the principles to inform the redesign of the EdD. The volume's contributions lead learners to see the interrelated nature of the improvement learning enterprise. As improvement science has spread throughout educational practice and research circles, it appears to us that at times people believe that they are "doing" improvement if they are engaged in PDSA cycles or if they conduct empathy interviews. They perceive that these or other isolated activities are continuous improvement. Seeing the work in various systemic settings will help learners understand its interdependent and holistic nature. Insightful instructors can use this volume to help learners on this journey.

Context

This book guides improvers to engage key stakeholders early and often in the improvement process and to ensure that the problems being addressed, and the solutions developed, are well suited to the context in which they will be applied. Similarly, this volume highlights context as a key consideration for instructors of improvement science as they prepare others both to understand their own systems and their roles within them. Instructors can underscore the importance of context and help students see that it matters greatly in their improvement efforts by
attending to two key questions: Who are my students? and What kinds of problems are they hoping to solve with an improvement approach? In addressing the first question, Chapter 7 describes how instructors integrate knowledge of students’ improvement capabilities, as well as their professional identities and proximity to the problem they are addressing, to adapt their program design and instruction to best support educational leaders’ launching improvement networks. In Chapter 9 the authors highlight the role that one’s sphere of influence plays when approaching a change project. Consequently, instructors must create opportunities for students to reflect on their current level of power and authority within their current system as they apply their learning in an improvement project. Building on this understanding of students’ backgrounds, instructors can design instructional activities that address the challenges students may face in implementing improvement approaches at their sites. For example, utilizing case studies as described in Chapter 4, an instructor might select cases that are situated in settings that are familiar to students—encouraging students to envision how improvement might look in their own contexts. Chapter authors also suggest that it is important to consider the kinds of problems students are hoping to solve when designing doctoral (Chapter 3) and professional licensure programs (Chapter 6), as well as create opportunities for students to identify problems of practice that matter to them and implement improvement efforts to impact their own systems.

Community

Another of the essential themes that emerges in these pages is the important role that community or collective engagement in various forms plays in the improvement enterprise and in teaching about it. Several of the chapters encourage instructors to keep in view the collaborative and collective nature of improvement work. The volume highlights the collective aspect of improvement work by variously accenting that shared perspectives define a problem or that joint work builds understanding of the system (Chapters 2, 6, 10) or
that collective formulation of change ideas leads to improvements (Chapters 1, 4) or finally, that it is the coordinated spread of improvement knowledge obtained from iterative testing that leads to widespread improvement (Chapters 7, 9, 10). In addition, networks are frequently referenced as human organizations that are uniquely well positioned to support improvement work. This emphasis on community is essential because in many places throughout the process the requisite knowledge and understanding for improvement is and must be collective in nature. It is also essential because, as the saying goes, “We can do much more together than any one of us can possibly do alone.” Fittingly, the emphasis on community that is strongly present throughout the improvement process is reflected in all of the good thinking about the teaching of improvement that is collected on these pages. There are two main reasons for this. The first is pedagogical. As we just noted, many of the ideas presented here take advantage of best thinking about social sense-making and learning. Groups are formed to explore ideas or to reflect upon their application in practice (Chapters 4, 8, 10). The second reason for the emphasis on community and the teaching of improvement derives from the fact that intended outcomes are performative, and many who come to learn about improvement do so in the interest of advancing improvement in their local practice. This results in participation as teams—both naturally occurring or consciously formed—in order to support learning-by-doing.

Compassion

Compassion shapes how and why tools and methods of improvement are employed. It is both essential to the processes of improvement science and necessary to realize their intended effects. It influences how leaders engage with others in improvement work. The several chapters of this volume speak to the ways that attention to compassion can shape an instructor's decisions about teaching improvement methods. Practically grounded learning experiences such as
personal improvement projects (Chapter 1), case-based learning (Chapter 4), or capstone projects (Chapter 8) create supportive opportunities for learners to wrestle with different challenges that arise in improvement work. In this way, personal experience can inform how learners will relate to others who are navigating the complexity of trying to see a system or confronting an organizational culture that discourages risk-taking. Compassion also comes into play as improvers interact with those closest to the problem. Used technically, the improvement tools described in these chapters can help an improver understand the limits of their own perspective and expand their understanding of a system. When these tools are employed with compassion and empathy, however, the process of engaging with stakeholders can lead to a deeper understanding of the needs and experiences of those stakeholders, meaningfully shaping how problems are defined and potential solutions are identified. Two examples of this are the explicit attention to understanding and interrupting deficit thinking described in Chapter 2 and the methods for identifying a problem of practice and building a causal systems analysis detailed in Chapter 8. As noted, improvement work in complex systems requires community and collaboration. The learning experiences described in this volume are intended to develop the capabilities of individuals who in turn will need to motivate others to engage in improvement alongside them. By considering compassion when making instructional decisions, both as an essential part of the improvement process and as a stance to assume when teaching it, instructors can develop improvement science practitioners who meaningfully engage stakeholders and share leadership of improvement efforts with them. In so doing, they equip students to use improvement science to transform the systems in which they work (Chapters 3, 5) and as a powerful tool for justice (Chapter 2). By preparing students to be guided by and to employ compassion in their improvement work, instructors are readying students to recruit others to join them in the critical work of reshaping educational systems. At the end of the day, continuous improvement is recognizing and building on the power and insight that live within an organization's
frontline workers. This volume brings basic dimensions of a pedagogy of improvement science forward so that teachers of improvement can enable future stewards of educational improvement to keep this underlying human, more than merely technical, purpose front and center.

References

Boyer, E. L. (1997). Scholarship reconsidered: Priorities of the professoriate. Jossey-Bass.

Bryk, A. S., Gomez, L. M., Grunow, A., & LeMahieu, P. G. (2015). Learning to improve: How America's schools can get better at getting better. Harvard Education Press.

Shulman, L. S. (2004). The wisdom of practice: Essays on teaching, learning, and learning to teach. Jossey-Bass.

about the authors

Susan P. Carlile, MA, Associate Professor at Portland State University

Susan P. Carlile has over 50 years of experience in K–12 education as a teacher, middle school principal, high school principal, and Director of Curriculum and Instruction for a large, suburban school district near Portland, Oregon, and is now associate professor at Portland State University. As an associate professor and program leader in Educational Leadership and Policy, she has overseen the leadership development of over 600 school leaders, received over 18 grants for her work, and presented and published in dozens of state, national, and international forums on leadership, including work with change leadership researchers and Improvement Science experts Anthony Bryk, Louis Gomez, and Paul LeMahieu. Carlile has a BA in English and Fine Arts from the University of California, Berkeley; an MA in Curriculum and Instruction and a leadership license from the University of Oregon; and a teaching license from the University of Washington. Most recently, her work has focused on the Improvement Science change process and on examining the issues facing women in leadership positions and strategies for navigating them to ensure gender, racial, ethnic, linguistic, and socio-economic equity in education. Two of her book chapters on leading change in schools were published in the spring of 2019.

Ryan Carpenter, Superintendent of Estacada School District

Ryan Carpenter has been an educator and leader for 14 years. As Superintendent of the Estacada School District since 2017, he has pursued sustaining high levels of improvement science methodology and has been a partner with Studer Education since 2019. Carpenter's leadership was recognized with the 2016 Oregon Future Business Leaders of America Principal of the Year award. Also in 2016 he was recognized as a School Emergency Preparedness Leader by the Federal Emergency Management Agency (FEMA). The Estacada Chamber of Commerce awarded Mr. Carpenter the 2018 School Employee of the Year. Carpenter's work on improvement science has also been published in books and professional journals.

Charoscar Coleman, EdD

Charoscar Coleman, EdD, earned a dual degree in economics and business from the University of Pittsburgh, a master's degree in secondary education from The George Washington University, and a doctorate in educational leadership and policy studies from the University of Maryland, College Park. Dr. Coleman has served the students of Prince George's County Public Schools for the past 25 years. He currently serves as an Instructional Director. In this role, he is responsible for supporting schools across the school district by providing direct coaching support to principals and articulating the school system's mission, goals, accomplishments, needs, and strategies to area schools and communities. He has worked with schools serving students from kindergarten through twelfth grade. Over the course of his career, Dr. Coleman has served as a high school teacher, assistant principal, and principal of three different schools: Dwight D. Eisenhower Middle School, Central High School, and Dr. Henry A. Wise High School. During his 14 years as a principal, Dr. Coleman has twice been nominated for the Washington Post Principal of the Year Award.

Robert Crow, PhD, Associate Professor of Educational Research, Western Carolina University

Robert Crow, PhD, is an associate professor of educational research and an active faculty member in the doctoral and master's programs in educational leadership at Western Carolina University. Robert, who serves as Carnegie national faculty for networked improvement science, is a co-editor of MEP's Improvement Science and Beyond series. He is also the co-editor/author of the recent publications The Educational Leader's Guide to Improvement Science: Data, Design & Cases for Reflection and The Improvement Science Dissertation in Practice: A Guide for Faculty, Committee Members, and their Students. Robert currently chairs the Improvement Science–CPED Improvement Group for the Carnegie Project on the Education Doctorate.
Tacquice Wiggan Davis, Associate Director of Intercultural Affairs, WCU

Tacquice Wiggan Davis is a native of Jamaica, West Indies, and serves as the Associate Director for the Department of Intercultural Affairs at Western Carolina University (WCU). She is a student in the Educational Leadership Doctorate program at WCU. Her love for Improvement Science began during the summer of 2018 and has grown tremendously throughout the course of her studies. She is passionate about travel, food, and her children, Jones Parker Davis and Israel Porter Davis.

Segun C. Eubanks, Ed.D., Professor of Practice and Director of the Center for Educational Innovation and Improvement

Throughout his professional career, Dr. Eubanks has promoted access, equity, and opportunity, as well as advocated for public education and for teacher excellence and diversity in education systems in the United States. In leadership roles with the National Education Association, Dr. Eubanks worked to develop teacher leadership initiatives and professional practice standards. He also served for five years as chair of the board of education for Prince George's County Public Schools (PGCPS). Recently, he played an important role in forming the PGCPS/University of Maryland Improvement Science Collaborative, which works to prepare school teachers and leaders, support school improvement efforts, and connect research with practice. Dr. Eubanks is the inaugural director of the Center for Educational Innovation and Improvement, where he works to further dialogue on leadership development and foster partnerships with school districts. Dr. Eubanks also teaches courses in school leadership and improvement science in UMD's Doctorate in School System Leadership and the School Improvement Post-Baccalaureate Certificate Program.

Louis M. Gomez

Louis M. Gomez is a social scientist dedicated to educational improvement. His research and design efforts are aimed at helping to support community formation in schools and other organizations
so that they can collaboratively create new approaches to teaching, learning, and assessment. With colleagues, he has worked to bring Networked Improvement Science to the field of Education. This work is aimed at helping the field take a new perspective on design, educational engineering, and development efforts that catalyze long-term, cooperative initiatives. The work gains much of its power because it is carried out in highly focused collaboratives that Gomez and colleagues call Networked Improvement Communities. Gomez is a Professor of Education at UCLA and a Senior Fellow at the Carnegie Foundation for the Advancement of Teaching.

Jacqueline Hawkins, EdD, Associate Professor, University of Houston

Dr. Jacqueline Hawkins's research focus has been on prevention and intervention work for educators and parents who engage with students across the education pipeline. She leads a doctoral program for Professional Leadership–Special Populations in her role as Associate Professor in the College of Education at the University of Houston.

LaRena Heath, Associate, Carnegie Foundation for the Advancement of Teaching

LaRena Heath is an Associate in Networked Improvement Science at the Carnegie Foundation. Prior to joining the foundation, LaRena was Senior Manager of Content and Instruction at Actively Learn, an online literacy platform. Her work focused on creating instructional resources and delivering professional development to teachers and administrators emphasizing evidence-based practices for critical reading, formative assessment, and scaffolding. LaRena began her career at American Institutes for Research and worked on numerous projects investigating factors that led to success in high-performing, high-need schools. Inspired by the dedicated teachers she met through this work, LaRena spent the next several years teaching middle grades in California and Massachusetts. She also served as an instructional technology coach helping teachers throughout her district effectively utilize digital tools in their classrooms. LaRena
earned a bachelor's degree in public policy and a master's degree in education from Stanford University.

Brandi Nicole Hinnant-Crawford, PhD, Associate Professor of Educational Research, Western Carolina University

Brandi Hinnant-Crawford (she/her/hers) is a self-described womanist, liberation theologian, critical pedagogue, improvement scientist, and scholar-activist, though these descriptors are aspirational. Each describes someone committed to leaving the world better than she inherited it, someone striving to dismantle oppressive systems and ameliorate the plight of the marginalized; and that is who she is. As an Associate Professor of Educational Research at Western Carolina University, Brandi seeks to expose policies and practices related to exploitation and domination while simultaneously exploring remedies to alleviate the impact of those policies and practices. Her research agenda has two broad strands that are intimately connected. The first is equity, inclusion, and access, which deals with the pedagogy, policies, and practices within K–12 schooling. The second is organizational improvement, which examines the effectiveness of improvement and evaluation methodologies and the role of different stakeholders in the realization of improvement. Her recent book, Improvement Science in Education: A Primer, reconceptualizes improvement by centering justice as the purpose of improvement. Brandi holds a Doctor of Philosophy in Educational Studies from Emory University, a master's degree in Urban Education Policy from Brown University, and bachelor's degrees in English and Communication from North Carolina State University. While she loves research and teaching, her first priority is being the mother of her eight-year-old twins, Elizabeth Freedom and Elijah Justice Crawford.

Paul G. LeMahieu, Senior Vice President for Programs at the Carnegie Foundation and Graduate Faculty in the College of Education, University of Hawai‘i–Mānoa

LeMahieu served as Superintendent of Education for the State of Hawai‘i, the chief educational and executive officer of the only
state system that is a unitary school district, serving 190,000 students. He has been President of the National Association of Test Directors and Vice President of the American Educational Research Association. He served on the National Academy of Sciences' Board on International Comparative Studies in Education, Mathematical Sciences Board, National Board on Testing Policy, and the National Board on Professional Teaching Standards. His current professional interests focus on the adaptation of improvement science methodologies for application in networks in education. He is a co-author of the book Learning to improve: How America's schools can get better at getting better (2015), and lead author of the volume Working to improve: Seven approaches to quality improvement in education (2017). He has a Ph.D. from the University of Pittsburgh, an M.Ed. from Harvard University, and an A.B. from Yale College.

Chad R. Lochmiller, Ph.D., Associate Professor in the Department of Educational Leadership and Policy Studies at Indiana University Bloomington

Dr. Lochmiller serves as a member of the Carnegie National Faculty. His research, which focuses on educational improvement and leadership practice in schools and districts, has appeared in leading journals and edited volumes. He is the 2018 recipient of the Jack A. Culbertson Early Career Award from the University Council for Educational Administration (UCEA).

Monica Martens (MA, MS), Research Assistant

Monica Martens has worked in education and research for nearly 20 years, most recently for the Ed.D. Professional Leadership–Special Populations program at the University of Houston, where she is a doctoral candidate. Her experiences have been situated within community colleges, universities, public schools, early childhood centers, community organizations, and non-profits. Her work at the University of Houston supports students who are at the beginning of a degree program and educators who are planning projects to improve student outcomes. She also investigates how to make learning more independent and professional development more meaningful for adults.


Margaret McLaughlin, Professor of Special Education and Special Assistant to the Dean, College of Education, University of Maryland, College Park

Professor McLaughlin conducts research and policy analyses related to special education and the intersection of state and federal educational reforms on students. She has written on topics related to assessment, accountability, and fiscal policies and their impacts on students with disabilities. She has co-chaired and/or served on four committees of the National Research Council of the National Academy of Sciences addressing education policies and students with disabilities. She has served as a special education expert on a number of Technical Workgroups for nationally representative studies in education and is currently serving as a Co-PI with Mathematica Policy Research on the NCEE-funded National Evaluation of IDEA.

Kelly McMahon, Ph.D., Associate in Evidence and Analytics at the Carnegie Foundation for the Advancement of Teaching

Dr. McMahon works with network leaders to build capabilities to design, lead, and improve networked improvement communities. Bridging her experiences with and research of district instructional reforms and pedagogical knowledge, Kelly is interested in helping the field learn how to learn to improve. She earned a Ph.D. in education administration and policy from the University of Michigan.

Ricardo Nazario y Colón, Chief Diversity Officer at Western Carolina University

Ricardo Nazario y Colón is an accomplished higher education administrator with over twenty years of experience in various industries, including the U.S. military, colleges and universities, corporate banking, and state government. Currently he serves as the Chair of the Diversity and Inclusion Council for the UNC System and is the Chair of the Governor's Advisory Board on Hispanic/Latino Affairs in North Carolina.


Maria Eugenia Olivar, J.D.

Maria Eugenia Olivar, J.D., is a vice principal in the Hillsboro School District in Hillsboro, Oregon, and has been using Improvement Science for three years. She was born in Venezuela and moved to the U.S. in 1998 to work as a Dual Language Teacher in New York City.

Kathleen Oropallo, Ph.D., Leader Coach, Studer Education

As a Leader Coach for Studer Education, Dr. Oropallo works side-by-side with leaders to establish, accelerate, and hardwire the practices that create a culture of continuous improvement and excellence. Kathy helps partners create better alignment, consistency, and engagement with the highest levels of service. Foundational to the work is developing strong, agile systems and leaders that can cascade processes and communications, allowing them to adapt to the ever-changing conditions of today's work environment. She partners with districts throughout the country, presenting content and improvement results to leaders in a variety of forums, including webinars and roundtables. Dr. Oropallo has worked in PreK–12 education for more than 35 years as a teacher, university professor, State Director of Professional Development for the Florida Department of Education, CEO, and job-embedded coach. She has also published several works, including a book as well as journal and blog articles.

Emma Parkerson, Senior Associate, Networked Improvement Science at the Carnegie Foundation for the Advancement of Teaching

Emma Parkerson builds leaders' capability to design, launch, and sustain networks for organizational improvement in the education sector. Drawing on her training in human-centered design, innovation, and improvement, including certification in project management, she has guided multiple design efforts to create tools by teachers, for teachers. She is pursuing a Master of Science degree in Organization Development from Pepperdine University.

Jill A. Perry, PhD, Executive Director, CPED

Dr. Jill Alexa Perry is the Executive Director of the Carnegie Project on the Education Doctorate (CPED) and an Associate Professor of
Practice in Educational Foundations, Organizations and Policy at the University of Pittsburgh. Her research focuses on professional doctorate preparation in education, organizational change in higher education, and faculty leadership roles. She is currently researching the ways in which EdD programs teach practitioners to utilize research evidence. She teaches and coaches how to teach Improvement Science in EdD programs. Her books include The Improvement Science Dissertation in Practice, The EdD and the Scholarly Practitioner, and In Their Own Words: A Journey to the Stewardship of the Practice in Education. Dr. Perry is a graduate of the University of Maryland, where she received her PhD in International Education Policy. She holds an MA in Higher Education Administration and a BA in Spanish and International Studies from Boston College. She has over 25 years of experience in leadership and program development in education and teaching experience at the elementary, secondary, undergraduate, and graduate levels in the U.S. and abroad. She is a Fulbright Scholar (Germany) and a returned Peace Corps Volunteer (Paraguay).

Deborah S. Peterson, Ed.D., Associate Professor, Portland State University

In her 40-year career in education, Deborah Peterson has been recognized as an award-winning leader, published author, career coach, and accomplished speaker. Dr. Peterson has served in roles as varied as preparing students for international intercultural experiences, high school department chair, elementary and high school principal, human resources administrator, and university professor. She has taught thousands of pre-K through doctoral students, presented at state, national, and international conferences, received numerous grants, and published on various aspects of leadership for equity. Dr. Peterson is an Associate Professor at Portland State University, with a BA from the University of Washington and the University of Oregon, an MA from Portland State University, and an EdD from Lewis and Clark College. She resides in Portland, Oregon, where she has successfully led numerous equity-focused initiatives in K–12 schools and at the university.


Barbara Shreve, Associate, Networked Improvement Science and Director of Professional Education Offerings at the Carnegie Foundation for the Advancement of Teaching

Barbara Shreve is an Associate in Networked Improvement Science at the Carnegie Foundation for the Advancement of Teaching, where she is also Director of Professional Education Offerings. She supports leaders as they build capabilities to design and lead networks and pursue improvement goals as part of networked improvement communities. She applies her experience building learning communities, designing professional learning, and developing curriculum to the design of education offerings through which individuals and teams develop knowledge and skills with improvement science concepts and methods. She holds a master's degree from Mills College, where she also earned her teaching credential.

Jean L. Snell, Ph.D.

Jean L. Snell, Ph.D., is a Senior Faculty Specialist for the Center for Educational Innovation and Improvement at the University of Maryland. Jean is the coordinator for the Administrator 1 certification program, and she also teaches and advises doctoral students in the "School Systems Leadership" program. She has dedicated the last 20 years to helping develop the next generation of school and teacher leaders and to equipping them to lead continuous improvement and narrow the achievement gap in their schools and districts.

Dean T. Spaulding, Ph.D., VP of External Evaluation and Grants for Z Score Inc.

Dr. Spaulding has worked in higher education for 20 years teaching program evaluation, research methods, and statistics. He is currently working in program evaluation and grant writing and serving as an external evaluator on numerous state and federally funded grant projects, including projects funded by the National Science Foundation (NSF). He is co-editor of MEP's Improvement Science and Beyond series and other books on educational research, program evaluation, and school improvement. He also does consulting work in Improvement Science in K–12 and higher education settings ([email protected]).


Cassandra Thonstad, Assistant Principal, Newberg School District

Cassandra Thonstad began her teaching career in 2005, serving as a high school math teacher for over a decade and an instructional coach for four years in the North Clackamas and Newberg School Districts (Oregon). She currently serves as an administrator in the Newberg School District. In addition, she has taught math at Portland Community College, math pedagogy at George Fox University, and Improvement Science at Portland State University. She has led work on proficiency-based grading across the state of Oregon and has led Improvement Science work in her current district since 2014. Cassandra is driven by a passion for our PK–12 system and uses Improvement Science with an equity focus that challenges staff and students to change the trajectory of learners to meet their maximum potential while honoring each student's experience in the classroom.

Debby Zambo, Ph.D., Retired Associate Director of CPED

Debby Zambo is an Associate Professor Emerita from Arizona State University, most recently working as the Associate Director of the Carnegie Project on the Education Doctorate (CPED). Debby has been involved with improvement science, CPED, and the Carnegie Foundation since the early Explorers Workshops in 2015. Since then she has been a member and cofounder of CPED's Improvement Science Interest Group and a member of the Carnegie Foundation's Higher Education Network. Debby has also made presentations on improvement science at CPED Convenings, the Carnegie Summit, AERA, and UCEA, and, with Jill Perry and Robert Crow, developed and presented five workshops on a range of improvement science topics, from its basic tools and processes to deeper philosophical ideals and complexities and contextualizing improvement science in dissertation work. In 2020 Debby co-authored The Improvement Science Dissertation in Practice: A Guide for Faculty, Committee Members and their Students with Jill Perry and Robert Crow.

index

5 Why's, 4, 12, 78, 148
2022 TSPC Principal Licensure Standards, 91

A
ableism, 31
action, 20
Action Plan, 179
actionable problems of practice, 47–48
adult learning frameworks, 31
Ahlstrom, J., 78, 191
aim statement, 52, 53
Allen, M.J., 107
always actions, 196–99
American Institutes for Research, 178
anchor chart, 77, 86
anchoring, 107
Anderson, G.L., 54
Anderson, S.E., 177
Anderson, V., 67, 70
andragogy, 31
Auerback, S., 36

B
Backwards Brain Bicycle, The, 162
balancing measures, 14
Bambino, D., 170
Barrows, H.S., 64
Bartley, L., 74
Bell Hooks: Cultural Criticism and Transformation, 25
Bensimon, E.M., 27
Biag, M., 98
Bolzan, N., 64
Bonilla-Silva, E., 29
Boud, D., 64
Bourdieu, P., 20
Boyer, E.L., 207
Boykin, A.W., 27, 28
Bozkuş, K., 58
Bransford, J., 127
Bridges, E.M., 64, 65
Brown, E., 91, 110
Bryk, A., xvii, 43, 47, 54, 57, 67, 91, 113, 146, 149, 150, 153, 164, 165, 191, 194, 204, 205
  CFAT and, 45
  change ideas and, 71, 72



change processes and, 9 improvement principles, 1, 21, 22–24 improving at scale, 10 networked improvement communities and, 120, 125, 134 outcomes and, 5, 6 PDSA cycles and, 11, 55, 66 problem definitions and, 4 problem of practice and, 47, 170 six improvement science principles, 167, 209 solutionitis and, 147 systems and, 67, 70 Building Teaching Effectiveness Network (BTEN), 43 C campus improvement plan (CIP), 175 capacity, 208–9 capstone project, 151–53, 181, 183, 213 Carlile, S.P., 95, 104, 105, 108, 110 Carnegie Foundation, xv Carnegie Foundation for the Advancement of Teaching (CFAT), xvi, 44, 45, 49, 55, 119, 121, 122, 132, 144, 188 Carnegie Project on the Education Doctorate (CPED), 44, 45, 47, 143, 169 Carpenter, R., 191 Cascades School District, 187–88, 188–90 always actions and, 196–99 defining improvement at, 190–91 developing and engaging staff at, 192–94 development of district and school leaders, 194–96 Employee Engagement Survey, 197–99 Evidence Based Leadership and, 190, 191, 192–93 leader-led results rollout, 200–1 leader rounding and, 201–2 management of, 188–90 results of improvement efforts, 202–3 statistics for, 188 strategic plan for, 190


cascading, 24
case-based learning, 213
causal systems analysis (CSA), 6, 10, 150, 152, 156
causal systems diagrams, 4–8
CEEDAR, 90, 107
CEEDAR High Leverage Best Practices, 91, 93, 95, 99
Center for Educational Innovation and Improvement (CEii), 154
Central Law of Improvement, 25
Chambers, T., 26
change agents, 164, 177–80, 181
change ideas, 52, 53, 67, 71–74, 85
change interventions, 11
change management, 163–64, 164–65, 165–67
    assessing political landscapes and, 169–71
    data inventory of, 172
    five learning goals of, 167–82
    focusing members as change agents, 177–80
    implementing within an educational system, 171–72
    innovative configuration maps and, 180
    instructor-led discussions and, 173
    integrated approaches to, 176
    leaders, change agents, stakeholders and, 181–82
    lecture notes and, 175
    levels of concern and, 178–79
    local examples and, 176–77
    nested assignments and, 179
    networked improvement communities and, 170, 173–74
    note-taking guide and, 171–72
    pedagogical examples of, 168
    professional activities and, 179, 181–82
    stages of concern and, 178–79
    synchronous conversations and, 170–71
    use of principles of improvement science and, 173–77
chartering phase, 95
Chavkin, N.F., 35
Cincinnati Children's Hospital Intermediate Improvement Science Series, 122

class, 19
classism, 31
classroom instructional change (CIC), 177
coach, 19
cognitive biases, 162
Cohen, D.K., 127
Coleman, C., 158–61
collaborative revision process, xviii
collective problem definition, 34
community, 208, 211–12
community agreements, 124
community of practice, 137–38
compassion, 212–14
concerns-based adoption model (CBAM), 177
conscientização, 25
consensus decision-making, 114
constructivism, 39
content, 208, 209–10
context of change, 165
Continuing Administrative Licensure Program, 112
continuous change, 173, 190
continuous improvement content, 209
COVID-19, 195, 204
Crash Course Sociology, 30
creating continuity, 192
Creswell, J.W., 47, 52
critical friends, 170
critical pedagogy, 17
critical praxis, 17, 19–22
critical reflection, 38
critical self-reflection, 30
Crow, R., xvii, 55, 171
C-suite leaders, 187
Cuban, L., 72
Culturally Responsive School Leadership, 30
curriculum map, 107

D
Darling-Hammond, L., 26, 127
dashboard, 194
data inventory, 172
deficiency approach, 27
deficit cognitive frame, 27
deficit ideas, 33
deficit ideologies, 20, 27
deficit mindedness, 27
deficit theories, 27
deficit thinking, 27


deficit understanding, 27
deliberate practice, 132
Deming, W.E., 194
Design-Based School Improvement, 146
Dewey, J., 104
DiAngelo, R., 35
Difficult Conversations: How to Discuss What Matters Most, 37
disability studies, 23
discovery learning, 29
discrete innovations, 177
dissertation in practice (DiP), xv, 45–46, 47, 53, 59, 143, 151–53, 183
distal knowledge, 165
distributive leadership, 147
doctor of philosophy (PhD) programs/students, 44, 145
    articulating questions, 52
    literature use and, 50
    publication of research, 54
    research problems and, 46–47
    uncovering feasible problems, 49
    Also see educational doctorate
doctoral capstone credits, 145, 146
Doctoral Capstone Examining Committee, 145
doing to learn, 122
Dottin, R.L., 107
Downey, C.J., 69
driver diagrams, xviii, 9, 71, 84, 85, 105, 111, 150, 151, 156, 171
driver measures, 13
DuFour, R., 66
Dynamic Indicators of Basic Early Literacy Skills (DIBELS), 87

E
Eaker, R., 66
early literacy, 169
EdD Professional Leadership: Special Populations, 168
Education Leadership Policy Program (ELP), 90
educational doctorate (EdD) programs/students
    activity and questions for, 48, 51, 54, 58
    articulating questions, 52–53
    dissertation in practice (DiP), 45–46, 47, 59
    implementation of change effort, 54–55
    literature use and, 51
    new teaching skills and, 44–45
    as practitioners, 44, 45
    problem of practice (PoP), 47
    uncovering feasible problems, 49–50
    Also see doctor of philosophy, University of Maryland
educational justice, 17, 27, 39
educational leadership, xv
educational reform, 67
emancipatory paradigms, 39
emancipatory research, 23
empathy interviews, 49, 51, 150, 210
English language learners (ELL), 25–26, 89, 107, 110–11
English Learner Standards, 91, 93, 95, 99
episodic change, 190
equity, 18, 21, 22, 103, 104, 105, 107, 108, 110, 115, 142, 147, 148, 169
    activities based on, 98
    community-based, 23
    data on, 95
    principal leadership and, 89, 90, 91, 92, 93, 94
equity audit, 23, 34, 45, 96
equity-mindedness, 29
essential contexts, 120, 129
Evidence Based Leadership, 190, 191, 192, 196
Evidence for Improvement, 138
example pedagogies, xvii
existing data, 49, 51, 52
experiential learning, 121
extant improvement capabilities, 127
extant research skills, 129–30

F
failing forward, 112
failure, 74
Feletti, G., 64
fishbone diagrams, 5, 6, 7, 8, 9, 32, 33, 37, 49, 51, 52, 78, 83, 111, 150, 152
    analysis, 105
    annotated, 33
Fixsen, D.L., 74
formative assessment, 77
Fountas, I., 82, 84, 85, 87
Framework for the Initiation of Networked Improvement Communities, 122
Freire, P., 17, 20, 22, 23, 25, 27, 104


From the Achievement Gap to the Educational Debt, 26
Fullan, M., 104

G
Gallagher, D., 153
Gallup, 196, 198
gateway courses, 91, 98, 107, 108
go slow to go fast, 204
Göker, S.D., 58
Goldson, M., 155, 156
Gomez, L.M., 120
Gorski, P.C., 27
Got a Wicked Problem?, 70
Gould, S.J., 19
gratitude, 197, 204
Greco, P., 191
Green, T., 23–24
group dynamics, 128–29, 130–31
Guided Reading Program, 82
Gutherie, R.V., 19

H
Haas, E.M., 91, 110
Hall, G.E., 177
Hallinger, P., 64, 65
Harper, S., 29
Harter, J., 188, 196
Hawley, W., 142
Herr, K., 54
heteronormativity, 31
Heycox, K., 64
Higgins, L., 64
high leverage change ideas, 11
Hinnant-Crawford, B.N., xvii, 13, 18, 19, 22, 34
Hite, W., 143
Hochbein, C., 47
Holdheide, L.R., 110
Hood, S., 20
hooks, b., 25, 26
Hord, S.M., 177
Horsford, S.D., 18, 20
Hughes, C.A., 29

I
ideas for change, 134
iLEAD (Improvement Leadership Education and Development), 144
Illich, I., xix
Imig, D., 142, 143

improvement cycles, xviii
Improvement Guide, The, 25, 192
improvement science
    adopting in curriculum, xiii–xiv
    analyze data for improvement, 57
    approaches for developing capacity for, xiv–xvi
    capacity and, 208–9
    community and, 207
    compassion and, 207
    content and, 207
    context and, 207
    continuous change and, 173
    as critical praxis, 19–22
    critiques of, 18
    developing instructional expertise in, xvi–xix
    development of social conditions for, 133
    development progression for, 92
    five dimensions of, 208–14
    history and, 26
    implement for improvement, 56–57
    incremental changes and, 195–96
    introducing principles of, 146–47
    Langley's model for improvement, 21
    principal interns and, 111–14, 116
    principal licensure program courses and, xviii
    professional organizations and, xv–xvi
    reflect for improvement, 57–58
    at the school district level, xv
    seeing the system, 24–26
    six principles for improvement, 21–22, 89–90, 95, 165, 167, 173–77
    social justice and, 18–19, 27, 39, 106
    specifying measures in, 13–14
    strategize for improvement, 56
    systems and, 67–68
    teaching a practical understanding of, 66–67
    teaching principles of, 146
    three actions that define, 191
    user-centered and problem-specific, 22–24
    Also see change management, networked improvement communities, principal licensure programs, TISEJ


Improvement Science Dissertation in Practice, The, 55
Improvement Science for School Leaders, 91
Improvement Science in Education: A Primer, 18, 22
improvement scientists, 161
inclusion, 18, 19, 89, 94, 98, 103, 108, 137, 208
inclusive excellence scorecard, 24
innovative configuration map, 180
Institute for Healthcare Improvement Breakthrough Series Collaborative, 122
Institute of Education Sciences, 164
Institute of Medicine, 47
instructional scaffolds, 85
instructive teaching case, 65
instructor, 19
interviews, 52
Is Everyone Really Equal, 35
Ishikawa diagram. See fishbone diagram

J
Jack, A., 36
Jhally, S., 26
Johnson, L., 67, 70
Journal of Cases in Educational Research, 65
justice, 18

K
K-12 Dual Language Program, 114
Khachatryan, E., 133
Khalifa, M., 30
Kim, S., 66, 75
Kinder Institute for Urban Research, 169
Knackendoffel, A., 166
Kolb, D.A., 121
Konetes, G., 56
Kurtz, S., 64

L
Ladson-Billings, G., 26, 35
Langley, G., xvii, 6, 8, 21, 43, 55, 67, 71, 72, 73, 133, 192
Lareau, A., 36
leader rounding, 188, 201–2
leaders go first, 201, 204
leading measures, 187
learning by doing, 112
learning designs, 122


Learning Lab. See NIC Design Learning Lab
learning science, 164
Learning to Improve, 21, 22, 134, 146
LeMahieu, P., 92, 95, 96, 97
levels of use (LoU), 178–79
leverage, 67, 69–71
leverage point, 67, 69–71
Lightfoot, D., 35
literature scan, 6
local knowledge, 6
Love, B., 29
Love, P., 165
Lyon, A.R., 57

M
Macedo, D., 25
Macias, L.V., 36
Martinez, K., 43
Maryland State Department of Education, 145
Matthews, R.S., 91
McMillan Cottom, T., 26
McNair, T.B., 27, 29
Meadows, D.H., 67
Means, D.R., 37
Meltzer, D., 66
Merchand, J.E., 64
Merriam, S.B., 31
Metz, A., 74
Milner, R., 26
mini-experimental inquiry cycles, 12
Mintrop, R., 47, 146, 147, 164, 165, 169
monolingualism, 31
motility, 9
Myung, J., 43

N
National Education Association (NEA), 142
National Educational Leader Professional (NELP) Standards, 142, 145, 146
networked improvement communities (NIC), xviii, 119–20, 120–21, 146
    accelerated learning through, 153–57
    analytical partnerships and, 138
    change management and, 170, 173–74, 177
    community agreements and, 124
    community of practice and, 137–38
    developmental evaluation of, 138
    essential contexts while guiding initiation, 129–33
    experiential learning and, 121
    four characteristics of, 120
    four essential contexts of, 125–29
    four key stages of, 123
    guiding the emergence of, 121–24
    instructional cases and, 124–25
    methods of analysis and, 124–25
    note-taking guide to, 171–72
    professional development and, 175
    recommendations for instructors, 134–37
    research and, 138
    University of Maryland and, 153–57
network phase, 95
Nevarez, C., 26
New York City Department of Education, 44, 52, 53
Newberg School District, 111
Newman, S., 91
NIC Design Learning Lab, 122–24, 134
    shaping syllabus for, 123, 124, 134
    Also see networked improvement communities
Nieto, S., 27, 28
Nine Principles, 193–94
Nonsense Word Fluency Assessment, 77, 87
note-taking guide, 171–72

O
On Diversity: Access Ain't Inclusion, 36
one and done, 150
observations, 49, 51
Office of Talent Development, 154
Olivar, M.E., 112–14
Oregon Department of Education, 104
Oregon Educator Equity Report, 95
Oregon School Administrator and English Language Learner License Standards, 106
organization-focused initiatives, 177
Organizational Excellence, 193
organizational systems, 67–68
outcome measures, 13

P
PARCC, 156
Parkerson, E., 133
Passeron, J.C., 20

PDSA. See plan-do-study-act
pedagogical content knowledge, xvi
Pedagogy of the Oppressed, 17, 27
Penuel, W., 153
Perry, J.A., 44, 45, 47, 55
personal improvement project (PIP), 1, 105, 213
    background of, 2
    causal systems diagrams, 4–8
    changes leading to improvement, 9–10
    inquiry questions and, 8–9
    PDSA cycle and, 10
    problem definition and, 4–14
    problem exploration and, 3–14
    using self as lens for, 3
personal theory of improvement, 150
Peterson, D.S., 95, 104, 105, 106, 108, 110, 111, 191
PGCPS Coherence Framework, 156–57
PGCPS Office of Accountability, 156
PGCPS School Performance Plan, 156, 157
Pilcher, J., 188, 190, 191, 192, 193, 194, 196, 197, 199, 201, 202
Pinnell, G.S., 82, 84, 85, 87
plan-do-study-act (PDSA), xviii, 10, 11, 55, 63–66, 96, 105, 111, 112, 113, 114, 152, 187, 191, 194, 203, 210
    case-based instruction and, 65–66
    change ideas and, 71–74
    change management and, 170
    failure and, 74
    feedback from students and, 76–77
    illustrative lesson plan, 79, 82–84
    illustrative root cause analysis, 83–84
    instructive teaching cases and, 65, 85–86, 87
    leverage and, 69–71
    leverage points and, 69–71
    PDISAN and, 156–57
    pedagogical planning and, 77–79
    practical understanding of, 66–67
    problem-based learning and, 64–66
    systems and, 67–68, 69
    tinkering and, 72
    using cases in, 75–76
political landscape, 163, 165, 166
Portland State University, 90, 103
    Also see principal licensure programs
positivism, 39


Post-Doctoral Improvement Science Action Network (PDISAN), 154–57
    five core functions and goals of, 154
    launch of, 154–55
    problem of practice analysis, 155, 156
pragmatic paradigms, 39
premortem process, 97
primary drivers, 52, 53
Prince George's County Public Schools (PGCPS), 143
Principal Leadership: High Leverage Practices to Promote Inclusion and Equity, 110
Principal Leadership: Linguistically and Culturally Diverse Students and Families, 110
Principal Licensure Program Standards, 91, 93, 95
principal licensure programs, redesign of, 89–90, 99–100, 103–4
    development progression of, 92–93
    gateway courses and, 107, 108
    lesson designs for, 94–95
    lessons learned from, 114–16
    new learning and, 95–97
    new licensure standards and, 105–6
    at Portland State University, 90–91, 104–5
    principal interns and, 111–14, 116
    problem of practice and, 91, 107
    recommendations for faculty, 97–99
    redesign process for, 92–99, 107–11
    root cause analysis and, 107
Principles of Educational Research and Data Analysis I, 109
problem-based learning (PBL), 64–66
    case-based instruction and, 65–66, 75–76, 82
    four characteristics of, 64–65
    instructive teaching cases and, 65
    instructor as primary actor, 65
problem of person, 4
problem of practice (PoP), 4, 47, 91, 107, 187
    change management and, 170, 172, 173, 176, 181
    PDISAN and, 156, 158
    user-centered, 147–49, 149–51
problem statement, 5
process mapping, 70

process measures, 13
professional development, 169, 175
professional identities, 127–28
professional learning communities, 66, 82, 113, 114, 178
Professional Practice Doctorate, 144–45
proximity to problem, 125–26
PSAT, 156
Pyne, K.B., 37

Q
Quantway, 44

R
racism, 31
Racism Without Racists, 29
Rao, H., 97
recognition, 197, 204
reflection, 20
reflective analysis, 125
reflective leader, 58
reliability, 201
research-practice partnerships, 153
research-to-practice gap, 169
results rollouts, 200–1, 203
Rethinking Racism, 29
root cause analysis (RCA), 78, 83, 105, 132, 187
Russell, J., 122

S
scaling up, 97
School Leadership Standards, 93, 95
scholarly practitioner, 143, 153
School Performance Plan, 156
Scope and Responsibilities of School Administrators, 106, 109
scorecard, 194
secondary drivers, 52, 53
Senge, P.M., 67, 70, 126, 188
Sensoy, O., 35
Shulman, L., xvi, 207
SMART goals, 2
Smarter Every Day, 162
social justice, xvii, 18, 39, 45, 100, 106, 107, 108, 112, 169, 210
solutionitis, 26, 147
Sowers, N., 44
Special Populations, 168–69
speed dating, 170
sphere of influence, 169


spreading phase, 96
stages of concern (SoC), 178–79
stakeholder engagement, 6
stakeholders, 164
Stanford University, 97
Stanley, D., 23, 24
Statway, 44
Sternberg, R., 1
Sternke, J., 196
Stone, C., 2
Stone, D., 37
strategize-implement-analyze-reflect (SIAR) cycles, 55, 58
Strayhorn, T., 26
Strother, S., 44
students, 19
Studer Education, 188, 190, 197
Studer, Q., 188, 190, 191, 192, 193, 194, 196, 197, 199, 201, 202
Summit for Improvement in Education, xv
supervised clinical practice experience, 107
survey results, 188
SWOT analysis, 37
synchronous conversations, 170–71
systematic replication, 165
systems, 67
systems change, 169
systems improvement maps, 49, 51, 52

T
tactic-driven tools, 191
teacher efficacy, 147
Teacher Standards and Practices Commission (TSPC), 91
teaching cases, 75–76
Teaching Improvement Science for Educational Justice (TISEJ), 17, 28
    anticipation and, 31–33
    explicit instruction and, 29
    facilitation and, 37–38
    preparation and, 33–37
teaching, scholarship of, 207–8
TED Talks, 30
theory of action, 10
theory of improvement, 9, 151
Thonstad, C., 111–12
Tillman, L.C., 20
tinkering, 72

top box percentage, 198–99
transition pipeline, 169
TSPC English Language Learner Standards, 91
TSPC School Leadership Standards, 91, 99
Tujec, T., 70
Tyack, D.B., 72

U
uncritical positivistic analysis, 20
University of Houston, 168
University of Maryland (UMD) EdD program, 141–42, 142
    networked improvement communities and, 146, 153–57
    reflections from graduate of, 158–61
    revision of, 143–44, 144–45
    teaching improvement science and, 146–53
    Also see improvement science
University of Texas Research and Development Center for Teacher Education, 177

V
Valencia, R.R., 27
Voegele, J.D., 107

W
Wertsch, J., 2
we/they culture, 189, 197, 203, 204
what right looked like, 197, 200
white supremacist capitalist patriarchy, 24–25, 26
wicked problems, 120
Williams, D., 23, 24
Williams, W., 1
Willis, J.W., 44
Wood, J.L., 26
work systems, 67–68
workplace productivity, 196
why questions. See 5 Why's

Y
Yamada, H., 91
yes/no trees, 37