Program Evaluation in Practice: Core Concepts and Examples for Discussion and Analysis (ISBN 1118450213, 9781118450215)

The lack of teaching cases in program evaluation is often cited as a gap in the field. This ground-breaking book fills that gap.


Table of Contents:
Program Evaluation in Practice: Core Concepts and Examples for Discussion and Analysis
Contents
List of Tables, Figures, Exhibits, and Boxes
Preface
Acknowledgments
The Author
Part One: Introduction
Chapter One: Foundations of Program Evaluation
What is Program Evaluation?
Internal and External Evaluators
How to Use This Book
The Evaluation Objective
Designing and Developing an Evaluation Matrix
Data Collection
Triangulation of Data
Writing the Evaluation Report
Dissemination and Use of Evaluation Findings
Summary
Key Concepts
Discussion Questions
Class Activities
Suggested Reading
Chapter Two: Ethics in Program Evaluation and an Overview of Evaluation Approaches
Ethics in Program Evaluation
What is an Evaluation Approach?
Objectives-Based Approach
Decision-Based Approach
Participatory Approach
Consumer-Oriented Approach
Expertise-Oriented Approach
Eclectic Approach
Summary
Key Concepts
Discussion Questions
Class Activities
Suggested Reading
Chapter Three: In-Depth Look at the Objectives-Based Approach to Evaluation
Objectives-Based Approach
How to Use Evaluation Objectives
Summary
Key Concepts
Discussion Questions
Class Activities
Suggested Reading
Part Two: Case Studies
Chapter Four: Improving Student Performance in Mathematics Through Inquiry-Based Instruction
The Evaluator
The Program
The Evaluation Plan
Summary of Evaluation Activities and Findings
Final Thoughts
Key Concepts
Discussion Questions
Class Activities
Suggested Reading
Chapter Five: Evaluation of a Community-Based Mentor Program
The Evaluator
The Program
The Evaluation Plan
Summary of Evaluation Activities and Findings
Final Thoughts
Key Concepts
Discussion Questions
Class Activities
Suggested Reading
Chapter Six: Teacher Candidates Integrating Technology into Their Student Teaching Experience
The Evaluators
The Program
The Evaluation Plan
Summary of Evaluation Activities and Findings
Final Thoughts
Key Concepts
Discussion Questions
Class Activities
Suggested Reading
Chapter Seven: Evaluation of a Professional Development Technology Project in a Low-Performing School District
The Evaluator
The Program
The Evaluation Plan
Summary of Evaluation Activities and Findings
Final Thoughts
Key Concepts
Discussion Questions
Class Activities
Suggested Reading
Chapter Eight: Expansion of a High School Science Program
The Evaluators
The Program
The Evaluation Plan
Summary of Evaluation Activities and Findings
Final Thoughts
Key Concepts
Discussion Questions
Class Activities
Suggested Reading
Chapter Nine: Evaluation of a Proven Practice for Reading Achievement
The Evaluators
The Program
The Evaluation Plan
Summary of Evaluation Activities and Findings
Final Thoughts
Key Concepts
Discussion Questions
Class Activities
Suggested Reading
Chapter Ten: Project Plan for Evaluation of a Statewide After-School Initiative
The Evaluator
The Program
The Evaluation Plan
Summary of Evaluation Activities and Findings
Final Thoughts
Key Concepts
Discussion Questions
Class Activities
Suggested Reading
Chapter Eleven: Evaluation of a Training Program in Mathematics for Teachers
The Evaluators
The Program
The Evaluation Plan
Summary of Evaluation Activities and Findings
Final Thoughts
Key Concepts
Discussion Questions
Class Activities
Suggested Reading
Chapter Twelve: An Evaluator-in-Training’s Work on a School Advocacy Program
The Evaluator
The Program
The Evaluation Plan
Summary of Evaluation Activities and Findings
Final Thoughts
Key Concepts
Discussion Questions
Class Activities
Suggested Reading
Chapter Thirteen: Evaluation of a School Improvement Grant to Increase Parent Involvement
The Evaluators
The Program
The Evaluation Plan
Summary of Evaluation Activities and Findings
Final Thoughts
Key Concepts
Discussion Questions
Class Activities
Suggested Reading
Chapter Fourteen: Evaluating the Impact of a New Teacher Training Program
The Evaluators
The Program
The Evaluation Plan
Summary of Evaluation Activities and Findings
Final Thoughts
Key Concepts
Discussion Questions
Class Activities
Suggested Reading
References
Index


PROGRAM EVALUATION IN PRACTICE


PROGRAM EVALUATION IN PRACTICE
Core Concepts and Examples for Discussion and Analysis
Second Edition

Dean T. Spaulding


Cover design by © Arthur S. Aubry/Getty Cover image: © khalus/Getty Copyright © 2014 by John Wiley & Sons, Inc. All rights reserved. Published by Jossey-Bass A Wiley Brand One Montgomery Street, Suite 1200, San Francisco, CA 94104-4594—www.josseybass.com No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, scanning, or otherwise, except as permitted under Section 107 or 108 of the 1976 United States Copyright Act, without either the prior written permission of the publisher, or authorization through payment of the appropriate per-copy fee to the Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923, 978-750-8400, fax 978-646-8600, or on the Web at www.copyright.com. Requests to the publisher for permission should be addressed to the Permissions Department, John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, 201-748-6011, fax 201-748-6008, or online at www.wiley.com/go/permissions. Limit of Liability/Disclaimer of Warranty: While the publisher and author have used their best efforts in preparing this book, they make no representations or warranties with respect to the accuracy or completeness of the contents of this book and specifically disclaim any implied warranties of merchantability or fitness for a particular purpose. No warranty may be created or extended by sales representatives or written sales materials. The advice and strategies contained herein may not be suitable for your situation. You should consult with a professional where appropriate. Neither the publisher nor author shall be liable for any loss of profit or any other commercial damages, including but not limited to special, incidental, consequential, or other damages. Readers should be aware that Internet Web sites offered as citations and/or sources for further information may have changed or disappeared between the time this was written and when it is read. Jossey-Bass books and products are available through most bookstores. To contact Jossey-Bass directly call our Customer Care Department within the U.S. at 800-956-7739, outside the U.S. at 317572-3986, or fax 317-572-4002. Wiley publishes in a variety of print and electronic formats and by print-on-demand. Some material included with standard print versions of this book may not be included in e-books or in print-ondemand. If this book refers to media such as a CD or DVD that is not included in the version you purchased, you may download this material at http://booksupport.wiley.com. For more information about Wiley products, visit www.wiley.com. Library of Congress Cataloging-in-Publication Data Spaulding, Dean T. Program evaluation in practice : core concepts and examples for discussion and analysis / Dean T. Spaulding. – Second edition pages cm Includes bibliographical references and index. ISBN 978-1-118-34582-5 (pbk.) – ISBN 978-1-118-45021-5 (pdf) – ISBN 978-1-118-45020-8 (epub) 1. Educational evaluation. 2. School improvement programs–Evaluation. I. Title. LB2822.75.S69 2014 379.1ʹ58–dc23 2013027715 Printed in the United States of America SECOND EDITION

PB PRINTING 10 9 8 7 6 5 4 3 2 1


LIST OF TABLES, FIGURES, EXHIBITS, AND BOXES

TABLES
Table 1.1 Evaluation Matrix for the Summer Camp Project
Table 1.2 Stakeholder Perceptions of Strengths of and Barriers to Camp
Table 1.3 Status of Prior Recommendations Made for the Summer Camp Follow-Up Sessions
Table 3.1 Overview of the Scope and Sequence of Evaluation Objectives
Table 4.1 Thomas's Evaluation Matrix Template for the Math Project
Table 5.1 Evaluation Matrix for the Mentor Program
Table 7.1 The District's Technology Benchmarks
Table 7.2 Overview of the Logic Model Guiding the Project Evaluation

FIGURES
Figure 1.1 Formative and Summative Evaluation
Figure 2.1 Determining a Program's Worth or Merit
Figure 2.2 Overview of Evaluation Approaches
Figure 4.1 The RFP Process
Figure 4.2 Overview of Project Activities
Figure 10.1 Structure of After-School Program Higher Collaboration of Services
Figure 11.1 Model of the Top-Down Approach to Professional Development
Figure 11.2 Model of Professional Development with Action Research
Figure 11.3 Overview of the Action Research Model

EXHIBITS
Exhibit 1.1 Parent or Guardian Perception Survey—Summer Camp
Exhibit 1.2 Interview Protocol for the Summer Camp Project
Exhibit 1.3 Overview of an Evaluation Objective and Findings
Exhibit 1.4 Example of an Evaluation Objective and Finding Focused on Program Modifications
Exhibit 6.1 Technology Use and Integration Checklist for Portfolio Analysis

BOXES
Box 1.1 Overview of the Framework Guiding Each Case Study
Box 1.2 Evaluation Objectives for the Summer Camp Project
Box 1.3 General Categories of Evaluation Objectives (Example for an After-School Program)
Box 2.1 Example of an Evaluation Objective and Benchmark
Box 4.1 The RFP Process
Box 4.2 Thomas's Evaluation Objectives
Box 5.1 Program Goals
Box 5.2 What is Evaluation Capacity?
Box 6.1 Evaluation Objectives
Box 6.2 Overview of Portfolios in Education and Teacher Training
Box 7.1 Overview of Logic Models
Box 8.1 Overview of Jennifer and Ed's Annual Evaluation Plan
Box 9.1 Evaluation Questions for the Reading Right Program
Box 10.1 Overview of Broad Categories for After-School Program Activities
Box 11.1 Overview of Action Research
Box 12.1 Sampling of Community Activities


For Mr. Mugs, my laptop-lapdog


PREFACE

In this second edition you will find some new chapters and new cases. Most significantly, you will find a chapter focusing on the basic theories and approaches to program evaluation. You will also find a chapter dedicated to objectives-based evaluation, an approach that most professional evaluators working today use. In addition, there is a new section on ethics in program evaluation as well as the Joint Committee on Standards for Educational Evaluation's standards for evaluating educational programs. Case studies from the first edition have been updated, as have readings, discussion questions, and class activities.

For over twenty years, research and literature in the area of teaching program evaluation have noted that real-world opportunities and the skills gained and honed from such experiences are critical to the development of highly trained, highly skilled practitioners in the field of program evaluation (Brown, 1985; Chelimsky, 1997; Trevisan, 2002; Weeks, 1982). According to Trevisan and others, traditional courses in program evaluation have been designed to provide students with authentic experiences through in-course or out-of-course projects. Although both approaches have notable benefits, they are not without their share of limitations.

Didactic learning environments that use in-course projects have often been criticized for being too structured in their delivery. Trevisan (2004) and others note that these activities typically do not require students to leave campus or collect any "real" data that will be used by clients, in any meaningful way, to make decisions or to effect change. In such cases, these activities may consist of presenting students with a fictitious evaluation project to be designed based on a given set of goals, objectives, or variables for a fictitious agency, group, or company. Such involvement, however, typically offers no more than a cookie-cutter approach, with little room for student exploration, questioning, or growth in any sort of political or social context.


In an attempt to shift this paradigm, Trevisan (2002) describes a popular model employed by many institutions of higher education, whereby an evaluation center is established to provide a more coordinated effort toward providing in-depth learning opportunities for evaluators-in-training. Typically such a center acts as a sort of agency or consultancy, contracting with outside agencies, schools, or groups and serving as an external evaluator. According to Trevisan, this approach incorporates long-term evaluation projects of a year or more, to be conducted by full-time graduate students under the direct supervision of a full-time faculty member. Trevisan notes one of the benefits of such an approach: it provides graduate students interested in the field of evaluation with long-term, realistic projects that tend to reflect much of the work, dilemmas, issues, and ethical considerations that they will encounter on a daily basis as professional evaluators. Although this approach certainly produces an experience that is more realistic, it also presents numerous challenges. For example, one barrier that many instructors face when attempting to implement a more hands-on, real-world project to teach program evaluation is the infrastructural challenges of an academic setting. This infrastructure not only is daunting to faculty but also is often counterproductive to and intrusive into the students’ overall learning experience. For example, most institutions of higher education function within a fifteen-week semester schedule, starting in September and ending in May. Although there are certainly examples of real-world evaluation projects that can be conducted from start to finish in such a short time (Spaulding & Lodico, 2003), the majority of real-world projects— especially those funded at the state or federal level—have timelines that stretch out across multiple years and require annual reporting. In addition, many of these state and federal evaluation projects follow a July 1 to June 30 or August 1 to July 31 funding cycle, with the majority of data analysis and report writing necessarily occurring during the summer, when many faculty (and students) are not usually on campus. Another barrier to teaching program evaluation with a realworld project is the variability in the quality of the experiences from project to project. One difficulty with using real-world projects is that they are out of the hands of the instructor.


In some cases, projects stall after a good start, partners change, or a host of other unexpected things happen. In situations in which the student is placed in an agency or group to work as an internal evaluator, the experience could very well turn out not to be as rich as expected. To address some of these issues, instructors have used case studies in their classes to assist in teaching program evaluation. Although I am not suggesting that case studies by themselves will rectify the difficulties just noted or serve as a replacement for real-world experiences, they do allow evaluators-in-training to vicariously experience an evaluation. Further, case studies place these evaluators in decision-making situations that they otherwise might not be able to experience. Case studies also provide opportunities for rich discussion and learning while ensuring that certain learning objectives desired by the instructor are achieved. Until now, the effort to use case studies when teaching program evaluation has been mainly a grassroots initiative, with instructors bringing into class examples of evaluation projects that they themselves have worked on and contextualizing them for their students. Although the use of case studies and case study books is evident in certain disciplines (such as child and adolescent development), “the absence of readily available teaching cases has been a significant gap in the field of evaluation” (Patton & Patrizi, 2005, p. 1). The purpose of this book is to provide a variety of evaluation projects to be discussed, analyzed, and reflected on. The case studies are intended to foster rich discussions about evaluation practices, and the book’s comprehensive scope means that it should promote discussions touching on the real issues that arise when conducting an evaluation project. For the instructor, this book is not meant to be a stand-alone text for teaching and learning about program evaluation. Its main purpose is to be used as an educational supplement to any course— introductory or advanced—in program evaluation. In addition, these cases should not be viewed as examples of exemplary program evaluations. Although the methods and tools featured in the cases closely reflect those used in real-world evaluations, classroom discussions and activities could certainly focus on expanding and improving those tools and overall methodologies.


I hope you enjoy reading and discussing these cases as much as I have enjoyed revisiting them. An instructor's supplement is available at www.josseybass.com/go/spaulding2e. Additional materials, such as videos, podcasts, and readings, can be found at www.josseybasspublichealth.com. Comments about this book are invited and can be sent to [email protected].


ACKNOWLEDGMENTS

In writing this book I reviewed and reflected on over one hundred program evaluations that I have conducted in the last ten years. It brought back many faces of people I have worked with in the past, reminding me of the incredible opportunities I have had to work with so many talented evaluators and project directors in the field. I would like to thank all of them for their dedication and hard work in delivering quality programming.

Proposal reviewers Kathryn Anderson Alvestad, Brenda E. Friday, Sheldon Gen, Kristin Koskey, Leslie McCallister, Jennifer Ann Morrow, Patrick S. O'Donnell, Linda A. Sidoti, and Amy M. Williamson provided valuable feedback on the first edition and revision plan. Tomika Greer, Kristin Koskey, Joy Phillips, and Iveta Silova offered thoughtful and constructive comments on the completed draft manuscript.


THE AUTHOR

Dean T. Spaulding is an associate professor at the College of Saint Rose in the Department of Educational Psychology. He is the former chair of teaching program evaluation for the American Evaluation Association. He is also one of the authors of Methods in Educational Research: From Theory to Practice (Jossey-Bass, 2006). Dr. Spaulding has served as a professional evaluator for more than a decade and has worked extensively in K–12 and higher education settings. Although his work has focused primarily on after-school, enrichment, and mentor programs, he has also worked to evaluate programs in public health, mental health, and special education settings at both the state and federal levels.


PROGRAM EVALUATION IN PRACTICE


PART 1
INTRODUCTION


CHAPTER 1
FOUNDATIONS OF PROGRAM EVALUATION

LEARNING OBJECTIVES

After reading this chapter you should be able to
1. Provide a basic definition of program evaluation
2. Understand the different activities conducted by a program evaluator
3. Understand the difference between formative and summative evaluation
4. Understand the difference between internal and external evaluation
5. Understand the difference between program evaluation and research

program: A temporary set of activities brought together as a possible solution to an existing issue or problem.

formative evaluation: A type of evaluation whereby data collection and reporting are focused on the now, providing ongoing, regular feedback to those in charge of delivering the program.

summative evaluation: A type of evaluation whereby data collection and reporting occur after the program and all activities have taken place.

PROGRAM EVALUATION VIGNETTE

An urban school district receives a three-year grant to implement an after-school program to improve student academic achievement. As staff start to implement the program, the district administrator realizes that an evaluation of the program is mandatory. The district administrator also realizes that such work requires the expertise of someone from outside the district, and the superintendent, with permission from the school board, hires an external evaluator from a local college. After reviewing the grant, the evaluator conducts an initial review of

the program’s curriculum and activities. Next the evaluator develops an evaluation plan and presents it at the next school board meeting. The evaluation plan encompasses the objectives that the evaluator has developed and the tools that he will use to collect the data. The evaluator discusses how the plan will provide two different types of feedback as part of the data collection process. Formative evaluation will be used to address issues as the program is happening. For example, one question might be: Are all the stakeholders aware of the program and its offerings? Summative evaluation will be used to answer the overall evaluation question: Did students in the after-school program have a significant increase in their academic achievement compared to those students who did not participate? The board approves the plan, and the evaluator spends the following month collecting data for the formative and summative portions of the project. At the next board meeting the evaluator presents some of the formative evaluation data and reports that there is a need to increase communication with parents. He suggests that the program increase the number of fliers that are sent home, update the school Web site, and work more collaboratively with the parent council. In addition, he notes that there is wide variation in parent education levels within the district and that a large number of parents speak Spanish as their native language. The evaluator recommends that phone calls be made to parents and that all materials be translated into Spanish. At the end of project year one, summative findings are presented in a final report. The report shows that lack of parent communication is still a problem, and that there is little difference in scores on the standardized measures used to gauge academic achievement between those students who participated in the program and comparable students who did not participate. Based on the evaluation report, district officials decide to make modifications to the program for the upcoming year. A parent center, which was not part of the original plan, is added, in the belief that this will help increase parent involvement. In addition, the administration decides to cut back on the number of extracurricular activities the after-school program is offering and to focus more on tutoring and academic interventions, hoping that this will increase academic achievement in year two.


WHAT IS PROGRAM EVALUATION?

A common definition used to separate program evaluation from research is that program evaluation is conducted for decision-making purposes, whereas research is intended to build our general understanding and knowledge of a particular topic and to inform practice. In general, program evaluation examines programs to determine their worth and to make recommendations for programmatic refinement and success. Although such a broad definition makes it difficult for those who have not been involved in program evaluation to get a better understanding, it is hoped that the vignette just given highlighted some of the activities unique to program evaluation. Let's look a little more closely at some of those activities as we continue this comparison between program evaluation and research.

What Is a Program? One distinguishing characteristic of program evaluation is that it examines a program. A program is a set of specific activities designed for an intended purpose, with quantifiable goals and objectives. Although a research study could certainly examine a particular program, most researchers tend to be interested in either generalizing findings back to a wider audience (that is, quantitative research) or discussing how the study’s findings relate back to the literature (that is, qualitative research). With most research studies, especially those that are quantitative, researchers are not interested in knowing how just one after-school program functioned in one school building or district. However, those conducting program evaluations tend to have precisely such a purpose. Programs come in many different shapes and sizes, and therefore so do the evaluations that are conducted. Educational programs can take place anytime during the school day or after. For example, programs can include a morning breakfast and nutrition program, a high school science program, an afterschool program, or even a weekend program. Educational programs do not necessarily have to occur on school grounds. An evaluator may conduct an evaluation of a community group’s educational program or a program at the local YMCA or Boys & Girls Club.


client: An individual or group whom the evaluator is working for directly.

Accessing the Setting and Participants

Another characteristic that sets program evaluation apart from research is the difference in how the program evaluator and the researcher gain access to the project and program site. In the vignette, the program evaluator was hired by the school district to conduct the evaluation of its after-school program. In general, a program evaluator enters into a contractual agreement either directly or indirectly with the group whose program is being evaluated. This individual or group is often referred to as the client. Because of this relationship between the program evaluator and the client, the client could restrict the scope of what the evaluator is able to look at. To have the client dictate what one will investigate for a research study would be very unusual. For example, a qualitative researcher who enters a school system to do a study on school safety might find a gang present in the school and choose to follow the experience of students as they try to leave the gang. If a program evaluation were conducted in the same school, the evaluator might be aware of the gang and the students trying to get out of the gang, and this might strike the evaluator as an interesting phenomenon, but the evaluator would not pursue it unless the client perceived it as an important aspect of school safety or unless gang control fit into the original objectives of the program.

Collecting and Using Data

As demonstrated in the vignette, program evaluators often collect two different forms of evaluation data: formative and summative. A further discussion about formative and summative evaluation is presented later in this section; essentially, the purpose of formative data is to change or make better the very thing that is being studied (at the very moment in which it is being studied). Formative data typically is not collected in most applied research approaches. Rarely would the researcher have this reporting relationship, whereby formative findings are presented to stakeholders or participants for the purposes of immediately changing the program.

Changing Practice

Although program evaluators use the same methods as researchers do to collect data, program evaluation is different from


research in its overall purpose or intent, as well as in the speed at which it changes practice. The overall purpose of applied research (for example, correlational, case study, or experimental research) is to expand our general understanding of or knowledge about the topic and ultimately to inform practice. Although gathering empirical evidence that supports a new method or approach is certainly a main purpose of applied research, this doesn’t necessarily mean that people will suddenly abandon what they have been doing for years and switch to the research-supported approach. In the vignette, we can see that change occurred rapidly through the use of program evaluation. Based on the evaluation report, administrators, school board members, and project staff decided to reconfigure the structure of the after-school program and to establish a parent center in the hope of increasing parent involvement. It was also decided that many of the extracurricular activities would be eliminated and that the new focus would be on the tutorial component of the program, in the hope of seeing even more improvement in students’ academic scores in the coming year. For another example, consider applied research in the area of instructional methods in literacy. In the 1980s the favored instructional approach was whole language; however, a decade of research began to support another approach: phonics. Despite the mounting evidence in favor of phonics, it took approximately a decade for practitioners to change their instruction. In the early 1990s, however, researchers began to examine the benefits of using both whole language and phonics in what is referred to as a blended approach. Again, despite substantial empirical evidence, it took another ten years for many practitioners to use both approaches in the classroom. This is admittedly a simplified version of what occurred; the purpose here is to show the relationship between applied research and practice in regard to the speed (or lack of speed) with which systems or settings that applied researchers evaluate implement changes, based on applied research. Although there are certainly many program evaluations after which corresponding changes do not occur swiftly (or at all), one difference between program evaluation and research is the greater emphasis in program evaluation on the occurrence of such change. In fact, proponents of certain philosophies and approaches in program evaluation believe that if the evaluation report and recommendations are not used by program staff to make decisions and


changes to the program, the entire evaluation was a complete waste of time, energy, and resources (Patton, 1997).

Reporting Findings and Recommendations

Another feature of program evaluation that separates it from research is the way in which program evaluation findings are presented. In conducting empirical research it is common practice for the researcher to write a study for publication—preferably in a high-level, refereed journal. In program evaluation, as shown in the vignette, the findings are presented in what is commonly referred to as the evaluation report, not through a journal article. In addition, the majority of evaluation reports are given directly to the group or client that has hired the evaluator to perform the work and are not made available to others.

Formative and Summative Evaluation

Both quantitative and qualitative data can be collected in program evaluation. Depending on the purpose of and audience for the evaluation, an evaluator may choose to conduct an evaluation that is solely quantitative or solely qualitative, or may take a mixed-methods approach, using quantitative and qualitative data within a project. The choice of whether to conduct a summative or a formative evaluation is not exclusively dictated by whether the evaluator collects quantitative or qualitative data. Many people have the misperception that summative evaluation involves exclusively quantitative data and that qualitative data is used for formative evaluation. This is not always the case. Whether evaluation feedback is formative or summative depends on what type of information it is and when it is provided to the client (see Figure 1.1).

[Figure 1.1, Formative and Summative Evaluation, shows two strands: formative data collected by the evaluator is presented to program directors or the client so that changes can be made to the program as it is occurring, while summative data collected by the evaluator is reported in an end-of-year report to measure whether benchmarks and program goals and objectives have been met.]

Data for summative evaluation is collected for the purpose of measuring outcomes and how those outcomes relate to the overall judgment of the program and its success. As demonstrated in the vignette, summative findings are provided to the client at the end of the project or at the end of the project year or cycle. Typically, summative data includes such information as student scores on standardized measures—state assessments, intelligence tests, and content-area tests, for example. Surveys and qualitative data gathered through interviews with stakeholders may also serve

Reporting Findings and Recommendations Another feature of program evaluation that separates it from research is the way in which program evaluation findings are presented. In conducting empirical research it is common practice for the researcher to write a study for publication—preferably in a high-level, refereed journal. In program evaluation, as shown in the vignette, the findings are presented in what is commonly referred to as the evaluation report, not through a journal article. In addition, the majority of evaluation reports are given directly to the group or client that has hired the evaluator to perform the work and are not made available to others. Formative and Summative Evaluation Both quantitative and qualitative data can be collected in program evaluation. Depending on the purpose of and audience for the evaluation, an evaluator may choose to conduct an evaluation that is solely quantitative or solely qualitative, or may take a mixedmethods approach, using quantitative and qualitative data within a project. The choice of whether to conduct a summative or a formative evaluation is not exclusively dictated by whether the evaluator collects quantitative or qualitative data. Many people have the misperception that summative evaluation involves exclusively quantitative data and that qualitative data is used for formative evaluation. This is not always the case. Whether evaluation feedback is formative or summative depends on what type of information it is and when it is provided to the client (see Figure 1.1). Data for summative evaluation is collected for the purpose of measuring outcomes and how those outcomes relate to the overall judgment of the program and its success. As demonstrated in the vignette, summative findings are provided to the client at the end of the project or at the end of the project year or cycle. Typically, summative data includes such information as student scores on standardized measures—state assessments, intelligence tests, and content-area tests, for example. Surveys and qualitative data gathered through interviews with stakeholders may also serve


as summative data if the questions or items are designed to elicit participant responses that summarize their perceptions of outcomes or experiences. For example, an interview question that asks participants to discuss any academic or behavioral changes they have seen in students as a result of participating in an after-school program will gather summative information. This information would be reported in an end-of-year report. However, an interview question that asks stakeholders to discuss any improvements that could be made to the program to better assist students in reaching those intended outcomes will gather formative information. Formative data is different from summative data in that rather than being collected from participants at the end of the project to measure outcomes, formative data is collected and reported back to project staff as the program is taking place. Data gathered for formative evaluation must be reported back to the client in a timely manner. There is little value in formative evaluation when the evaluator does not report such findings to the client until the project is over. Formative feedback can be given through the use of memos, presentations, or even phone calls. The important


role of formative feedback is to identify and address the issues or serious problems in the project. Imagine if the evaluator in our vignette had not reported back formative findings concerning parent communication. How many students might not have been able to participate in the after-school activities? One of the evaluator’s tasks is to identify such program barriers, then inform program staff so that changes can occur. When programs are being implemented for the first time, formative feedback is especially important to developers and staff. Some programs require several years of intense formative feedback to get the kinks out before the program can become highly successful. Formative feedback and the use of that information to change or improve the program constitute one factor that separates program evaluation from most applied research approaches. Classical experimental or quasi-experimental research approaches attempt to control for extraneous variables so that only the independent variable can affect the dependant variable. An important aspect of experimental research is a clear definition of the different treatments. A treatment is something that is given to a group of people that they previously did not have (for example, a computerbased tutoring program for mathematics). If the program itself is the treatment variable, then it must be designed before the study begins. An experimental researcher would consider it disastrous if formative feedback were given, resulting in changes to the treatment in the middle of the study. In contrast, program evaluators, while trying to keep the independent variables or treatment constant, realize that it is better to make modifications to the program—even if it “distorts” the lines of causality—than to deliver a substandard program consistently for the entire duration of the program.

Training in Program Evaluation Many students wonder, How do evaluators get involved in program evaluation? and Where do they receive their training? These are both good questions. Although program evaluation today is certainly a much more recognized field than it was in the past, it is made up of both those who have formal training in program evaluation theory and practice and those who have been less formally trained. There is no specialized degree or certification required for people to call themselves evaluators. Today a number of colleges and universities offer course work in program

3GC01

10/21/2013

10:46:44

Page 11

Internal and External Evaluators

11

evaluation as well as advanced degrees in this area. Although course work will vary by institution, most focuses on quantitative and qualitative methods, program evaluation theory, and ethics, and includes a practicum experience. As in any field, program evaluators come from a wide range of backgrounds and experiences as well as different philosophical and methodological perspectives. Often faculty at colleges and universities serve as program evaluation consultants, working with area school districts, agencies, nonprofit programs, and other institutions of higher education. There are also private evaluation consulting companies that hire program evaluators. Furthermore, public agencies at both the state and federal levels hire program evaluators for full-time positions to conduct internal evaluations in that setting, as well as to conduct single-site and multisite evaluations.

The American Evaluation Association is an international organization devoted to improving evaluation practices and methods, increasing the use of evaluation, promoting evaluation as a profession, and supporting evaluation to generate theory and knowledge. This organization has approximately four thousand members representing fifty states and sixty countries. The association hosts an annual conference in the United States that focuses on a theme, such as collaboration, methodology, or utilization (see www.eval.org/News/news.htm). The association also comprises special interest groups that specialize in certain areas or topics, such as teaching program evaluation or environmental evaluation.

INTERNAL AND EXTERNAL EVALUATORS

internal evaluators: Individuals who are currently part of the program and who will also serve as the program's evaluators.

The proximity of an evaluator to what is being evaluated certainly influences the access to information, the collection of that information, and the reporting and use of that information to promote change. Take, for example, a waiter at a restaurant, whose perspective on the food and the restaurant's management is very different from that of the food critic who comes to dine and to write up a review for the local paper. An evaluator's perspective is similarly shaped by his or her relationship to the setting or program. In the field of program evaluation, this perspective is often accounted for by what are referred to as internal evaluators and external


external evaluators: Evaluators, usually consultants, who are from outside the setting where the program is taking place.

evaluators. An external evaluator is someone from outside the immediate setting who is hired to come in and evaluate the program. Because this person has no obligations to the program except in his or her capacity as evaluator, in theory he or she has no immediate biases for or against the program or any one of the stakeholder groups involved in the project. Most programs that receive federal, state, or foundation funding require an external evaluator to be present. In contrast, many companies, agencies, institutions of higher education, school districts, and other groups also employ internal evaluators. An internal evaluator is typically an employee of the company, agency, or group who is responsible for carrying out duties that pertain to evaluation. For example, many school districts now have a program evaluator on staff. This person is responsible for establishing and working with databases to maintain student academic and behavioral data and using data to assist staff and administrators in improving practice. An internal evaluator might also provide expertise in working with the state testing and accountability data as well as monitor programs the school is currently implementing. There are many strengths inherent in—and many barriers to—the use of both internal and external evaluators. The main reason that many funding agencies require an external evaluator to be present, as mentioned earlier, is to increase the objectivity of the data collection. This objectivity may or may not be achieved, however, and the external evaluator also inevitably will encounter some barriers. External evaluators are often faced with the difficulty of establishing trust with the stakeholders involved in the program they are evaluating. Even though the external evaluator is collecting data on the program and not specifically on the performance of program staff, this stakeholder group may not welcome the evaluator with open arms. Stakeholders may, and often do, see the evaluator as a threat to their livelihood— someone whose job it is to find “holes” in the program. In some cases the stakeholders may feel that the external evaluator “really doesn’t know us” or “doesn’t know what we are all about.” In some cases, they may feel that the evaluator doesn’t know enough about the setting or the context of how things work in that setting to be able to gather in-depth data that pertains to them and is meaningful for evaluation purposes. In many cases,


stakeholders who are uncertain about this evaluator are likely to avoid him or her altogether, not returning phone calls to set up interviews or not returning surveys. It is a daunting and often difficult challenge for even the most seasoned of program evaluators to enter a foreign setting, establish trust with the various groups involved in the program, and provide participants with meaningful data in the interest of programmatic improvements. Internal evaluators typically do not have to deal with gaining the trust of stakeholders as external evaluators do. In addition, internal evaluators know the setting, how to access needed data, and the "language" that each group uses. In some cases both an internal and an external evaluator are retained. If an internal evaluator is already present in a program, then an evaluation plan should encompass the work of both evaluators to optimize the breadth and depth of data collected and, ideally, to ensure the overall success of the program. In such situations, the internal evaluator would be responsible for collecting certain types of data to which the external evaluator would not have access. In turn, the external evaluator would collect additional data to ensure the authenticity and objectivity of the evaluation effort and its findings.

HOW TO USE THIS BOOK

To provide some standardization, a framework was developed and applied to each case study in this book. Box 1.1 presents an overview of the framework sections and a brief explanation of each.

HOW TO USE THIS BOOK To provide some standardization, a framework was developed and applied to each case study in this book. Box 1.1 presents an overview of the framework sections and a brief explanation of each.

BOX 1.1. Overview of the Framework Guiding Each Case Study Presented here are the main sections you will find in each case study, as well as a general description of what you may expect to be covered in each section. Although an attempt has been made to align the case studies with the following sections, such alignment was not always possible due to the cases’ uniqueness and fluidity. The Evaluator In this section the evaluator (or evaluators) is introduced. The role of the evaluator is also discussed here, as well as the evaluator’s


background, education, and connection to the evaluation project as a whole. The Program Here the program being evaluated is described: its purpose, its implementation, and relevant stakeholders and participants. In addition, where possible, the goals and objectives of the program as well as the program’s structure and design are presented. The Evaluation Plan Here the evaluator’s evaluation plan is discussed in as much detail as possible. This discussion includes, for example, the objectives driving the evaluation and the methods and tools the evaluator used or planned to use to conduct the evaluation. Summary of Evaluation Activities and Findings This section describes the data collection process of the evaluation and provides a summary or overview of any evaluation findings. In each of the cases, the evaluator is usually presented with a dilemma or situation at the end of this section. Final Thoughts This section provides the reader with a conclusion: what really happened at the end of the evaluation, how the evaluator handled the dilemma, and the results of those actions for the evaluator and the project as a whole.

benchmarks: Specific outcomes that define the success or worth of a program.

As you can see, there are many different approaches to conducting an evaluation of a program. It should be noted that although the objectives-based approach is not the sole approach for conducting an evaluation, because of the requirements for securing federal and state funding and the focus on meeting goals and benchmarks in today’s climate of accountability, it is, generally speaking, the most widely used approach. In addition, an objectives-based evaluation is most likely to be the first type of evaluation that a new evaluator just entering the trade will be exposed to and have to conduct. Therefore, most of the case studies presented in this book follow a more objectives-based approach.


The following sections present some additional resources and readings to assist those who are relatively new to program evaluation and to more clearly delineate some of the activities and concepts overviewed and described in each case study.

THE EVALUATION OBJECTIVE

In an objectives-based approach, the evaluation objective is the cornerstone of conducting a rigorous and successful evaluation project. Evaluation objectives are written goals according to which the evaluation data will be collected and reported. Box 1.2 presents a list of evaluation objectives used in evaluating the summer camp project. For example, the evaluation objectives that follow were developed to evaluate a summer camp for students. The camp was designed to provide students with enrichment during the summer months. Research has shown that many school-age children lose a significant amount of knowledge and skills during summer vacation. This is particularly true for students who are unable to participate in enriching experiences while out of school.

BOX 1.2. Evaluation Objectives for the Summer Camp Project

Objective 1: To document stakeholder perceptions as to the purpose of the camp
Objective 2: To document activities conducted during camp
Objective 3: To document stakeholder perceptions of the lessons learned and the strengths and challenges of the camp
Objective 4: To document student outcomes as a result of participating in the camp
Objective 5: To document modifications made to programming based on the previous year's evaluation recommendations

evaluation objective: A clear description of a goal used by the evaluator to judge the worth or merit of a program.


The typical evaluation has four or five main evaluation objectives. Specific data is collected to answer or address each evaluation objective. For many grant-funded projects, evaluation objectives are already established and clearly defined in the grant. In such cases, an evaluator must work with the established objectives and begin to develop an evaluation matrix (see the following subsection). For projects with no preestablished evaluation objectives, however, the evaluator must play a significant role in their development. Developing evaluation objectives in a collaborative setting can be a useful practice for an evaluator. To both build trust and gain buy-in from the different stakeholder groups (such as teachers, staff, administrators, and parents, in a school setting), it is helpful to gather representatives from all parties for a discussion about the goals of the project and what outcomes or results they believe a program such as this should produce. It should also be noted here that evaluation objectives are not static; they can change over time. There may be objectives deemed important in the very beginning of a multiyear evaluation that are not emphasized at the end of the project. Typically, formative evaluation objectives (discussed shortly) are emphasized in the early stages of the evaluation timeline, and summative evaluation objectives (also discussed shortly) take on a more prominent role toward the end of the project. No matter what objectives and timelines are being used, it is imperative that evaluation objectives be aligned with the goals and activities of the project being evaluated. For example, let’s say that the main focus of a summer enrichment program is literacy. As part of the program’s activities, students or campers keep journals, work with local storytellers to author their own stories, and receive tutoring or interventions in literacy. Project developers and staff hope that students will, from this experience, become more interested in reading and literacy as a whole and that this enthusiasm will eventually flow over into students’ increasing their performance on some standardized reading measure that they will take at a later point. From this single program component, two evaluation objectives could potentially be developed, such as the following: ■

To document an increase in students’ interest and frequency of engaging in reading and other literacy-based activities. Data for this evaluation objective could be collected through

3GC01

10/21/2013

10:46:44

Page 17

The Evaluation Objective

pre-post interviews with students documenting whether they believe their interest in and frequency of such practices have increased over time as a result of participating in the project. Supporting evidence could also be collected from parents, who may be observing their child reading more books at night, taking more books out of the library, talking about the book he or she is reading at dinner, and so on. An analysis of students’ journals, the lists of books they have completed, book reports, and so on could serve as additional evidence to support these claims. The second objective could focus on more “hard” or end outcomes (such as test scores). A discussion of end outcomes is presented later in this section. ■

■ To document increases in student performance on a standardized reading measure administered annually. This objective would require the evaluator to obtain student scores on the annual measure to determine whether there appears to be any relationship between student participation in the program and score increases on the assessment.

Evaluation objectives will vary somewhat depending on the program. However, there are some general categories under which all objectives can fall, as described in Box 1.3.

BOX 1.3. General Categories of Evaluation Objectives (Example for an After-School Program)

Documenting activities. Objectives such as these work toward documenting what the program "looks like" by describing what activities take place. Data for these types of objectives can be gathered through interviews, focus groups, or surveys (see the subsection "Tools for Collecting Data" later in this chapter), and through direct observations of program activities.

Documenting program implementation. These objectives focus on documenting processes associated with program startup and basic program implementation. As part of this effort the evaluator would be interested in documenting strengths in as well as barriers to program implementation. For example, one barrier the evaluator could discover might be that there isn't enough busing available for everyone who wants to attend field trips. Barriers that have a severe impact on the quality of the programming (such as an instructor's not using the correct curriculum) should be documented and fed back immediately to the project directors so the problem can be corrected in a timely manner. Safety concerns constitute another barrier that requires immediate feedback. Again, evaluation in which information is presented to staff in a timely fashion is formative. Because of their timely nature, formative evaluation findings are often reported to program staff through the use of memorandum reports and presentations. These presentations can be done at the project's weekly or monthly meetings.

Documenting outputs of activities. These objectives focus on outputs or changes that occur as a result of some activity. These changes tend to be associated with what people believe or how they perform or act. For example, if program staff attended a seminar on working with at-risk students, and their beliefs about poverty changed or they changed some aspect of their instruction as a result of engaging in this activity, this would qualify as a finding that would meet an objective in this category. Data for these types of objectives can be gathered through interviews and surveys (Rea & Parker, 2005). Before using a survey to document these outputs, the evaluator should allow some time to pass after participants attended the seminar, giving them time to return to the classroom. For an example of an objective pertaining to the outputs of an activity, see the first of the previous set of two example objectives, "To document an increase in students' interest and frequency of engaging in reading and other literacy-based activities."

Documenting end outcomes. These objectives focus on documenting changes in the participants themselves. In after-school and enrichment programs these end outcomes are often referred to as hard outcomes—that is, outcomes that are measured with a standardized assessment; for example, changes in students' reading, math, or science scores on a standardized measure are considered to be end outcomes. A decrease in the number of violent incidents, an increase in student attendance, and an increase in student course work grades could also be used to satisfy end outcome evaluation objectives.


DESIGNING AND DEVELOPING AN EVALUATION MATRIX

One of the first activities to be conducted during the planning of the evaluation is the development of an evaluation matrix. The matrix serves as a blueprint to guide the evaluator and to ensure that all necessary data is collected. Table 1.1 presents an example of a matrix used to evaluate the summer camp project.

TABLE 1.1. Evaluation Matrix for the Summer Camp Project

Evaluation objective 1: To document the depth and breadth of activities provided during the follow-up session (2004–2005)
  Stakeholders: Faculty, project directors, and campers; Tools used to collect data: Interviews; When: July; Purpose: Summative

Evaluation objective 2: To document student satisfaction with the follow-up activities
  Stakeholders: Students; Tools used to collect data: Interviews and observations; When: March–April; Purpose: Summative
  Stakeholders: Parents; Tools used to collect data: Postsurveys; When: May or June; Purpose: Summative

Evaluation objective 3: To document faculty perceptions of the follow-up activities
  Stakeholders: Faculty and project directors; Tools used to collect data: Interviews; When: March–April; Purpose: Summative

Evaluation objective 4: To document parent perceptions of student outcomes from participating in camp and follow-up activities
  Stakeholders: Parents; Tools used to collect data: Surveys; When: March–April; Purpose: Summative

Evaluation objective 5: To document changes in student learning and abilities
  Stakeholders: Students; Tools used to collect data: Word knowledge assessments; When: March 5 (post); Purpose: Summative

Although each project will have its own unique evaluation objectives, the basic components essential to all evaluations are the same. Notice in the example matrix shown that the evaluation is being guided by five individual objectives. Notice also that the matrix contains the timeline detailing when the data will be collected and the methods and measurement tools the evaluator intends to use for data collection, and that it specifies whether the data is summative (findings presented at the end of the project) or formative (findings presented as the project is occurring). The more detail the evaluator can present in the matrix, the easier it will be to carry out the overall evaluation. Most evaluators use some sort of matrix, even though it may not be spelled out as formally as the one in the table.

In addition to helping organize the evaluation, the evaluation matrix is also a wonderful tool for building trust with the various stakeholder groups involved in the project. To build this trust, the evaluator may have early discussions with representatives from individual stakeholder groups (such as teachers, parents, and staff) about the data collection process and the kinds of information that stakeholders perceive as important and useful. It is recommended that the evaluator incorporate assistance and feedback from all stakeholders into the building of the evaluation matrix before data is collected. Keep in mind that on a multiyear project, the matrix and data collection activities are likely to change slightly as new objectives are added to the evaluation plan and old objectives that have been met and no longer need to be monitored are removed.
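For evaluators who track the matrix electronically, the same blueprint can be represented as a small data structure and checked for completeness before data collection begins. The sketch below is purely illustrative and is not part of the evaluation methodology itself; the field names and the two sample rows are assumptions modeled loosely on Table 1.1.

```python
# A minimal sketch, assuming the matrix is kept as a list of records.
# Field names and sample rows are illustrative only.
from dataclasses import dataclass

@dataclass
class MatrixRow:
    objective: str     # the evaluation objective the row supports
    stakeholders: str  # whom the data will come from
    tool: str          # how the data will be collected
    when: str          # timeline for collection
    purpose: str       # "formative" or "summative"

matrix = [
    MatrixRow("Objective 1: depth and breadth of follow-up activities",
              "Faculty, project directors, and campers", "Interviews", "July", "summative"),
    MatrixRow("Objective 2: student satisfaction with follow-up activities",
              "Students", "Interviews and observations", "March-April", "summative"),
]

# Completeness check: flag any row that leaves a cell of the blueprint blank.
for row in matrix:
    missing = [field for field in ("objective", "stakeholders", "tool", "when", "purpose")
               if not getattr(row, field).strip()]
    if missing:
        print(f"Incomplete matrix row ({row.objective}): missing {', '.join(missing)}")
```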

DATA COLLECTION

As specified in the evaluation matrix, the tools that the evaluator uses to collect data will vary depending on several factors, including the size of the stakeholder groups, the education or developmental level of the stakeholder group, and the evaluator's access to the stakeholder group. This section presents a few of the basic tools commonly used by evaluators and typical methodologies used for evaluations.

Data Sources

The survey or self-report measure is perhaps the most common data collection tool used by program evaluators. One reason this tool is so popular is the overall ease with which such a survey can be administered. Surveys are usually administered through a mail-out, mail-back procedure; however, in some cases they may be collected on-site, typically following an activity, such as a workshop or an information session. Surveys can be administered across multiple groups involved in a program. Keep in mind that the wording of items may need to be modified slightly for the different groups. The following is a list of stakeholders that the evaluator may want to consider surveying when conducting an evaluation of an after-school, enrichment-oriented, or summer program:

■ Parents and guardians
■ Project administrators
■ Project staff
■ Community members, volunteers, and senior citizens
■ Students
■ Presenters and service providers

Designing a Survey

When designing a survey it is important that its final form be piloted or field tested prior to being sent out, to ensure that there are no errors in the survey that would keep participants from being able to properly complete it. In addition, it is important to be aware of possible language or reading ability barriers for those being surveyed. Pretesting the survey with a handful of those participants should give the evaluator an accurate idea of how the survey will perform when administered to the entire stakeholder group (Rea & Parker, 2005). Exhibit 1.1 presents a survey designed to gather information from parents and guardians of the students participating in the summer camp project. The survey was specifically developed to address multiple evaluation objectives.


EXHIBIT 1.1. Parent or Guardian Perception Survey—Summer Camp

PLEASE RETURN by July 30

As part of the effort to evaluate the summer camp, the following survey has been designed to gather your perceptions regarding the activities associated with the camp. The information you provide will assist us in delivering important formative feedback to program coordinators and to the granting agency, as well as help us meet the intended objectives and outcomes of the overall project. Your responses are confidential and will not be shared with anyone in any way that identifies you as an individual. Only aggregated data will be presented in the final evaluation report. Your participation in this survey process is completely voluntary and will not have an impact on your child's future attendance in the program. Your time and cooperation are greatly appreciated. If you have any questions about this survey or the overall process, please contact Dr. Dean T. Spaulding, Assistant Professor, Department of Educational Psychology, College of Saint Rose, Albany, NY 12203, (XXX) XXX-XXXX.

Perceptions of Recruitment

The following items seek to gather your perceptions of the recruitment process for summer camp. Please read each item carefully and use the scale that follows to show your level of agreement with each item. The last, open-ended item seeks to gather more in-depth information from you.

1=Strongly Disagree 2=Disagree 3=Slightly Disagree 4=Slightly Agree 5=Agree 6=Strongly Agree

I was provided with camp information in a timely fashion.

1 2 3 4 5 6

The program brochure provided me with a way to get additional information prior to enrollment.

1 2 3 4 5 6

I found the enrollment process to be easy.

1 2 3 4 5 6

How did you hear about camp? ____________________________________

Perceptions of Orientation

The following items are designed to gather your perceptions about the orientation process for summer camp. Please read each item carefully and use the scale that follows to show your level of agreement with each item.

1=Strongly Disagree 2=Disagree 3=Slightly Disagree 4=Slightly Agree 5=Agree 6=Strongly Agree

I believe the check-in process at orientation was well organized.

1 2 3 4 5 6


I left orientation feeling confident that my child was in good hands.

1 2 3 4 5 6

I believe dinner at orientation allowed me to meet the counselors and teachers my child would be working with.

1 2 3 4 5 6

I think having dinner with my child at orientation allowed me to be included in the camp experience.

1 2 3 4 5 6

The information session at orientation provided me with a clear understanding of what my child would be doing at summer camp.

1 2 3 4 5 6

I was encouraged to participate in camp activities throughout the ten-day program.

1 2 3 4 5 6

I was provided with contact numbers and information.

1 2 3 4 5 6

I was provided with enough information so I could attend camp activities and field trips.

1 2 3 4 5 6

The food was appropriate for children.

1 2 3 4 5 6

I enjoyed the Hudson River Rambler performance.

1 2 3 4 5 6

If you went to the dorms with your child either on orientation night or during a later visit, please answer the next three questions:

I left the dorm feeling my child was in a safe place.

1 2 3 4 5 6

I felt the dorm was clean.

1 2 3 4 5 6

I felt the dorm would be a comfortable place for my child.

1 2 3 4 5 6

Perceptions of Parent Involvement During Camp

If you participated in the following activities, indicate your participation with a ✓:

Date | Breakfast | A.M. Session | Lunch | P.M. Session | Dinner | Field Trip
Monday 7/5 | | | | | |
Tuesday 7/6 | | | | | |
Wednesday 7/7 | | | | | |
Thursday 7/8 | | | | | |
Friday 7/9 | | | | | |
Saturday 7/10 | | | | | |
Sunday 7/11 | | | | | |
Monday 7/12 | | | | | |
Tuesday 7/13 | | | | | |
Wednesday 7/14 | | | | | |
Thursday 7/15 | | | | | |


If you did not participate in any or all of the activities just listed, please circle your reason (circle all that apply):
A. I did not have transportation.
B. I had other child care needs.
C. I had work conflicts.
D. I thought I would have to pay to participate.
E. I was not interested.
F. I did not know I could participate.
G. Other: _____________________________________________________

Reflections on Camp

From what you have heard or observed from your child, what did your child like about summer camp? (check all that apply)
___Food  ___Counselors  ___Field trips  ___Speakers and guest lecturers
___Other campers  ___Night activities  ___Campers' cameras  ___Dorm room
___Final presentations  ___Class time  ___Teachers and professors  ___Working on the computers
Other (please explain): ____________________________________________

From what you have heard or observed from your child, what didn't your child like about summer camp? (check all that apply)
___Food  ___Counselors  ___Field trips  ___Speakers and guest lecturers
___Other campers  ___Night activities  ___Campers' cameras  ___Dorm room
___Final presentations  ___Class time  ___Teachers and professors  ___Working on the computers
Other (please explain): ____________________________________________

The following items seek to gather your perceptions about the outcomes of your child's participation in summer camp. Please read each item carefully and use the scale that follows to show your level of agreement with each item.


1=Strongly Disagree 2=Disagree 3=Slightly Disagree 4=Slightly Agree 5=Agree 6=Strongly Agree

I believe my child wants to come back to camp.

1 2 3 4 5 6

My expectations of camp were met.

1 2 3 4 5 6

I believe my child’s expectations of camp were met.

1 2 3 4 5 6

Perceptions of the Impact on Academics and School

The following items are designed to gather your perceptions about the possible impact attending camp may have on your child's academics and school-related work in the upcoming school year. Please read each item carefully and use the scale that follows to indicate your level of agreement with each item.

1=Strongly Disagree 2=Disagree 3=Slightly Disagree 4=Slightly Agree 5=Agree 6=Strongly Agree

I believe that this camp experience will help my child in school.

1 2 3 4 5 6

My child has been continuing activities experienced at camp.

1 2 3 4 5 6

I have noticed improvement in the way my child interacts with other children.

1 2 3 4 5 6

I plan to attend the follow-up sessions with my child.

1 2 3 4 5 6

I would be willing to send my child to summer camp next year.

1 2 3 4 5 6

I would recommend summer camp to other parents.

1 2 3 4 5 6

Demographic Items (Optional)

About you (check or fill in all appropriate items):
School district: _____________________
Grade level (fall 2003): _______________
Child's age: ____________
Child's gender: ______ Male ______ Female
Did your child attend camp last year? ______ Yes ______ No
What is the total number of members within the household? ______
Number of children: ______   Number of adults: ______
Which camp did your child participate in? ______ Storytelling ______ American history ______ Don't know
Which residence hall did your child live in? ______ Fontebonne ______ Charter ______ McGinn ______ Don't know

PLEASE PROVIDE ANY ADDITIONAL COMMENTS:
_______________________________________________________________


Scales for Collecting Data Through Surveys

A successful survey asks for only needed information and is easy and quick to complete. A survey that is too general and appears to be asking questions that have little or nothing to do with the project will quickly be dismissed by those who are expected to fill it out. A survey should collect only data that is essential for the evaluator in completing the evaluation of the project. In addition, the evaluator should know exactly which questions or items on the survey are aligned with which objectives. For example, an evaluator should know that items 4 through 14 will answer evaluation objective 1, items 15 through 26 will address objective 2, and so on. Planning in such detail will ensure that only the needed information is collected. The following are a few common scales and approaches that can be used to solicit information from participants.

Likert scales. These scales are commonly used in surveys (see Exhibit 1.1). Respondents are presented with complete statements (for example, "I found the program increased students' interest in reading") and use an agreement scale to indicate their beliefs, selecting the number that best represents how they feel. Here is an example of a Likert scale:

1=Strongly Disagree 2=Disagree 3=Slightly Disagree 4=Slightly Agree 5=Agree 6=Strongly Agree

Checklists. A checklist is essentially a list of possible answers that respondents check off if applicable, and it represents an easy way to gather broad information from participants. Although constructing a checklist is not difficult, generating such breadth of items can sometimes pose a challenge, especially if the evaluator is not fully aware of all the possible answers that would be appropriate. Sometimes conducting a few initial interviews with members from stakeholder groups can help the evaluator expand the checklist to ensure that it gathers valid data. It is also advisable to include an "Other" category at the end of each checklist, thus allowing respondents to write a response that was not listed. (See Exhibit 1.1 for examples of checklists.)

Open-ended or free response items. These items ask an open-ended question and expect respondents to give a detailed answer. Unlike the other methods just described, open-ended items allow the respondents to describe "how" and "what" in much more depth. In constructing a survey it is important, however, not to overuse open-ended questions. Too many open-ended items on a survey can deter participants from filling it out. As part of using open-ended questions appropriately, data derived from them should be linked directly to answering evaluation objectives, and the evaluator should avoid putting open-ended items at the end of a survey just to fill in any extra blank space.

Demographics sections. A demographics section can be placed at the beginning or end of a survey to gather personal information about the participants. The information requested can vary widely depending on the purpose of the project. The survey in Exhibit 1.1 has limited demographics; additional possibilities include the respondent's gender, age, marital status, years employed in current position, education level, and annual income.
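Because every survey item should be aligned with a specific evaluation objective, some evaluators keep that alignment explicit when they tabulate Likert responses. The sketch below is a hypothetical illustration only: the item numbers, objective labels, and six-point ratings are invented for the example, and it simply shows one way to group ratings by objective and report an average level of agreement.

```python
# Hypothetical sketch: grouping six-point Likert ratings by the evaluation
# objective that each survey item addresses (item numbers are invented).
from statistics import mean

item_to_objective = {4: "Objective 1", 5: "Objective 1", 6: "Objective 1",
                     15: "Objective 2", 16: "Objective 2"}

# Each dict is one respondent's ratings, keyed by item number
# (1 = Strongly Disagree ... 6 = Strongly Agree).
responses = [
    {4: 5, 5: 6, 6: 4, 15: 3, 16: 4},
    {4: 6, 5: 5, 6: 5, 15: 2, 16: 3},
]

ratings_by_objective = {}
for respondent in responses:
    for item, rating in respondent.items():
        objective = item_to_objective.get(item)
        if objective is not None:
            ratings_by_objective.setdefault(objective, []).append(rating)

for objective, ratings in sorted(ratings_by_objective.items()):
    print(f"{objective}: mean agreement {mean(ratings):.2f} over {len(ratings)} ratings")
```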

One-to-One Interviews

Although many of us probably have some idea of how interviews are conducted, we may not realize that they involve more than simply asking questions of someone and writing down his or her responses. A successful interview requires proper advance planning. The evaluator needs to establish the time and location and develop a list of questions, often called the interview protocol. Typically, an interview protocol contains no more than six to eight open-ended questions. Interviewing with such a list should take about an hour, depending on the project and the level of detail that is needed. As with the other tools, questions from the interview protocol must also be linked to specific evaluation objectives (Kvale & Brinkman, 2008).

Aside from developing six to eight broad questions, the evaluator may also want to develop subquestions, or probes. Probes help ensure that the evaluator is addressing specific information within the larger context of the questioning process. One of the benefits of using an interview protocol in conducting multiple interviews is that the protocol helps standardize the process, so everyone is asked the exact same questions, word for word. Exhibit 1.2 presents an example of an interview protocol that was used to interview camp instructors in the evaluation of the summer camp. Questions 3 and 7a provide examples of subquestions or probes.

interview protocol: A list or series of open-ended questions used to collect in-depth information

probes: Specific questions highlighted on an interview protocol

focus group: A small group of people, guided by a group leader, assembled to discuss an issue or topic in depth


EXHIBIT 1.2. Interview Protocol for the Summer Camp Project

1. What was the purpose of the follow-up sessions?
2a. What was the overall process for developing the follow-up sessions?
2b. How does that extend and support the curriculum delivered at the summer camp?
3. Describe the activities used in the follow-up sessions. Which of these did you find the campers were most and least engaged in?
4. What do you see as the main learning objectives of the activities?
5. Overall, have the learning objectives been met? If so, how?
6. What changes would you make to the curriculum for next year's follow-ups?
7a. What changes have you seen, if any, in these students in the time you have been working with them?
    • As a group?
    • On an individual student basis?
7b. What other possible changes in student performance could you expect to see as a result of students' participating in this experience?
8. What do you see as the Saturday follow-up's strengths?
9. What do you see as challenges?
10a. Has your experience in developing and implementing the curriculum for camp and the follow-up sessions changed how you think about or develop curriculum for your college classes?
10b. Has it changed how you instruct others to teach this population?
11. What are some of the lessons you have learned from this experience?

Another method of collecting data from stakeholders, the focus group, is very similar to one-to-one interviews. To conduct a focus group, the evaluator first develops a protocol—a series of open-ended questions; however, instead of asking them of an individual, the evaluator poses the questions to a group of stakeholders for discussion. The advantage of this technique is that often the conversations will get much deeper because of the different perspectives of the assembled individuals.

When conducting a focus group, it is important that the evaluator set ground rules beforehand to make sure that all participants respect each other, even if their views on the situation are very different. At least two evaluators should be present when conducting a focus group: one to ask the questions and the other to take notes.

A video or an audio recording device can be used during both interviews and focus groups. This will help ensure the accuracy of the data being collected by allowing the evaluator to add further detail and quotes that might not otherwise have been recorded. If the evaluator is planning to use such a device, it is important that those being interviewed are informed and agree, both off and on tape (Kvale & Brinkman, 2008).

Alternative Forms of Data

In addition to using surveys and interview protocols, evaluators are always seeking creative ways to collect different kinds of data. Often, when working with school-age children, evaluators will have the students keep a journal about their experiences with the project. When considering using journals as a source of data, it is important—especially with middle school students—to provide some sort of structure for their journal entries. One way to do this is to provide daily or weekly themes or even questions to which students must respond in writing. In addition, the evaluator should make it quite clear that students' journals are going to be collected and read as part of the evaluation.

Photography is another excellent method of collecting data. An evaluator who wishes to use photography as an alternative data collection method has several options. First, the evaluator can choose to either be the photographer and photograph students engaging in activities or allow the students to be the photographers. During the summer camp program, campers were each given a disposable camera and asked to photograph things that they liked or didn't like about camp. Over the course of the next ten days, campers took lots of pictures during field trips, class time, and free time. Later the photographs were developed, and evaluators interviewed students, using their photographs as prompts to further the conversation.

Archival Data

Program evaluators often find themselves at some point using archival data, which is data that has already been collected by someone other than the evaluator.

archival data: Data that has been collected by some person or group other than the evaluator


In education, evaluators often have to use student achievement data, which may be gathered annually through the state's testing system. But archival data does not have to be obtained through standardized assessments alone. Student quarterly school report card data, records of office referrals and suspensions, and even observation notes on student performance taken by a teacher during a classroom activity could all fall under the heading of archival data.

Evaluators might also use archival data to determine how things were or how students were performing before the program being evaluated was put into place. The information used in this instance is sometimes referred to as baseline data, or data that is collected consistently over a period of time. The evaluator can examine this baseline data and use it to discern or show a pattern. Then later the evaluator will gather new data once the program is in place and examine this information to see if there is any change or shift in that pattern. If a change is discovered, the evaluator will suggest that this outcome is due (in part) to the program and will recommend that the programming continue as is.

Although archival data may sound ideal for the busy evaluator, it is important to note that like any other type of data, archival data does not come without its challenges. One challenge evaluators face when using archival data is that they did not directly collect it, and therefore cannot know for sure how accurate the information is. Although standardized assessments administered by the state would have testing procedures guiding them, for example, the evaluator cannot be sure that these procedures were followed exactly during the testing or that there wasn't variability across state assessments from year to year. The same is true for archival data that is less standardized in nature.

Take, for example, students' being referred by the classroom teacher to the principal's office. Let's say that the evaluator uses the school's archival data to determine the average number of students by grade level referred to the principal's office for each quarter. The evaluator might go back several years into the archival data to gather enough data points to establish a pattern. Let's say that for the last three years, however, a new principal at the school put into place new criteria for teachers' sending students to the office. As part of these new criteria, teachers can no longer automatically send students to the office for misbehavior. Teachers must now counsel the student, and after three warnings send the student to the office.


As you probably recognize, this would dramatically reduce the number of office referrals and show a decrease in referrals that doesn’t really reflect student behavior. Because this new procedure would have been implemented over several years, it would naturally yield a pattern of lower-than-expected office referrals, even though student behavior had not necessarily improved. Because of the challenges with archival data, it is important that an evaluator not use archival data exclusively when conducting an evaluation. If one is using archival data, it is important to juxtapose it with rigorous interview data, survey data, and observation data to determine whether the archival data contains any inconsistencies.
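When archival records such as office referrals are used this way, the underlying analysis is a simple before-and-after comparison of the baseline pattern. The sketch below is a hypothetical illustration of that comparison; the quarterly counts are invented, and, as the referral example above warns, a real analysis would also have to rule out explanations such as a change in referral policy.

```python
# Hypothetical sketch: comparing a baseline pattern in archival data (quarterly
# office referrals before the program) with the pattern once the program is in place.
from statistics import mean

baseline_referrals = [42, 38, 45, 40, 41, 39, 44, 37]  # invented pre-program quarters
program_referrals = [33, 30, 28, 31]                    # invented post-program quarters

baseline_avg = mean(baseline_referrals)
program_avg = mean(program_referrals)
percent_change = (program_avg - baseline_avg) / baseline_avg * 100

print(f"Baseline average: {baseline_avg:.1f} referrals per quarter")
print(f"Program-period average: {program_avg:.1f} referrals per quarter")
print(f"Change: {percent_change:+.1f}% (a shift could also reflect a new referral "
      "policy rather than a change in student behavior)")
```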

TRIANGULATION OF DATA

Triangulation is a term used to describe a data analysis technique whereby three or more different types of data are collected and analyzed together. It is sometimes referred to as cross-referencing. The idea behind triangulation is that coming to the same conclusions using three different types of data helps ensure that the findings are indeed accurate. The concern is that with only one type of data, an evaluator might come to incorrect conclusions—a problem that triangulation helps to alleviate.

For example, an evaluator may send out surveys to teachers to gather information about their recent participation in a three-day professional development program. In addition to gathering quantitative data from surveys, the evaluator also may have observed the three-day professional development program, taking in-depth notes (qualitative), and then conducted interviews with teachers afterward (qualitative). In doing so, the evaluator is trying to ensure that the findings are valid or accurate, and that stakeholder responses on the surveys are similar in nature to those in the interviews and supported by his or her direct observations. Triangulation of data may not always be possible, but when it is, evaluators should consider using this method to increase their confidence in evaluation findings.
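In practice, triangulation often comes down to asking whether each source points in the same direction on the same question before a finding is written up. The sketch below is only a schematic illustration of that cross-check; the three source judgments are invented placeholders standing in for summarized survey, interview, and observation data.

```python
# Schematic sketch of triangulation: report a finding as well supported only when
# the three independent data sources agree. The judgments below are invented.
sources = {
    "survey":      "positive",  # e.g., mean agreement above the scale midpoint
    "interviews":  "positive",  # e.g., most interviewees describe improvement
    "observation": "mixed",     # e.g., field notes show uneven implementation
}

judgments = set(sources.values())
if judgments == {"positive"}:
    conclusion = "Finding supported by all three sources."
elif {"positive", "negative"} <= judgments:
    conclusion = "Sources conflict; gather more data before reporting a finding."
else:
    conclusion = "Partial agreement; report the finding with caveats and note the weaker source."

for name, judgment in sources.items():
    print(f"{name:12s}: {judgment}")
print(conclusion)
```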

WRITING THE EVALUATION REPORT

There is no one way to construct an evaluation report, but there are some general guidelines.

triangulation: A process whereby the evaluator takes into consideration three different types of information (observation data, survey data, and interview data about the effectiveness of a program) and brings them together to examine an issue


Typically, summative evaluation reports are written and presented at the end of each project year. In some cases, a midyear project status report is required. As the evaluator, you should determine whether such a midyear report is needed and plan accordingly. The following are the basic sections of an evaluation report:

Cover page. This should contain the title of the project, the evaluator's name and credentials, the client or name of the organization that commissioned the report, and the date or time of year that the report is being submitted (for example, summer 2005).

Executive summary. For short reports, an executive summary is not necessary. Typically, an executive summary runs one or two pages and provides a short purpose and methodology for the report, the essential main findings, a conclusion, and recommendations, if appropriate. Often administrators use the executive summary as a stand-alone document to highlight key findings at meetings, media events, and the like.

Introduction. A two- to three-paragraph introduction is a good way to set the stage for the evaluation report and to explain how the project came to be. In addition, the introduction should contain the overall purpose of the evaluation, the name of the client or organization for which the report has been written, and both the project goals and the evaluation objectives.

Methods. In this section the evaluator presents an overview of the different types of tools that were developed, when they were administered, what kinds of data were collected, and the sources for the data.

Body of the report. The body of the report contains the analyzed data and findings from the evaluation. It is best to start off each new objective on a new page. First, the evaluation objective should be restated, followed by another short description of what tools were used as well as what kinds of data were collected, and from whom. Following this, the evaluator will want to report the summarized data in a table (or in a figure or in bulleted form). The evaluator will then include an evaluation finding or findings based on this information. These evaluation findings generally include an overall theme or summary of the data being presented. Additional data that supports the main data and findings can be presented in bullet form under the main table (see Exhibit 1.3, an example of the body of a report from the summer camp evaluation).


EXHIBIT 1.3. Overview of an Evaluation Objective and Findings

Objective 3: To document stakeholder perceptions of the lessons learned and the strengths and challenges of the camp.

The purpose of this objective was to document stakeholder perceptions of both lessons learned from the experience and the strengths of and barriers to the camp. To meet this objective, qualitative data was gathered via either one-to-one interviews or focus groups. Parent perception data was provided through open-ended questions on the survey.

Finding: All stakeholders reported that maintaining friendships and becoming motivated to learn and build skills were the strengths of participating in the experience; lack of full participation and inconsistent attendance at camp were noted by some to be barriers. Table 1.2 presents these findings by stakeholder category.

TABLE 1.2. Stakeholder Perceptions of Strengths of and Barriers to Camp

Program directors
Strengths: Continued friendships, making new friends; exposure to students from different schools and backgrounds; students' continuing to learn and refine skills learned from previous lessons
Barriers: Tutoring sessions occurring at the same time; families' moving; lack of contact information; transportation problems; conflicts with other school or family obligations
Suggestions: Seeing if tutoring could come before or after; better integration

Camp instructors
Strengths: Continued friendships, making new friends; exposure to students from different schools and backgrounds; students' continuing to learn and refine skills learned from previous lessons
Barriers: Only 50 percent of students' attending; month of time between each session (too long); difficulty keeping students on target with learning between sessions; inconsistency with student attendance, difficulty providing continuity
Suggestions: Linking students together via the Internet or Blackboard

Campers
Strengths: Seeing friends and staying in touch; learning more about history and storytelling; improving computer skills
Barriers: Overly short sessions; not all students' attending
Suggestions: Longer sessions; mandatory attendance

Parents
Strengths: Continued learning and growing; friends
Barriers: Family or school obligations on the same day; need for sessions to be known about in advance so planning could occur
Suggestions: Saturday morning sessions

Finding: Camp instructors noted that this experience has benefited their own pedagogy and teaching at the college level, as follows:

• Examination of qualitative data revealed that camp instructors noted several areas in which their work serving as instructors for camp has benefited them or changed how they think about or deliver instruction at the college level. More specifically, instructors noted that because of this experience they have tried to do more with interactive activities in their college classroom and have seen how effective such practices are when teaching an adult population.

• Camp instructors reported that this experience has also changed how they instruct others to work with urban at-risk youth. More specifically, instructors have gained this insight: it is important to stress to preservice teachers that when instructing students from these backgrounds they should allow for extra time to start an activity, as it takes these students a little more time to get into the activity.


Finding: Stakeholders noted several areas in which changes could be made to next year’s programming in relation to the follow-up sessions:

• Address the issue of low attendance at follow-ups. During the initial greeting of parents at the summer camp, the follow-up sessions will be stressed, as well as their function to support and extend the work and learning that have occurred at summer camp. Parents will be reminded of these sessions at the closing of camp, and perhaps via a notice sent out at the beginning of the school year.

• Offer an incentive for students to attend the follow-up sessions. Stakeholders believed that offering some type of incentive to students for completing the follow-up sessions would greatly help to increase the low attendance and to decrease inconsistencies in attendance that occurred with this year’s sessions.

• Combine sessions. Another area to be addressed is the time constraint with the current three-hour sessions. Stakeholders noted that combining two months of sessions would allow for a half- or quarter-day field trip to a museum or other appropriate educational venue.

• Increase parent involvement. Stakeholders also noted the need for more parent involvement in the follow-up sessions; they believed that field trips could be used as a way to get more parents involved.

DISSEMINATION AND USE OF EVALUATION FINDINGS

It is the role and responsibility of the evaluator to deliver the evaluation report on time to the client or agency that has directly commissioned the work to be conducted. In the case of summer enrichment programs, the client is most likely to be an administrator or a project director (or both). In most cases it is the responsibility of the administrator or project director to submit the final evaluation report to any relevant funding agency. Even if an evaluator has established trust and a positive relationship with a particular stakeholder group (such as parents), he or she cannot give the evaluation report to the group without the express permission of the client. Once the client has reviewed the report and made comments to the evaluator, the client will disseminate the report to whichever groups he or she feels should receive it. In some cases the client may wish to have the evaluator present the key findings from the executive summary at an upcoming project meeting and field any questions that stakeholders might have.

The appropriate use of evaluation findings and recommendations is key to a successful evaluation project. Ideally, throughout the process the evaluator has established a professional degree of trust among the stakeholders with whom he or she has been working. One of the silent roles of the evaluator is to present evaluation findings and recommendations to the client in such a way as to make change occur. The role of the evaluator does not stop with the delivery of the report and recommendations. The evaluator should work with the client to address the issues requiring further attention, and to continue to gather and feed data back to the client until those issues are resolved. One way an evaluator can monitor progress toward meeting the recommendations for the project is to build this activity into an evaluation objective. As part of the evaluation of the summer camp, the evaluation team did just that: they built in a specific objective that focused on the project staff's ability to address limitations or concerns within the project. At the end of the camp, all areas of concern had been successfully addressed. Exhibit 1.4 presents this objective.

EXHIBIT 1.4. Example of an Evaluation Objective and Finding Focused on Program Modifications

Objective 5: To document modifications made to programming based on the previous year's evaluation recommendations.

The purpose of this objective was to document any programmatic changes made in year two that were based on program evaluation recommendations from year one. To complete this objective, a review of the year one follow-up report was conducted. In addition, qualitative data was gathered from stakeholders, and data across the entire report was analyzed to determine whether program refinements had been made and whether they were successful.

Finding: In 2004–2005 all recommendations made from year one were addressed, and intended outcomes were achieved (see Table 1.3).


TABLE 1.3. Status of Prior Recommendations Made for the Summer Camp Follow-Up Sessions

2003–2004 recommendation: Increase interest in and awareness of follow-up sessions during summer camp in 2004.
2004–2005 change: An effort was made by prior campers and staff to increase awareness of follow-up sessions.
Result: There was a 50 percent increase in the total number of campers attending follow-up sessions.
Status: Achieved

2003–2004 recommendation: Decrease the number of sessions, and increase their length to include trips.
2004–2005 change: The number of total sessions was shortened from six to five.
Result: Campers attended a full-day trip to Boston.
Status: Achieved

2003–2004 recommendation: Provide field trip opportunities.
2004–2005 change: Five Rivers—snowshoeing; Albany—Underground Railroad tour; Boston.
Result: Campers realized that learning can take place outside of a classroom environment.
Status: Achieved

2003–2004 recommendation: Provide an incentive for completing follow-up activities.
2004–2005 change: The culminating activity was a trip to Boston's aquarium, IMAX, and planetarium.
Result: A total of thirty campers attended the culminating activity.
Status: Achieved

SUMMARY

Program evaluation is the process associated with collecting data to determine the worth or value of a program. To do this, evaluators use a wide variety of instruments or tools to collect data, such as standardized measures, surveys, interview protocols, observation protocols, and archival data. Data is collected at different times during the process to address specific program evaluation needs. Data that is collected while the program and its activities are unfolding is considered formative. Data collected at the end of the process or annually to report how the program did in a given time frame is considered summative. Most evaluators use both formative and summative data to successfully evaluate a program. Many times evaluators collect data from groups of people called stakeholders. Stakeholders are those who participate directly in or are affected in some way by the program itself. Evaluators regularly write evaluation reports and present these reports to the agencies or groups who funded the program.

KEY CONCEPTS

Archival data
Benchmarks
Client
Evaluation objective
External evaluators
Focus group
Formative evaluation
Internal evaluators
Interview protocol
Probes
Program
Summative evaluation
Triangulation

DISCUSSION QUESTIONS

1. What is the difference between internal and external evaluators? Taking the summer camp program described in this chapter, what might be some benefits and challenges of being an external versus an internal evaluator in this situation?

2. If you were evaluating the summer camp program described in this chapter, what would be the benefits and challenges of using surveys, interview protocols, and archival data?


CLASS ACTIVITIES

1. Review the vignette at the beginning of this chapter. Pretend you are the evaluator for this project. Develop formative and summative surveys and interview protocols to collect data. Remember, the purpose of formative data collection is to improve the program as the program is taking place. Summative data is used to develop a summary of how the program did in meeting its intended goals, objectives, and benchmarks.

SUGGESTED READING

Driskell, T., Blickensderfer, E. L., & Salas, E. (2013). Is three a crowd? Examining rapport in investigative interviews. Group Dynamics: Theory, Research, and Practice, 17, 1–13. doi: 10.1037/a0029686
Kvale, S., & Brinkman, S. (2008). InterViews: Learning the craft of qualitative research interviewing. Thousand Oaks, CA: Sage.
LaVelle, J. M. (2011). Planning for evaluation's future: Undergraduate students' interest in program evaluation. American Journal of Evaluation, 32, 362–375.
Mathison, S. (1999). Rights, responsibilities, and duties: A comparison of ethics for internal and external evaluators. New Directions for Evaluation, 1999(82), 25–34.
Rea, L. M., & Parker, R. A. (2005). Designing and conducting survey research: A comprehensive guide. San Francisco, CA: Jossey-Bass.
Torres, R. T., Preskill, H. S., & Piontek, M. E. (1997). Communicating and reporting: Practice and concerns of internal and external evaluators. Evaluation Practice, 18, 105–125.


CHAPTER 2

ETHICS IN PROGRAM EVALUATION AND AN OVERVIEW OF EVALUATION APPROACHES

LEARNING OBJECTIVES

After reading this chapter you should be able to

1. Understand ethical dilemmas faced by evaluators
2. Understand the Joint Committee on Standards for Educational Evaluation's standards and how evaluators may use them in the profession
3. Understand the key similarities and differences among the various evaluation approaches
4. Understand the key benefits and challenges of the different evaluation approaches

ETHICS IN PROGRAM EVALUATION

When conducting an evaluation, a program evaluator may face not only methodological challenges (for example, what data collection instrument to use) but ethical challenges as well. Ethics in program evaluation refers to ensuring that the actions of the program evaluator are in no way causing harm or potential harm to program participants, vested stakeholders, or the greater community.

In some cases, evaluators may find themselves in an ethical dilemma because of the report they have created. For example, an evaluator might be tempted to suppress negative findings from a program evaluation for fear of angering the client and losing the evaluation contract. In other cases, evaluators may find themselves in a dilemma not because of their report, per se, but because of how others use it. For example, how should an evaluator move forward if he or she knows that a report supports one stakeholder group over another and will no doubt spark conflict? A school superintendent who finds an after-school program too expensive, for instance, might use the evaluation report to support canceling the program even though parents and students find the program beneficial.

Evaluators faced with a multitude of ethical challenges each day turn to the Joint Committee on Standards for Educational Evaluation for guidance (Newman & Brown, 1996). Established in 1975, the Joint Committee was created to develop a set of standards to ensure the highest quality of program evaluation in educational settings. The Joint Committee is made up of several contributing organizations, one of which is the American Evaluation Association (AEA). Although the AEA, which sends delegates to Joint Committee meetings, has not officially adopted the standards, the organization does recognize the standards and support the work of the committee. The standards are broken down into five main areas: utility, feasibility, propriety, accuracy, and evaluation accountability.

Utility standards. The purpose of these standards is to increase the likelihood that stakeholders will find both the process and the product associated with the evaluation to be valuable. These standards include, for example, making sure the evaluation focuses on the needs of all stakeholders involved in the program, making sure the evaluation addresses the different values and perspectives of all stakeholders, and making sure that the evaluation is not misused.

Feasibility standards. The purpose of these standards is to ensure that the evaluation is conducted using appropriate project management techniques and uses resources appropriately.

Propriety standards. These standards are designed to support what is fair, legal, and right in program evaluation. These standards include, for example, ensuring that the human rights and safety of program participants are upheld and maintained throughout the evaluation process; that reports provide a comprehensive evaluation that includes a summary of goals, data collection methods, findings, and recommendations; and that evaluations are conducted for the good of the stakeholders and the community.

Accuracy standards. The purpose of these standards is to ensure that evaluations are dependable and truthful in their data collection and findings. These standards include making sure that the evaluation report is both reliable and valid, and that data collection tools and methodologies were sound and rigorous in nature.

Evaluation accountability standards. These standards call for both the rigorous documentation of evaluations and the use of internal and external meta-evaluations to improve the ongoing processes and products associated with evaluation.

A complete list of the standards can be found at www.eval.org/evaluationdocuments/progeval.html.

WHAT IS AN EVALUATION APPROACH?

As noted in Chapter One, program evaluation is the process of systematically collecting data to determine if a set of objectives has been met. This process is done to determine a program's worth or merit (see Figure 2.1). The evaluation approach is the process by which the evaluator goes about collecting data. Two evaluators working to evaluate the same program not only may use different methods for collecting data but also may have very different perspectives on the overall purpose or role of the evaluation. Although many beginning evaluators may believe that simply changing the type of data being collected (for example, from quantitative to qualitative) is changing the approach, in reality an evaluation approach is based on more than simply data collection techniques.

evaluation approach: The model that an evaluator uses to undertake an evaluation

FIGURE 2.1. Determining a Program's Worth or Merit (a program is judged against evaluation criteria to determine whether it met them)


objectives-based approach: An evaluation model whereby the evaluator focuses on a series of preestablished objectives, and then collects only the necessary data to measure whether the program met those objectives

FIGURE 2.2. Overview of Evaluation Approaches (criteria sources by approach: objectives-based—does the program meet stated objectives?; Tylerian; consumer-based—criteria used for rating; expertise-oriented—criteria internalized by the judge; participatory—stakeholders select criteria; decision-based—questions serving as criteria; goal-free—criteria emerge)

Changing an approach to program evaluation entails a shift not only in philosophy but also in the “reason for being” or purpose of the evaluation. In this chapter you will read about some of the main approaches used in program evaluation today (see Figure 2.2 for an overview). When considering these approaches, think about the criteria used to evaluate a program and who will ultimately judge the program using the criteria.

OBJECTIVES-BASED APPROACH

Just as there are many applied research approaches, there are several different approaches to program evaluation.


BOX 2.1. Example of an Evaluation Objective and Benchmark

Evaluation objective: To document middle school students' changes in academic achievement, particularly in the area of reading and literacy skills.

Benchmark: Students in fifth through eighth grade will show a 10 percent gain on the English language arts (ELA) state assessment in year one, and there will be a 20 percent increase in students passing the ELA in program years two and three.
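Because a benchmark states a quantitative target, checking it once the assessment data arrives is usually a small calculation. The sketch below illustrates the year-one target from Box 2.1 (a 10 percent gain on the ELA assessment); the scores are invented, and the simple comparison of averages stands in for whatever analysis the actual state assessment would require.

```python
# Hypothetical sketch: checking the year-one benchmark from Box 2.1
# (a 10 percent gain on the ELA state assessment). Scores are invented.
from statistics import mean

baseline_scores = [612, 598, 640, 605, 587, 630]  # prior-year ELA scale scores
year_one_scores = [668, 652, 701, 660, 645, 690]  # year-one ELA scale scores

percent_gain = (mean(year_one_scores) - mean(baseline_scores)) / mean(baseline_scores) * 100
required_gain = 10.0  # percent gain stated in the benchmark

print(f"Average gain: {percent_gain:.1f}% (benchmark: {required_gain:.0f}%)")
print("Benchmark met" if percent_gain >= required_gain else "Benchmark not met")
```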

The most common approach program evaluators can use is the objectives-based approach, which involves objectives written by both the creators of the program and the evaluator. An evaluation objective is a written statement that depicts an overarching purpose of the evaluation and clearly states the types of information that will be collected. Often these objectives are further supported through the use of benchmarks. A benchmark is more detailed than an objective in that it specifically states what quantitative goals the participants in the program need to reach for the program to be successful. Box 2.1 presents an evaluation objective followed by a benchmark.

Evaluators will often start with the objectives for the evaluation and build evaluation data collection activities from those objectives. Evaluation objectives may guide either formative or summative data collection. Either way, quantitative or qualitative data, or both, is collected, and findings are compared to the project's objectives. Objectives are certainly helpful in shaping the evaluation, but there is a risk that evaluators may become so focused on the objectives that they lose sight of other unanticipated outcomes or benefits to participants as a result of the program.

Although objectives assist in guiding an evaluation, there is another method—the goal-free approach—that doesn't prescribe using evaluation objectives. This approach is guided by the perspective that there are many findings and outcomes that do not fall within the strict confines of the goals and objectives established by both the project developers and the evaluator.

benchmarks: Specific outcomes that define the success or worth of a program

goal-free approach: An evaluation model designed to control for bias, whereby the evaluator purposely does not learn the goals and objectives of the program being evaluated but tries to determine these through careful data collection


Those who practice goal-free evaluation believe that the unforeseen outcomes may be more important than outcomes that the program developers emphasize. One difficulty in conducting a goal-free evaluation is that projects that receive funding are required to show specific outcomes based on objectives. If the outcomes are not included in the evaluation, the appropriate data to present to funding bodies may not end up being collected.

Tylerian approach: An early approach developed by Ralph Tyler, focused on judging the merit or worth of a program based on a very specific set of benchmarks

Early Objectives-Based Approach

Evidence of program evaluation in the United States dates back to the early 1800s, but the Tylerian approach, named after its creator Ralph Tyler, was the first to focus on the use of behavioral objectives as a method of determining or judging the worth of a program. Beginning in 1932, Tyler, now considered the father of education evaluation, conducted an eight-year evaluation study examining the effects on academic achievement of progressive high schools versus traditional high schools (Tyler, 1949). In what is considered to be the first education evaluation of its kind, Tyler used behavioral objectives as the criteria to determine a program's worth. For example, a behavioral objective for the classroom (referred to as a learning objective today) might have been stated something like: "Students will be able to define at least twenty out of the twenty-five vocabulary words."

In Tyler's study, students in the progressive high school were compared to students in the traditional high school based on the behavioral objectives Tyler established. The idea was that the style of learning in the classrooms would result in the objectives' being met or not. It would be the evaluator's role to establish the objectives (criteria), collect the necessary data from the treatment and comparison classrooms, analyze the data, and determine if one high school model met a greater number of the objectives than the other. If so, this model would be deemed to have been more successful. The Tylerian approach to program evaluation was considered the hallmark of evaluation methodology and remained the primary approach used to judge the worth of programs until the 1960s.

Elements of the Tylerian approach can be found in evaluations conducted today. The objectives-based approach that grew out of Tyler's work is evident in many of the evaluations conducted for programs funded by the state and federal governments. Benchmarks, described earlier, have clear roots in the Tylerian approach and are often used in the evaluation of programs today, along with program goals and evaluation objectives.


One of the benefits of an objectives-based approach to program evaluation is that it allows funders and reviewers of grants and programs to carefully examine a program's proposal during the request for proposal process and to determine if the scope of the evaluation truly reflects the goals, objectives, and activities. Objectives clearly define what the purpose of the program is, how the evaluator will evaluate aspects of the program to determine if the objectives have been met, and exactly what criteria will need to be present for this program to be deemed a success. Having these objectives laid out before a program is funded, implemented, or both can provide funders and program developers with a framework for the evaluation.

The objectives-based approach is not without challenges, however. One criticism of this approach is that it does not allow unforeseen outcomes to be accounted for or to play any role in the evaluation. Findings from the evaluation that may be as important as those aligned with the objectives might not be examined closely or at all because the evaluator is so focused on the evaluation objectives. Similarly, critics argue that the objectives-based approach biases evaluators toward looking at only certain aspects of a program, as dictated by the objectives, leaving other aspects of the program virtually unexplored. Objectives and benchmarks can also tend to pertain more to summative evaluation than to formative evaluation. An evaluator may collect data to show that the program met its objectives and benchmarks but be unable to explain how the program functions, not having observed the program in action because the evaluation objectives did not require him or her to do so. Although the objectives-based approach continues to be the most widely used approach in evaluation today, evaluators must make sure that the evaluation objectives are broad enough to focus on both formative and summative aspects of the program.

DECISION-BASED APPROACH

Unlike the objectives-based approach, the decision-based approach is guided not by objectives serving as the criteria to judge a program, but rather by questions.


These questions are often asked of program developers or project directors and serve as the main guiding light for the overall evaluation, as well as for the evaluation efforts and activities that pertain to data collection. In a true decision-based approach it is the program directors who are guiding the evaluator by asking questions that the evaluator must then go out and collect data to answer. In some decision-based evaluations, directors ask additional questions based on the latest evaluation findings, and this pattern continues until all the directors' questions have been answered to their satisfaction.

CIPP model An approach developed by David Stufflebeam in the 1960s, used extensively by the federal government and elsewhere for many decades

CIPP Model for Program Evaluation

Created by David Stufflebeam in the 1960s, the CIPP model was originally developed as an evaluation approach to be used in educational settings. A widely used decision-based approach, Stufflebeam's model uses both formative and summative evaluation data through a prescribed framework. This framework is referred to as CIPP. The acronym CIPP stands for the four steps or phases that guide the evaluation process: context, input, process, and product. As you read more about the CIPP model, you will see how the approach is governed by the program directors' questions.

Context Evaluation

Context is the first component of the CIPP model. In this section the evaluator focuses on studying the context or situation in which the program will take place. This is an important step because it serves as a foundation for the rest of the evaluation. As you know, in program evaluation we use a program, a series or group of activities, to address or "fix" a problem. Context in the CIPP model is understood as the nature and scope of the problem in relation to the setting. Different settings may have similar (or what appear to be similar) issues, but individuals within these unique settings may go about addressing the issues quite differently. For example, if a school has a large number of students being sent to the office, it might identify this as a problem and propose implementing a schoolwide behavioral program, the belief being that such a program would (if implemented) correct the problem. Understanding this problem from multiple stakeholder perspectives to determine the best set of program activities is what is referred to as establishing the context. Questions from program directors that would guide the program during the context phase might include


■ What do teachers and staff think we need to address with this program?

■ What do teachers, staff, and the greater school community believe are the underlying elements of students' behavioral issues during the school day?

■ What is not working in our building's current behavioral program for students?

To better understand the context in which a program will take place, many program developers and evaluators conduct what is referred to as a needs assessment. In fact, questions like the ones just presented might offer guidance in conducting the needs assessment for the schoolwide behavioral program just mentioned. As part of this process, teachers, staff members, students, parents, and community members might be asked to provide their insights and perspectives concerning what they think are the main reasons behind the high number of office referrals. In many cases, different stakeholders (because of their varying proximity to the issue and diverse lived experiences) may have different perspectives on the issue and how it should be resolved. In addition to determining how the issue might be addressed, the evaluator might also gather information and perspectives from stakeholders concerning the goals and objectives of the program and how they relate to their perception of the overall issue.

To gather needs assessment information, a program developer or evaluator could use a variety of data collection tools, which might consist, for example, of a needs assessment survey, focus groups with stakeholders, one-to-one interviews, in-depth analysis of archival data and records, or a combination of these. All of this information would be examined by the program developer or evaluator to determine whether there was a consensus or shared theme among the stakeholders pertaining to what the problem is and how it should be fixed. This information would then be "rolled" into developing program goals and objectives, and designing program activities that (in theory) would meet those goals and objectives.
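As a concrete illustration of how needs assessment responses might be examined for a shared theme, here is a minimal sketch. The stakeholder groups, coded response categories, and counts are invented for demonstration; a real needs assessment would involve richer qualitative coding of open-ended responses.

```python
from collections import Counter

# Hypothetical coded survey responses: each stakeholder names the issue
# he or she sees as the main reason behind the high number of office referrals.
responses = {
    "teachers": ["unclear expectations", "inconsistent consequences", "unclear expectations"],
    "parents":  ["inconsistent consequences", "unclear expectations"],
    "students": ["boring lessons", "inconsistent consequences"],
    "staff":    ["unclear expectations", "inconsistent consequences"],
}

# Tally issues across all stakeholder groups to look for a shared theme.
overall = Counter(issue for group in responses.values() for issue in group)
print("Most frequently cited issues:", overall.most_common(2))

# Also note which groups cited each issue, since consensus across groups matters.
cited_by = {issue: [g for g, issues in responses.items() if issue in issues] for issue in overall}
print(cited_by)
```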


Input Evaluation

The second component of the CIPP model is input. Input affords the evaluator the opportunity to examine the relationship between the amount of resources available (for example, money, staff, equipment) and the program's proposed activities. The question that has to be answered at this juncture is, Will the current budget and funding support the proposed activities? If the answer is no, then the activities need to be refined or inputs need to be increased. The next questions to ask are, If activities are refined, will they be enough to sufficiently and logically address the problem? Are staff members adequately trained to implement the program correctly? Does the school building have enough space to offer the program during the day?
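The input question of whether available resources can support the proposed activities often comes down to simple arithmetic. The sketch below is purely illustrative; the activities, cost estimates, and budget figure are invented.

```python
# Hypothetical proposed activities and their estimated costs (in dollars),
# compared against the funding actually available for year one.
available_budget = 40_000

estimated_costs = {
    "staff training workshops": 18_000,
    "behavioral program materials": 9_500,
    "part-time program coordinator": 15_000,
}

total = sum(estimated_costs.values())
shortfall = total - available_budget

print(f"Estimated cost of proposed activities: ${total:,}")
if shortfall > 0:
    # Formative finding: activities must be refined or inputs increased.
    print(f"Budget falls short by ${shortfall:,}; refine activities or increase inputs.")
else:
    print("Current budget supports the proposed activities as planned.")
```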

Process Evaluation

Process evaluation is the third component of the CIPP model. In this component the question, Are we doing the program as planned? becomes the foundation for the formative evaluation that needs to take place. Continuous program monitoring occurs under process evaluation. Through the use of both quantitative and qualitative data collection, the evaluator wants to ensure that the program is following the steps and procedures that it "claimed" it would at the outset. As part of this process the evaluator would probably collect data on other aspects of program implementation. These would include, for example, staff perceptions of the program and how it is going so far, budget issues, and emerging strengths and weaknesses. All of this formative data would be fed back to program developers and the program directors so that ongoing programmatic changes could happen in a timely manner.

Product Evaluation

Product evaluation is the fourth and final component of the CIPP model. Product evaluation focuses on final outcomes of the program, and on determining whether the program met its stated goals and objectives. This component involves primarily summative evaluation and answers the question, Was the program successful or not?

PARTICIPATORY APPROACH

When we think about program evaluation, we naturally think of the evaluator's being in charge.


The evaluator, an expert in research methods, data collection, and data analysis, would naturally be the one to develop surveys and other supporting evaluation materials. The evaluator would collect quantitative and qualitative data and carefully analyze it, rendering findings for the final evaluation report to determine if the program met its goals and objectives. But what if the goals and objectives that were developed for the program are not the same ones deemed important by those being served by the program? What if those being served by the program have unique and very different perspectives when it comes to judging whether the program was successful? How would the external evaluator begin to understand these different perspectives held by the program's stakeholders?

An evaluator taking a participatory approach attempts to provide a clearer, deeper understanding of the program and its worth by tapping into the perspectives and lived experiences of the stakeholders. The evaluator does this by removing the focus of the evaluation from himself or herself, relinquishing control of the evaluation to program participants—those most directly affected by the program's activities and services.

Let's take a summer program for urban youth held on a college campus as an example. When the program was designed, the program directors from the higher education institution believed that ultimately the purpose of the program was to create lifelong learners. They also believed that by increasing the students' love of learning, they would increase students' interest in going to college. Keep in mind that these program goals were developed by the program directors. The outside evaluator in this situation would gather data to document whether the program achieved these goals. The evaluator might decide to gather report card grades and student behavior record data for the students attending the summer program. The evaluator might even collect survey data from the students' teachers outside of the summer program to document teacher perceptions concerning whether students are more motivated in class and whether they see a real difference in students' desire to learn as a result of participating in the summer program. In any event, if you were to ask the students what they liked and didn't like about the summer program, and what they thought the purpose of the program was, you might get very different answers from those the external evaluator would receive.


As you can imagine, your evaluation report would probably have a very different focus or perspective than that of the external evaluator. First of all, the students might see the purpose of the program very differently than those in higher education who originally developed the program. To them the program's purpose might seem to be nothing other than to help them "make new friends" or "get to know kids from school better." In other words, students might have no idea that the primary goal of the program was to make them lifelong learners.

Students are not the only ones who would have a different take on the summer program; parents would also have their own understanding of the program's purpose. For example, parents might see the program not as a way to transform their children into lifelong learners, but as more of an alternative to finding a babysitter. With this perspective, parents might consider the program to have had worth or merit if it took place five days a week for eight hours a day. Again, this perspective might not at all reflect the goals and objectives initially established by the program directors from higher education.

From these few examples alone you can probably tell that who you are and what proximity you have to the program can dramatically affect your perceptions of the program's purpose and overall worth. You can also tell that as an evaluator, understanding all of these unique perspectives is critical to establishing the validity of the data that is being collected as well as the meaning and validity of the findings in the final evaluation report.

To access this level of information, an evaluator using the participatory approach would work with students, parents, and any other related stakeholders to train these groups in how to conduct an evaluation. The evaluator would play the role of technical consultant, advising these stakeholder groups about the best methods for collecting data and how to develop surveys and other evaluation tools. The evaluator would also show stakeholders how to analyze quantitative and qualitative data, as well as how to author an evaluation report and present findings to a wider audience. By teaching evaluation skills directly to the stakeholders and having stakeholders drive the evaluation efforts, findings from the evaluation would have much more meaning to the stakeholders than if the work had been done solely by an evaluator from the outside who was hired by the program directors and administration.


With this in mind, the central idea behind participatory evaluation is to empower these stakeholder groups, to develop stronger communication between those being served by the program and those who are in charge of implementing the program, to bring about more meaningful programmatic changes, to improve the program, and to celebrate the wide variety of outcomes associated with the program.

Strengths and Challenges of the Participatory Approach

From the description just presented it is easy to see the strengths of the participatory approach. For most evaluators it is a constant struggle to determine the validity of data and findings and what the evaluation report "really" means to the different groups of people the program serves. Another important component of program evaluation is the use of the evaluation report. If an evaluation report is not used by stakeholders (either the project directors in charge or the stakeholders who are participating in the program's activities), the report has had little, if any, impact on improving the program. Recognizing this, some experts in program evaluation would ask the question, "Why conduct an evaluation at all if it makes no difference?" Participatory evaluation works to increase the likelihood that stakeholders will use evaluation results because they can easily see their direct connection to the evaluation, having played an integral role in the creation of the evaluation report.

In a traditional evaluation approach, program participants—for example, parents—would often sit on the sidelines, contributing to the evaluation by making suggestions or participating in parent focus groups led by the external evaluator. In a participatory evaluation, however, parents are expected to take on a much more active role, developing their own surveys to gather information about the program from other parents and hosting their own parent focus groups. This involvement in the evaluation empowers parents, giving them a stronger connection to the program. They feel that they have a voice and that what they think and say matters to the evaluation and may contribute to improving the program.


The participatory approach might sound like the ultimate approach to conducting an evaluation, but it does come with some challenges. Although it is important for the evaluation to reflect the "voice" and perspectives of the stakeholders served by the program, their perceptions of the program's purpose might not necessarily coincide with the purpose embraced by the program's funding source. Take the preceding example of the summer enrichment program at the college. Students in the program believed that making new friends was one of the program's main purposes. In a true participatory approach, students would examine the program and probably make suggestions for major changes based on their own criteria. For example, students might indicate that they want less time during the day devoted to learning and more free time so they can spend it making new friends. This recommendation, however, would not be considered viable by the program directors, who created the program to expose students to higher education and learning. It would also run counter to the desires of the program's funding source, a foundation that wants to see clear quantitative outcomes related to achievement on annual state assessments and school report card grades as indicators of the program's success.

You can imagine the challenges that would occur if a participatory evaluation were the main source of data collection for the program. The evaluation report would dramatically miss the mark according to the funder's criteria for success. Recognizing this challenge, program evaluators who support a participatory approach often incorporate it into the formative portion of the evaluation. These evaluators let stakeholders, such as parents or students, develop their own instruments and collect data about the program's day-to-day activities, the quality of these activities, and what is needed to improve them or the services being offered. This data is presented by those stakeholder groups to the evaluator and program developers and is used to improve the program as it is taking place. Summative data, such as outcome data from standardized assessments, report card grades, or district measures, are collected and reported out by the evaluator, who has expertise in working with these types of data.

In addition to the focus of the evaluation's being different, participatory evaluation may yield some data collection techniques that are not as rigorous as those used in traditional program evaluation. Students and parents, for example, not having expertise in evaluation, may develop less-than-valid surveys, protocols, or other evaluation tools.



CONSUMER-ORIENTED APPROACH

Although the consumer-oriented approach is used extensively outside of education, the evaluation of educational products, such as classroom teaching materials, textbooks, and online tutorials, continues to dominate this area of evaluation. In a consumer-oriented evaluation, the consumer, and the needs of the consumer, are the main focus in judging the worth of a program or product. The consumer-oriented approach typically is structured around criteria and yields evaluations similar to those found in consumer reports. Michael Scriven (1967), one of the early founders of the consumer-oriented approach, believed that it is the evaluator's role to develop or select the criteria that will be used to judge the program or product. Scriven also believed that the purpose of this approach was to present the evaluation findings and to let the current as well as potential consumers make the final decision as to whether or not to use the program or product. The criteria used may vary greatly depending on what is being evaluated, but in general they focus on cost, durability, and performance.

EXPERTISE-ORIENTED APPROACH

The expertise-oriented approach—one of the oldest and most frequently used methods of program evaluation—expects the evaluator to be a content expert and to serve more as judge than as evaluator (Fitzpatrick, Sanders, & Worthen, 2004). In some forms of expertise-oriented evaluation the criteria used to judge the program, service, or product are completely internalized by the evaluator. This means that the evaluator does not use formal criteria on paper, but rather draws on a lifetime of professional experiences to determine merit or worth. Although this approach may seem a little unconventional, it is used all the time by judges on many television talent shows. Rarely do you see these judges who are famous singers and entertainment moguls filling out a checklist or rubric when scoring a contestant. They listen to and observe the contestant, and then rate him or her according to the criteria that they have internalized through their professional experiences.

expertise-oriented approach An evaluation model whereby the evaluator is a recognized expert in the field (for example, a judge at a county fair) and uses that expertise to judge the worth of a program


Another form of the expertise-oriented approach is used by agencies granting accreditation to institutions, programs, or services when they send program evaluators to these sites to conduct an expertise-oriented evaluation. In these situations, data is not typically collected by the evaluators but is presented to them by those participants being judged or seeking accreditation. With this approach, the evaluators judge the program or service based on an established set of criteria as well as their own expertise in the area. An example of an organization that conducts this type of evaluation is the National Council for Accreditation of Teacher Education (NCATE). Colleges and universities that train teachers often seek national accreditation from NCATE to demonstrate the quality of their programs.

ECLECTIC APPROACH

In reality, many evaluators today practice a more eclectic approach to program evaluation. This means that they take a little from each of the approaches, and where appropriate integrate these various aspects into their evaluation designs, philosophies, and methodologies. These evaluators use a wide variety of data collection methods and try to develop an evaluation plan that accounts for multiple user and stakeholder perspectives.

SUMMARY

There are many different evaluation approaches available to the evaluator in the twenty-first century. These evaluation approaches are each unique in their purpose and in the overall philosophies and data collection methodologies that serve as their framework. The objectives-based approach provides clearly delineated objectives and benchmarks that allow the evaluator to determine whether the program has met its intended goals. In contrast, in a goal-free approach, an evaluator does not want to know the purpose, goals, or objectives of the program. An evaluator taking this approach will visit the program and see if he or she can determine program goals based on what he or she observes. The CIPP model is one example of the decision-based approach.


Instead of objectives to guide the evaluator, questions asked by program directors dictate the kinds of data and analysis the evaluator will use. With the participatory approach, the evaluation objectives and methodology are developed by those whom the program is meant to serve. As part of this approach, program participants collect their own data, write their own evaluation report, and present findings to a wider audience. With the expertise-oriented approach, programs and products are "judged" by experts in the field. In some cases, criteria used by the judges are not clearly defined but rather internalized by the judges based on their expertise.

KEY CONCEPTS

Benchmarks
CIPP model
Evaluation approach
Expertise-oriented approach
Goal-free approach
Objectives-based approach
Tylerian approach

DISCUSSION QUESTIONS

1. Reexamine the vignette in Chapter One. What evaluation approach do you think the evaluator was using? Be prepared to discuss whether another approach would have been more effective.

2. Be prepared to discuss how one could apply an internal or external evaluation model to the participatory approach.

3. When you become an evaluator, what approach or approaches do you think you will take? Be prepared to explain why.

4. Reexamine the Joint Committee's standards. What ethical dilemmas do you think evaluators face on a regular basis, and how could they use the standards to guide them through these delicate situations?



CLASS ACTIVITIES

1. Reexamine the vignette in Chapter One. Design some evaluation tools that you think would be helpful to the evaluator of this project.

2. Using the vignette in Chapter One, develop and present an evaluation matrix and plan that you would propose for year two if you were the evaluator in this situation.

SUGGESTED READING

Newman, D. L., & Brown, R. D. (1996). Applied ethics in program evaluation. Thousand Oaks, CA: Sage.

Stufflebeam, D. (1999). Foundational models for 21st century program evaluation. Kalamazoo, MI: The Evaluation Center, Western Michigan University. Retrieved from https://www.globalhivmeinfo.org/Capacity Building/Occasional%20Papers/16%20Foundational%20Models%20for%2021st%20Century%20Program%20Evaluation.pdf


CHAPTER 3

IN-DEPTH LOOK AT THE OBJECTIVES-BASED APPROACH TO EVALUATION

LEARNING OBJECTIVES

After reading this chapter you should be able to

1. Understand the possible depth and breadth of objectives that one can use for an evaluation

2. Understand the differences between evaluation objectives designed to meet formative needs and those designed to meet summative needs

3. Understand the methods evaluators use to document different evaluation objectives

OBJECTIVES-BASED APPROACH

Many evaluators today practice an eclectic approach to program evaluation and do not focus on a specific approach (for example, the goal-free approach or CIPP model). They take bits and pieces of the evaluation approaches discussed in Chapter Two based on the purpose of the program and the context in which the program is taking place. These evaluators also recognize the importance of collecting both formative and summative data in conducting a comprehensive evaluation.


Although most evaluators working in today’s evaluation world will find themselves taking an objectives-based approach, it is still important to make sure that the objectives are aligned with the many aspects of a program and not just the program’s desired end outcomes or results. Evaluators may find that clients are initially only focused on the outcomes or results of the program; however, it is important for the modern-day evaluator to include a variety of objectives in the initial evaluation plan. The purpose of the program will have a bearing on what evaluation objectives the evaluator may decide to focus on. There are a number of evaluation objectives that you as the evaluator may want to consider incorporating into your evaluation plan.

capacity Refers to how program developers “build” different parts of the program so the program can take place and function (for example, if the program has five trainings, then the materials and everything that accompanies a quality training have to be created and ready to be delivered)

intent Refers to the promise of those in charge of the program to deliver the activities

Capacity and Intent

This first evaluation objective is designed to document whether all the required components of the project promised in the original grant application have been developed. Intent refers to items that are unique to the project and that have to be built or designed before the program can commence, such as learning modules, lessons, workshops, or activities, among other possibilities. Intent also focuses on the numbers detailing the components that have to be implemented annually for the project to commence. These numbers are specified in the original grant application, which might state, for instance, "Ten learning modules will be developed." To evaluate intent, the evaluator can also examine whether or not the required number of personnel or employees (for example, two full-time and four part-time counselors) have indeed been hired and are active in their respective new positions.

Without the project's intent or capacity having been developed, it will be impossible to implement the activities and services correctly when the time comes. If an evaluator finds that the project is not at capacity for implementation, he or she must present this formative data to the project directors so that this gap will be addressed. If it is not addressed, then the program is not being implemented with fidelity and therefore is not likely to meet the proposed benchmarks and outcomes.

The evaluator interested in documenting a program's capacity and intent will have a multitude of data collection possibilities. Data for this objective, both quantitative and qualitative in nature, can come in many different forms.


Data collection methods can include interviews with program directors and staff; a review of purchase orders for equipment, materials, and supplies; site visits and observations; and a review of modules that have been developed, to name a few.
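One simple way to keep track of whether the components promised in a grant application are actually in place is to compare the promised counts with what has been documented to date. The sketch below is illustrative only; the component names and counts are hypothetical.

```python
# Hypothetical counts promised in the grant application versus what the
# evaluator has documented so far through interviews, purchase orders, and site visits.
promised = {"learning modules": 10, "full-time counselors": 2, "part-time counselors": 4}
documented = {"learning modules": 7, "full-time counselors": 2, "part-time counselors": 3}

gaps = {item: promised[item] - documented.get(item, 0)
        for item in promised
        if documented.get(item, 0) < promised[item]}

if gaps:
    # Formative finding to report to the project directors.
    for item, shortfall in gaps.items():
        print(f"Capacity gap: {shortfall} {item} still needed before full implementation.")
else:
    print("All promised components are in place; the project is at capacity.")
```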

Validation

After the evaluator has documented that the program has developed the necessary capacity and has the intent of delivering the activities developed, the evaluator then must switch gears and start to plan to determine whether the activities are actually valid. Evaluators typically do not create their own criteria to validate activities, instead trying to use an already established set of criteria or standards. In education these standards are often learning standards (for example, Does a new computer training series comply with adult learning theory?); however, they could be industry standards, national or state standards, or association standards. Although validation can happen at numerous points during the evaluation process, typically the main validation occurs at this point in time. Formative data collected by the evaluation team would be given to project directors for programmatic modifications to occur, to avoid having participants engage in an activity or learning module that doesn't meet a required set of standards.

For validation, an evaluator would be interested in examining the product, activity, lesson, curriculum, or learning model against an established set of criteria. Such criteria could be in the form of a checklist, a rubric, or an observation rating scale. Where appropriate, the evaluator should try to establish inter-rater reliability, intra-rater reliability, or both. Inter-rater reliability requires observers or scorers to examine the same thing using the checklist, scoring rubric, or observation rating scale and to arrive at the same (or close to the same) scores. Intra-rater reliability requires one individual to score something twice and arrive at close to the same score each time. Reliability shows that the evaluators are consistent in their evaluation of the product, thereby demonstrating that the score arrived at is valid or the judgment being made is trustworthy.

Evaluators can use data from validation as formative feedback for the client. Let's say that as part of the validation process the evaluator is to observe seven learning modules that have been developed.

validation A cross-check to ensure that criteria and standards are being adhered to

inter-rater reliability Consistency established when two observers come to an agreement on a final outcome or scores derived from observing a setting

intra-rater reliability Consistency established when a single observer is able to obtain the same score when rating a project twice at two different points in time


Using a checklist and scoring rubric, the evaluator observes each of the modules and then has another evaluator observe them as well. The two evaluators establish inter-rater reliability and find that the modules are missing some important elements. These elements are delineated in their shared scoring rubric. Areas in which further refinement of the modules is needed will be presented back to the client in the hope that the client will work to fix these areas before participants begin to use the modules. Although validation is presented here as the second step, in reality it could go anywhere in the evaluation process.
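Inter-rater reliability can be summarized numerically in several ways; a common starting point is simple percent agreement between the two raters, sometimes supplemented by a chance-corrected statistic such as Cohen's kappa. The sketch below is a minimal illustration with invented rubric scores for the seven hypothetical modules.

```python
# Hypothetical rubric ratings (1 = missing, 2 = partial, 3 = fully present)
# assigned independently by two evaluators to seven learning modules.
rater_a = [3, 2, 3, 1, 2, 3, 2]
rater_b = [3, 2, 3, 2, 2, 3, 2]

# Simple percent agreement between the two raters.
agreement = sum(a == b for a, b in zip(rater_a, rater_b)) / len(rater_a)
print(f"Percent agreement: {agreement:.0%}")

# Cohen's kappa corrects agreement for chance, given each rater's score distribution.
def cohens_kappa(x, y):
    n = len(x)
    categories = set(x) | set(y)
    observed = sum(a == b for a, b in zip(x, y)) / n
    expected = sum((x.count(c) / n) * (y.count(c) / n) for c in categories)
    return (observed - expected) / (1 - expected)

print(f"Cohen's kappa: {cohens_kappa(rater_a, rater_b):.2f}")
```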

Activities and Fidelity of Delivery

This evaluation objective documents activities that occur as part of the project, such as workshops, trainings, meetings, field trips, and so on. The purpose of this objective is not to focus on outcomes of the activity but simply to document the activity and its purpose, and to describe using narrative inquiry what steps and procedures are inherent in the activity. Another part of the objective is to determine whether the activity was delivered with fidelity. Did the activity go off as planned? Were all the steps and points that were supposed to be addressed in the activity covered? If the presenter had an agenda or PowerPoint presentation, did he or she follow it or go off on some tangent for an hour? In the classroom, fidelity of the activity may be determined based on whether or not the teachers who are delivering the instructional strategy that is under investigation are following the correct sequence of steps.

To document the program's fidelity, the evaluator would probably collect observations of the activities, trainings, or procedures to determine how closely they aligned with the intent of the program. In some cases the evaluator would be guided by an observation protocol or checklist of the different steps that should be followed. If the content or procedures are foreign to the evaluator, he or she may decide to bring in an expert in the field to both develop the observation materials and conduct the observations alongside the evaluator. Data from this objective is formative in nature. Any areas in which the program did not meet fidelity standards should be reported back to the program directors so that these issues can be quickly addressed.
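An observation protocol can be reduced to a simple fidelity score: the share of planned steps actually observed during delivery. The sketch below is hypothetical; the step names and observation results are invented for illustration.

```python
# Hypothetical observation checklist for one classroom lesson: each planned
# step of the instructional strategy is marked True if it was observed.
observed_steps = {
    "poses an open-ended problem": True,
    "students generate their own strategies": True,
    "small-group discussion": False,
    "whole-class share-out": True,
    "teacher connects strategies to the target concept": False,
}

fidelity = sum(observed_steps.values()) / len(observed_steps)
missing = [step for step, seen in observed_steps.items() if not seen]

print(f"Fidelity of delivery: {fidelity:.0%}")
print("Steps to report back to program directors:", missing)
```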


Participant Satisfaction with Activities

In this objective the evaluator tries to document how satisfied participants are with the activities they engaged in. Ideally, this objective should be conducted immediately following each activity; however, it is possible, but not a recommended practice, to gather information from participants several months afterward. Evaluators interested in documenting participant satisfaction can administer exit surveys to participants at the end of each activity. Perhaps you have been to a workshop after which a short survey was handed out. This survey was designed to gauge your satisfaction with the workshop you just attended. In addition to surveys, an evaluator can use focus groups following activities to gather information from a small number of participants. Data from this objective is considered formative in nature, and is very important to the ongoing success of the program. If the evaluator "uncovers" the fact that participants are dissatisfied with the activity, he or she can inform the program directors so that changes to the next activities can be made.

Outputs of Activities

The purpose of this objective is to document outputs or changes that have occurred as a result of participants' engagement in a program's activities. Outputs tend to be not direct changes in one's actions (for example, changes in teaching practices), but rather changes in one's thinking, beliefs, or opinions, particularly in relation to the project. For example, teachers attending a workshop on working with students from at-risk populations realize, following completion of the workshop, that all students can indeed learn. In addition, these teachers now understand that they may have been implementing teaching strategies that were not consistent with these new beliefs.

To document outputs of activities, the evaluator could gather this information from participants at the same time that he or she is collecting information about their satisfaction with the activity. In some cases, however, the evaluator might gather this information on outputs a little while after collecting satisfaction information, once participants have returned to their respective settings and have had time to reflect on the activities and what they learned.


participant satisfaction An overall pleased feeling expressed by the individuals who have taken part in an activity

outputs of activities Changes in perception or understanding that result from participating in an activity
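Exit survey data of this kind is often summarized with simple descriptive statistics before being fed back to program directors. Here is a minimal sketch; the Likert items and responses are hypothetical.

```python
from statistics import mean

# Hypothetical exit survey responses on a 1-5 scale (5 = strongly agree)
# collected from participants immediately after a workshop.
responses = {
    "The workshop objectives were clear": [5, 4, 4, 5, 3, 4],
    "The activities were relevant to my classroom": [3, 2, 4, 3, 3, 2],
    "I plan to change my practice based on today": [4, 4, 5, 3, 4, 4],
}

for item, ratings in responses.items():
    avg = mean(ratings)
    flag = "  <- share with program directors" if avg < 3.5 else ""
    print(f"{item}: mean = {avg:.2f}{flag}")
```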


Intermediate Outcomes

For this objective the evaluator is interested in documenting changes in practice or being among participants who participated in the program's activities. For example, the evaluator could choose to send out surveys with items specifically designed to document changes in participants' practice or being. If appropriate, the evaluator could also conduct site visits to observe participants to determine if they have indeed changed their practice. While on-site, the evaluator could conduct focus groups, one-to-one interviews, or both with participants. If the evaluator finds that participants have not modified their practice based on their participation in the activities, then this would serve as important formative feedback to program directors.

intermediate outcomes Changes in practice, changes following implementation of a new practice or way of doing something, or both

End Outcomes

For this objective the evaluator is interested in gathering data that focuses on outcomes or results of the program. These outcomes are typically those that the program was originally designed to address. Although in many cases data for these outcomes is quantitative in nature, qualitative data may also be used. Unlike intermediate outcomes, end outcomes are usually measured through standardized assessments. The assessments may be something the evaluator administers or may be part of a state or federal assessment system. In the latter case, the evaluator would have to wait to access this outcome data until it became available through these organizations.

end outcomes The final results of the program
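When end-outcome data comes from standardized assessments, the summative analysis often starts with a simple pre/post comparison for participants. The sketch below is illustrative; the scores are invented, and a real evaluation would typically also use a comparison group and an appropriate significance test.

```python
from statistics import mean, stdev

# Hypothetical standardized assessment scores for participating students,
# before and after the program year.
pre  = [612, 598, 640, 575, 630, 605, 588, 620]
post = [628, 610, 652, 590, 641, 618, 601, 635]

gains = [after - before for before, after in zip(pre, post)]
print(f"Mean pre score:  {mean(pre):.1f}")
print(f"Mean post score: {mean(post):.1f}")
print(f"Mean gain: {mean(gains):.1f} (SD {stdev(gains):.1f})")
```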

Presented in Table 3.1 is an overview of the scope and sequence of possible evaluation objectives.

HOW TO USE EVALUATION OBJECTIVES

The evaluation objectives presented in Table 3.1 can be used in many different ways. It is not mandatory for an evaluator to use all of these objectives in a single evaluation; however, an evaluator might certainly use all of the objectives over the course of evaluating a long-term project. Where the project is at the time the evaluator comes on board may very well dictate the objectives used.


TABLE 3.1. Overview of the Scope and Sequence of Evaluation Objectives

Capacity and intent
Are all aspects of the program in place and ready for implementation? Are all employees, staff members, directors, teachers, and recruits hired; in place; and aware of the new program, its goals, its objectives, and so on? Are all materials, books, computers, learning modules, and so on purchased, in place, and operational?
Evaluation objective: To document the project's ability to develop the necessary capacity

Validation
Is the product, module, lesson, activity, or other aspect developmentally appropriate for users, high quality, and user friendly? Does it meet local, state, or national standards, or a combination of these?
Evaluation objective: To conduct criteria-based validation of materials, activities, products, lessons

Activities and fidelity of delivery
Are the activities or trainings being conducted correctly as part of the program?
Evaluation objective: To document the quality and fidelity of delivery of the program's activities

Participant satisfaction with activities
Are those participants who engaged in the activity satisfied? Are there additions to the programming that the participants would like to see?
Evaluation objective: To document participant satisfaction with program activities

Outputs of activities
Did participants learn something new through engaging in the activity? Did participants have a change in attitude about or perception of something? Do participants anticipate changing their current practice or behaviors now that they have this new knowledge, attitudinal change, or both?
Evaluation objective: To document changes in participants' practices, beliefs, and attitudes as a result of their participation in activities

Intermediate outcomes
Did the participants who engaged in the activity change their practices? What are some of the barriers or challenges participants have faced when trying to apply new knowledge, change their practice, or both?
Evaluation objective: To document changes in participants' practice as a result of participating in program activities
Evaluation objective: To document challenges or barriers to participants' changing their practices

End outcomes
Were the outcomes for program participants as expected?
Evaluation objective: To document end outcomes for those participating in the program

Sustainability
What aspects or components of the program can be sustained over time without the input or resources (for example, money) initially used for program start-up?
Evaluation objective: To document components of the program that are sustainable over time

For example, if the evaluator is involved from the very beginning of the process and is present for the early development and start-up stage of the program, then he or she would naturally want to use the objectives that focus on capacity building and validation. If, however, the evaluator is being brought into a project that has been in place for some time, he or she might find it more useful to concentrate on objectives documenting changes in practice and end outcomes.

One of the useful things about this list of objectives is that it is cyclical in nature. For example, the evaluator who comes in at the end of the project and is only able to examine the project's end outcomes can then go back to the beginning objectives (such as validation) and begin to validate those activities that were shown to have produced higher or better end outcomes.

SUMMARY

Regardless of the evaluation approach an evaluator prefers to use, at one point or another he or she will have to use the objectives-based approach. The objectives-based approach is the most widely used approach in program evaluation. Most government agencies and foundations require this approach for evaluations of programs that they fund. With this approach, objectives for a program are preestablished. In theory, the program is developed to address a specific issue or problem, and therefore will ideally meet or address specific intended objectives.

There are a wide variety of objectives that can potentially make up the objectives-based approach. These objectives allow the evaluator to document such aspects as a program's capacity, the quality of activities, the validity of activities or materials, the outcomes associated with changes in participants' practices, and participants' improvements on standardized measures.



KEY CONCEPTS

Capacity
End outcomes
Intent
Intermediate outcomes
Inter-rater reliability
Intra-rater reliability
Outputs of activities
Participant satisfaction
Validation

DISCUSSION QUESTIONS

1. Select one of the objectives presented in Table 3.1 and be prepared to discuss how an evaluator would go about working with clients to collect data for this objective.

2. Consider the validation objective in Table 3.1. What might be a situation in which an evaluator would use the validation objective later on in the sequence of objectives?

CLASS ACTIVITIES

1. Add a third column to Table 3.1. In that column, add what evaluation approach you think most closely aligns itself with each objective to be met. The evaluation approaches can be found in Chapter Two.

SUGGESTED READING

Stufflebeam, D. (1999). Foundational models for 21st century program evaluation. Kalamazoo, MI: The Evaluation Center, Western Michigan University. Retrieved from https://www.globalhivmeinfo.org/Capacity Building/Occasional%20Papers/16%20Foundational%20Models%20for%2021st%20Century%20Program%20Evaluation.pdf


PART 2

CASE STUDIES


CHAPTER 4

IMPROVING STUDENT PERFORMANCE IN MATHEMATICS THROUGH INQUIRY-BASED INSTRUCTION

An Objectives-Based Approach to Evaluation

LEARNING OBJECTIVES

After reading this case study you should be able to

1. Describe some of the challenges of being an external evaluator

2. Describe the key components of—and the evaluator's role in—the request for proposal process

3. Describe the purpose of a needs assessment and how information gathered through this process is sometimes used

4. Understand an evaluation matrix, and be able to develop one for an evaluation project

THE EVALUATOR

While finishing his course work in program evaluation, graduate student Thomas Sanders decided to try his hand at consulting. He had heard of former graduate students setting up evaluation consultancy practices, and he wanted to see for himself if he could do the same. One of the first challenges Thomas had to address was to find clients—people, groups, or agencies that needed to hire him and his evaluation services.

clients People, groups, or agencies that choose to hire an evaluator


external evaluator An evaluator from outside the setting or context

Thomas began by trying to generate a list of potential clients, but he had difficulty coming up with names to contact. His graduate work had provided him with a wide variety of real-world experiences evaluating projects in the local community. In addition, he had always worked on evaluation projects using a team approach with other graduate students, with a faculty adviser from the university setting up the projects and overseeing them. In actuality, Thomas had never before conducted an evaluation on his own, and he had to admit that the very thought of running an evaluation project solo was both exciting and a little intimidating, to say the least.

Unsure where to start, Thomas decided to seek the guidance of a faculty member in school administration. He knew this faculty member had both program evaluation experience and close contacts among many of the area's public school administrators. So he made an appointment with the faculty member, and a week later met him at his office. Thomas explained the kinds of projects he was interested in working on with clients. The faculty member knew that Thomas was a competent student and certainly passionate about evaluation, so he had no problem with giving Thomas a list of names to start contacting. The faculty member strongly encouraged Thomas to pursue school cooperative bureaus. These regional bureaus, located around the state, provided a range of services to the school districts in their respective regions. The faculty member told Thomas that in his experience these bureaus often were looking to hire evaluators as consultants to work on various grant-funded projects.

Taking the faculty member's advice, and eager to get started, Thomas sent a letter of introduction to all sixteen regional bureaus across the state. He believed that such a mass mailing would get at least a couple of bureaus interested in his evaluation services. A week after sending out his mailing, Thomas received a phone call from one of the bureau directors. The director told Thomas that his letter had been perfectly timed. The bureau had recently received a math initiative grant from the state to address low student performance at the middle school level. She explained that the bureau had thirteen districts in the region that were eligible to participate in the project and, as part of the grant requirement, had to hire an external evaluator to evaluate and monitor the project's progress.


As they were going to have a kickoff meeting the following week, she suggested that she e-mail the project narrative to Thomas for him to review to see if he was interested in coming on board with the project. In addition, she asked that, following the review of the project, he draft a proposal for an evaluation plan based on the project narrative, which he could present to the thirteen school district superintendents at the upcoming meeting. Following their conversation, the director e-mailed Thomas, thanking him for his time and attaching a detailed narrative of the project and the state's original request for proposal (RFP).

In the RFP, those seeking funds would clearly describe the need for the project, their plan for evaluation, and the project's budget. Although not all RFPs are the same, all well-structured RFPs have some common elements, listed here with a brief description of each:





■ Need for the project. In this section, applicants (such as school districts, agencies, or nonprofits) are required to provide both a narrative and evidence speaking to why the project is needed. For education-based projects, this is typically done through an analysis of student performance on state exams at either the district or the school level. Here is where the grant writer demonstrates through the use of data that the students are not performing according to certain benchmarks, specific standards, or preestablished criteria.

■ Project narrative. The project narrative (which the director e-mailed to Thomas for review) is the document in which the grant writer, in as much detail as possible, describes the stakeholders for whom the proposed project will be structured and implemented in the intended setting. This narrative generally includes an overview of the types of activities the project would provide, overall goals of the project, and needed project staff and their related responsibilities.

■ Evaluation plan. This section is generally completed by the program evaluator, who will serve as a nonbiased external evaluator for the project if it is funded. In as much detail as possible, the evaluation plan lays out the project goals and objectives and discusses methodologies and timelines for data collection and reporting of findings to the appropriate groups.


project narrative A description of the program and how it will potentially function or work

evaluation plan A blueprint that guides data collection, the timeline, and reporting

request for proposal (RFP) An invitation for an individual or group to submit an application to receive monies to fund a program or initiative

grant writer One who authors a grant

benchmarks Specific outcomes that define the success or worth of a program

stakeholders Groups of people who share a similar interest or benefit from the program under study




■ Budget narrative. This narrative is a summary description of the project's overall budget and the various allocations to specific budget categories.

■ Budget. Whereas the budget narrative provides a broad description of the budget, the actual budget provides much more detailed line items for the project. In some cases, budget categories (for example, equipment) may be limited to a certain amount of money. Such restrictions would be indicated in the RFP.

Box 4.1 contains an overview of the RFP process.


funding agency A group that authors requests for proposals and provides monies to fund potential programs

Schools in Need of Improvement (SINI) Schools that have been identified by their state education department as having students who have continually demonstrated low academic performance

BOX 4.1. The RFP Process

The RFP process (depicted in Figure 4.1) is one with which professional evaluators in all areas of program evaluation, particularly those who serve as external evaluators, become very familiar. This process may vary a little from setting to setting, but in general the main components are similar to those presented in the figure. The RFP process begins with a funding agency (step A in Figure 4.1) or a related group that develops an RFP to fund a certain type of project or a series of projects. Funding agencies come in many shapes and sizes. They include state agencies, such as the state's department of public health or education; federal agencies, such as the National Science Foundation or the U.S. Department of Agriculture; and private foundations and nonprofit organizations.

The purpose of the RFP is to invite eligible applicants to apply for funding for a project. For example, the state may have monies to fund after-school programming and therefore may develop an RFP inviting school districts in that state to apply. However, not all school districts may be eligible. Eligibility will vary depending on the requirements set in the RFP and the funding agency's goals for the initiative. In education, eligible schools are typically those that have consistently shown low student performance on state standardized measures. These measures have been, for the most part, criteria- or standards-based and have predominately focused on English language arts (ELA) and mathematics. These low-performing schools are placed on the Schools in Need of Improvement (SINI) list by their state's department of education.


FIGURE 4.1. The RFP Process

[Figure 4.1 is a diagram of the cycle: (A) the funding agency creates the RFP, (B) eligible applicants submit applications for funding, and (C) awards are given to viable projects.]

Although being on the list doesn't necessarily guarantee that a school's district will receive funding through an RFP, it does provide a wider range of opportunities for districts to obtain additional funding. Despite the fact that an RFP is used to standardize the grant application process, not all projects funded under the RFP will look exactly the same. Because the RFP provides broad criteria, two school districts could both receive funding and go about implementing two programs that have the same intended goals and objectives but look very different.

As shown in step B of Figure 4.1, once the RFP has been created it is posted or disseminated through various channels to those who are eligible to apply. In most cases, particularly when state or federal agencies are funding projects, RFPs are posted on the agency's main Web site. Most (but not all) RFPs have a deadline by which eligible groups must apply. Eligible applicants obtain the RFP and, using it as a guide, create a project that meets the goals set forth by the funding agency. Most RFPs have a section designated for the applicant to describe how the project will be evaluated. This is the point at which those applying for the grant would contract with an external evaluator to write the evaluation section of the proposal.


Award Notification from a funding agency that a proposed program will receive the requested monies

As shown in step C, once the grant proposal has been completed, the eligible groups submit their proposal to the funding agency and wait to see who will receive the awards. Award is the common term to refer to acceptance of a group’s application as a viable project by the funding agency. Funders generally use a points-based scoring system or rubric to consistently score or judge each applicant. The number of awards is typically determined by the total amount of funding available for the project.
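To make the scoring and award step concrete, here is a minimal sketch in Python of how a funder might apply a points-based rubric and fund the highest-scoring proposals until the available money runs out. The applicant names, rubric categories, point values, and dollar amounts are all invented for illustration and do not come from the case.

# Hypothetical illustration of a points-based RFP review; all values are invented.
proposals = [
    {"applicant": "District A", "requested": 250_000,
     "scores": {"need": 18, "project_design": 25, "evaluation_plan": 22, "budget": 15}},
    {"applicant": "District B", "requested": 300_000,
     "scores": {"need": 20, "project_design": 28, "evaluation_plan": 25, "budget": 18}},
    {"applicant": "District C", "requested": 200_000,
     "scores": {"need": 12, "project_design": 17, "evaluation_plan": 14, "budget": 10}},
]

available_funding = 600_000  # total pool the funding agency has for this initiative

# Rank applicants by total rubric points, highest first.
ranked = sorted(proposals, key=lambda p: sum(p["scores"].values()), reverse=True)

awards = []
for proposal in ranked:
    if proposal["requested"] <= available_funding:
        awards.append(proposal["applicant"])
        available_funding -= proposal["requested"]

print("Awards:", awards)  # prints: Awards: ['District B', 'District A']

As the box notes, the number of awards in a sketch like this is driven by the total funding available rather than fixed in advance.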

THE PROGRAM

professional development Training for the purpose of improving one's career

Needs assessment A process to determine where there are significant gaps in knowledge, resources, or programming

The overall purpose of the math program that Thomas was to evaluate was to provide high-quality professional development or training to teachers and administrators. Prior to applying for the grant, the bureau director had conducted a needs assessment of the thirteen eligible districts. Needs assessment is a general term used to describe the collection of data from a setting to determine which issues are the most important and thus need to be addressed first. In some educational settings this process is referred to as "identifying the gap." Multiple methods may be used to collect needs assessment data; however, surveys are probably the most common method used for this purpose. Through conducting her needs assessment, the director discovered the following:

■ Teachers were "unsure" and "not confident" in what they perceived inquiry-based instruction to be.

■ School principals and some district superintendents also reported that they were not completely clear on what constituted a "good" inquiry-based lesson.

■ In addition, some administrators reported that they often used "inquiry-based learning" as a buzzword and that they lacked the confidence to walk in and conduct an observation of a teacher's lesson and to determine whether it was a "quality" inquiry-based lesson or not.

■ Both teachers and administrators also reported that they were often uncertain whether the curriculum materials (such as textbooks and programs) they purchased were inquiry based, and that they often had to rely on the publisher's sales representative rather than being able to review the materials and make the decision on their own.

■ Both teachers and administrators realized that little inquiry-based instruction was occurring in the middle school classrooms, particularly in the area of mathematics.

■ Both teachers and administrators believed that students had performed particularly poorly on problem-based items on the fourth-grade math assessment because they were not being exposed to this type of instruction in math classes. An item analysis of the previous fourth-grade math assessment had confirmed this.

Pleased at the wealth of information she had discovered, the director realized that there was a lot of work to be done if the bureau were ever to help these districts raise their math scores. She believed that professional development would be best suited to address many of the issues. She realized, however, that offering teachers, administrators, and other related staff technical workshops on inquiry-based instruction was not the solution. She knew from the current research that such professional development trainings, even a well-thought-out series of workshops, typically have little impact and create few changes in how people function or teach. Instead, she needed to come up with a unique structure for the program. In her opinion, to be successful, to gain buy-in from all vested parties, and to meet its overall intended outcomes, the program had to have the following components:

■ High-quality professional development from an outside source (such as experts in the fields of education, inquiry-based learning, mathematics, and classroom instruction).

■ An established steering committee, which would meet six times a year to monitor the progress of the program, review evaluation feedback, and make programmatic decisions based on this data.

■ An assigned lead teacher from each of the thirteen middle schools. These lead teachers would serve as liaisons between the steering committee and the middle school teachers being served by the program.

■ Participating middle school teachers from each of the thirteen schools.

■ A math consultant who would rotate across the thirteen schools and provide both group assistance to lead teachers and middle school teachers as well as in-class support to teachers.

THE EVALUATION PLAN

evaluation matrix A template or table used by evaluators as a blueprint for conducting a program evaluation

From his graduate work, Thomas knew that the first step in any successful evaluation is for the evaluator to develop an evaluation plan. As part of that process, Thomas reviewed the project narrative in the grant proposal, then began to draft his evaluation objectives for the project. In examining the project, Thomas noted that the project appeared to have three main functions, or phases:

■ Phase one focused on professional development, providing training to the middle school teachers.

■ Phase two examined whether—and if so, how—those professional development activities had changed teacher practices in the classroom.

■ Phase three examined whether there were any outcomes or results of these new teaching practices as indicated by changes in student outcomes (see Figure 4.2).

Based on this information, Thomas created three main evaluation objectives as well as multiple subobjectives for each main objective. Because he was using preestablished objectives to guide the evaluation, this would be considered an objectives-based approach. Box 4.2 presents Thomas’s evaluation objectives. Next, Thomas used these evaluation objectives to develop an evaluation matrix—a “blueprint.” The matrix works to align the evaluation objectives with the tools, their purpose, and the timeline in which they will be administered. Depending on the depth and breadth of the project and its evaluation, matrixes will vary in their dimensions. Presented in Table 4.1 is the basic template that Thomas used when creating his evaluation matrix.


FIGURE 4.2. Overview of Project Activities (Phase One: Professional Development; Phase Two: Changes in Classrooms; Phase Three: Student Outcomes)

BOX 4.2. Thomas's Evaluation Objectives

1. To document the breadth, depth, and quality of the professional development activities

• Review and document professional development activities and all materials and curricula associated with the trainings to ensure that the trainings are indeed inquiry-based, student-centered approaches.

• Document teacher baseline perceptions of inquiry-based, student-centered instruction prior to teacher participation in the professional development experience.

• Document teacher perceptions of the key issues and challenges in implementing more student-centered instructional practices in the classroom.

• Document teachers' plans for implementing and incorporating inquiry-based instructional practices into the classroom.

2. To document changes in teacher and administrator attitudes, beliefs, and practices in classroom instruction following their participation in the professional development activities

• Document changes in teachers' instructional practices and determine areas in need of further work and areas for continued professional development.

• Document changes in administrator perceptions of inquiry-based instructional practice.

• Document, where possible, any pivotal training efforts that occurred in the classrooms following the initial professional development trainings.

3. To document changes in student performance

• Document changes in teacher and staff perceptions of student performance in mathematics.

• Document changes in student performance in classroom and other related activities as they pertain to mathematics.

• Document changes in student performance on the state's fourth-grade math assessment.

TABLE 4.1. Thomas's Evaluation Matrix Template for the Math Project

Evaluation Objectives | Stakeholder Group | Tool, Data, Instrument | Design | Timeline
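To show how such a blueprint might be put to work, here is a minimal sketch in Python of an evaluation matrix represented as structured data. The stakeholder groups, tools, designs, and timeline entries are illustrative assumptions, not Thomas's actual matrix; only the objective numbering follows Box 4.2.

# Illustrative evaluation matrix rows; entries are assumptions, not the case's actual plan.
evaluation_matrix = [
    {"evaluation_objective": "1: Breadth, depth, and quality of the professional development",
     "stakeholder_group": "Middle school teachers",
     "tool_data_instrument": "Pre-workshop survey; observation notes",
     "design": "Pre-post survey",
     "timeline": "September and June"},
    {"evaluation_objective": "3: Changes in student performance",
     "stakeholder_group": "Students",
     "tool_data_instrument": "State fourth-grade math assessment scores",
     "design": "Year-to-year comparison",
     "timeline": "Annually, after state results are released"},
]

# A quick completeness check an evaluator might run on the blueprint:
# which objectives have at least one planned data collection activity?
covered = sorted({row["evaluation_objective"].split(":")[0] for row in evaluation_matrix})
print("Objectives with planned data collection:", covered)  # ['1', '3']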


Once the evaluation matrix was completed, Thomas e-mailed it to the director. Two days later, he heard back from her. She had reviewed his matrix, and noted that she and her staff were impressed with his proposed evaluation. She also told him that they were interested in hiring him to serve as their external evaluator, and invited him to present his plan at the following week's meeting.

For Thomas, the opportunity to serve as an external evaluator on a project to improve mathematical literacy for middle school students in rural school districts was both exciting and personal. He himself had grown up in a rural community and attended a small district school similar to those he was going to be working with on the math project. Having this personal experience, Thomas knew many of the challenges that these rural communities and schools face, such as high poverty rates and limited access to such facilities as libraries and institutions of higher education.

At the meeting, the director presented her ideas for the project and Thomas presented the evaluation plan. Thomas explained how the plan laid out all the essential activities for the evaluation, including whom the data would be collected from, how it would be collected, when it would be collected, and the status of the overall activity (that is, completed, not completed, postponed, and so on). The evaluation plan was well received by those at the meeting. However, one administrator noted that Thomas had not indicated in the plan when such data would be delivered to the committee. In other words, he wanted to know whether the data and findings were formative and therefore to be presented back to the committee at one of the steering committee meetings, or summative and therefore to be included in the final evaluation report. Thomas thanked the administrator for bringing up the point, and told the committee he would add a column to the plan indicating whether the data was formative or summative. He said he would present the updated evaluation matrix in a couple of months at the next steering committee meeting.

SUMMARY OF EVALUATION ACTIVITIES AND FINDINGS

At the end of August, Thomas, with the help of members from the steering committee, reviewed all documents and materials for the workshops. Although most of these materials met the group's definition of inquiry-based, student-centered instruction, there were a few points that they wanted further emphasized during the workshops.

Approximately two hundred middle school teachers attended the six day-long workshops; substitutes were hired to cover their absences. These workshops were held periodically throughout the school year. During the workshops, Thomas observed the trainings and sat in on the various breakout groups with teachers. He participated in the professional development activities that each group was charged with and conducted semistructured interviews to gather information from workshop participants. Unlike structured interviews, which adhere to a fixed list of questions, semistructured interviews begin with several preplanned questions from which the evaluator may veer off to ask other, unplanned items. Thomas also administered a teacher survey at the beginning of the first workshop in September and again at the end-of-year workshop in June. Overall, he collected 192 matched, pre-post surveys from approximately two hundred teachers.

Formative data (such as presurvey data, observations, and interview data) was reported back to the steering committee in several memorandum reports. This data supported many discussions about inquiry-based instruction during the steering committee meetings; in addition, consultants provided committee members with some short presentations on the subject. Lead teachers were also instrumental in returning to their respective schools and supporting and extending what teachers had been introduced to during the workshops. Lead teachers also met weekly with teams of teachers in their school and trained other teachers about the promising practices wherever possible. Further, the math consultant worked with teachers across the thirteen districts, both in group settings and one-to-one with teachers in their individual classrooms. Finally, administrators filled out post-only surveys at the end of the project.

Overall, the majority of teachers and administrators viewed the project as a success and were, for the most part, pleased with the quality of the professional development in inquiry-based learning and mathematics that they received. The steering committee served as a vital component of the program model, providing a natural loop for formative evaluation data to be presented in a timely manner. Although initially some of the materials for the workshops needed refinement, the professional development was considered to be of high quality and aligned with what teachers and administrators believed was needed to improve instruction and student performance.


Teachers originally reported that they were "not confident" they could teach using inquiry-based instructional practices, and administrators reported that they were "not confident" they could observe others and identify these practices; however, both groups also reported that they believed they were more confident by the end of the project. Teachers also reported that they were trying to implement more inquiry-based instructional practices in their math classes as well as in other content areas. Lead teachers supported this finding and in some cases provided Thomas and the steering committee with lesson plans and classroom activities that teachers had developed based on what they had learned from the workshops.

Administrators also reported seeing a difference in the teachers' instruction that they observed. One administrator created an observation checklist to document inquiry-based instructional practices. He said that he had developed the checklist as a result of what he had learned at the steering committee meetings and from his lead teacher and the consultants. He now used the checklist when observing teachers for tenure. He said that staff at his school had "all pulled together and now shared a common vision—one that supported inquiry-based, student-centered practices." The administrator shared this tool with the other administrators at one steering committee meeting and gave a short presentation about how he used the self-developed tool. This presentation fostered further discussion among the administrators about inquiry, and several of them continued to meet on their own to further pursue this work.

An additional finding, noted in the summative evaluation report, was that in several schools teachers and administrators reported that they had changed their practices for purchasing textbooks, curricula, and curriculum-related materials. No longer did they rely on curriculum and book publishers' claims that materials and curricula were inquiry based. From this experience they now believed they had the ability to review materials and decide for themselves whether they fit their own definition of inquiry-based, student-centered instructional practices.

One area that was noted in the model to be "in need of improvement" had to do with the consultant who visited the schools to work with lead teachers and groups of teachers and do one-to-one, in-class observations of teachers.

Teachers reported that the time allotted for the consultant was not enough to provide all participating districts with proper coverage. Teachers and lead teachers from several districts noted that because of the lack of time, the consultant came to their school only once during the project. Program participants highly recommended increasing the hours for the consultant or adding another consultant to the model, and this recommendation was documented in Thomas's summative evaluation report.

In regard to student outcomes, student scores on the state's fourth-grade math assessment improved significantly that year for all thirteen districts. Although there was still plenty of work to be done, this was certainly an encouraging result for these districts. In fact, districts were so heartened by the initial success of the project that administrators approached the bureau director hoping to apply for another mathematics RFP that was due to come out from the state. The bureau director was very pleased at the administrators' continued interest in improving performance in mathematics and the quality of instruction in their schools. She believed they could credit the program model, with its successful steering committee and a strong evaluation plan that provided committee members with formative feedback and produced strong buy-in from all parties.

The director did not want to stop for even a moment. The momentum and excitement from the districts were evident, and she quickly contacted Thomas, wanting to know if he would again serve as their evaluator. This time, however, she told him that they would start developing the program and the evaluation concurrently. Things worked out well for them initially; given all the interest the project had generated, she was expecting to have even more success the next time.

About eight months later the state released another RFP for a math and ELA initiative. Several of the administrators contacted Thomas and the bureau director to see if they could set up an initial meeting to start to think about continuing the program they had started. A meeting was set up for the director, Thomas, and the thirteen district administrators. The morning of the meeting, the director called Thomas. They had another positive discussion about how excited everyone was and how all the districts were really coming together to again work on this issue.

"I know the administrators and teachers are really looking forward to this," said the director. "I have worked with these districts for almost twenty-five years, and I have never seen as much positive energy as I have seen with this and the success of our last project."

"It has been great to be a part of," said Thomas, "and I think the next project will be even better."

"I agree," she replied. "Oh, before the meeting, let me go onto the state's Web site to make sure that all thirteen districts are eligible to apply for the new grant."

"OK, good idea," said Thomas. "I'll see you at the meeting this afternoon."

Thomas hung up and began to gather the papers on his desk for the meeting. Before he could finish packing his bag, the phone rang again. It was the bureau director. She didn't even give him time to say hello before blurting, "You are not going to believe this."

"What?"

"I just checked the state's Web page, and none of the thirteen districts is eligible to apply for the new grant."

"What? How can that be?"

"All their scores on the last math assessment went up, so they are no longer on the list. Can you believe it?"

"No, I can't," said Thomas. "I mean, it's a good thing that the scores improved, but a bad thing now that we've gotten everyone all enthusiastic about working to improve math."

"And without the grant we just can't afford to do the extensive professional development that we did before," she said.

"What are we going to do?" asked Thomas.

"I don't know," said the director, "but we only have a couple of hours to figure out how we are going to tell the administrators at our meeting this afternoon."

Thomas hung up the phone. What was he going to tell these administrators? How would he ever get them on board with another project in the future? How would they ever be able to build such buy-in and enthusiasm again? Suddenly he felt sick to his stomach. He didn't know what to do.

Thomas and the director broke the news to the administrators: not one of them was eligible to apply for the grant. The administrators were disappointed, but also pleased that their districts had shown improvements in math.

FINAL THOUGHTS

Eventually several districts reverted to being on the SINI list. Thomas never worked with any of them again, however. He finished his degree and took a position at a university three states away. He continued to work with school districts in program evaluation. From his early experience, Thomas learned an important lesson that he carried with him for the rest of his career: success on a project may in fact disqualify those hardworking individuals from continued participation.

KEY CONCEPTS

Award
Benchmarks
Clients
Evaluation matrix
Evaluation plan
External evaluator
Funding agency
Grant writer
Needs assessment
Professional development
Project narrative
Request for proposal (RFP)
Schools in Need of Improvement (SINI)
Stakeholders


DISCUSSION QUESTIONS

1. Review Figure 4.1, depicting the RFP process. Make a list of some of the benefits and challenges you see for schools or other organizations that want to seek such funding to provide additional programming for the people they serve.

2. The situation that Thomas and the bureau director experienced is not uncommon. When projects are a success and project goals are met, often this means that groups are no longer eligible for funding. Analyze the case again and consider what could be done to continue some elements of this project's design, despite the fact that there is no more funding to hire professional consultants to deliver training, and no more stipends for teachers to be trained outside of school.

3. Take a moment to review the Joint Committee on Standards for Educational Evaluation's standards in Chapter Two. What standards do you think Thomas had to make sure that he adhered to? After you have selected those standards, be prepared to support your comments by saying how you think Thomas should have proceeded.

CLASS ACTIVITIES

1. The feelings of self-doubt that Thomas experienced are not uncommon among new evaluators-in-training. Reflect on your past evaluation and research projects. Make a list of some key projects you would want to highlight in a conversation with a prospective client. What are some important or unique tools, methods, or practices that you used on these projects? Also, try to think about some of the challenges you had in working on each project. How did you overcome them? What were the results or consequences of your actions?

2. Pretend you too are interested in consulting work in program evaluation. Brainstorm and create a working list of clients you might consider approaching. As Thomas did in the case study, do some initial research on the possible clients in your area or field. What are some programs or areas in which you could provide your evaluation consulting skills?


3. In the case study, Thomas demonstrated the importance of a detailed evaluation matrix. Based on the evaluation objectives and subobjectives that Thomas developed, and using the template, develop your own matrix for this project.

SUGGESTED READING

Ding, C., & Navarro, V. (2004). An examination of student mathematics learning in elementary and middle schools: A longitudinal look from the US. Studies in Educational Evaluation, 30, 237–253.

Kerney, C. (2005). Inside the mind of a grant reader. Technology and Learning, 25(11), 62–66.

Reese, S. (2005). Grant writing 101. Connecting Education & Careers, 80(4), 24–27.

Vandegrift, J. A., & Dickey, L. (1993). Improving mathematics and science education in Arizona: Recommendations for the Eisenhower Higher Education Program (ERIC Document Reproduction Service No. 365510).


CHAPTER 5

EVALUATION OF A COMMUNITY-BASED MENTOR PROGRAM

A Need for Participatory Evaluation

LEARNING OBJECTIVES

After reading this case study you should be able to

1. Define participatory evaluation and give several examples of how this theory can be applied in practice

2. Describe some of the challenges an external evaluator may face when trying to integrate elements of a participatory approach into an existing evaluation

3. Describe how a participatory approach can assist an external evaluator in improving an evaluation's cultural validity

THE EVALUATOR

Evaluator Stephanie Brothers worked for FCA Consulting, a private evaluation firm in the Midwest. Before starting to work at FCA, Stephanie had earned a master's in educational research and had taken several program evaluation courses as electives. One that Stephanie found particularly interesting was a course on evaluation program theory. It provided an overview of the various data collection methods and the principles that would guide an evaluator practicing that theory. This course provided Stephanie with some alternative approaches to conducting evaluations, particularly approaches that focused on more of a participatory evaluation model. This evaluation theory focuses on stakeholders' or groups' developing and collecting their own data and presenting their own findings for the evaluation.

evaluation program theory A systematic method of collecting, analyzing, and reporting information to determine the worth of a set of activities


participatory evaluation model An evaluation approach whereby those who are being served by the program play a dominant role in shaping the evaluation, its objectives, data collection tools, and reporting of results

After receiving her master's in educational psychology, Stephanie had applied to private evaluation firms around the country and gone on several interviews. All of her interviewers had been very impressed with the amount of course work she had taken geared specifically toward program evaluation—so much so that two firms had immediately offered her a position.

In her new job at FCA, as Stephanie got to know the people in the firm, she was surprised by their various educational backgrounds and former work experiences. She had assumed that everyone working in a program evaluation firm would have a degree in program evaluation. Several coworkers had advanced degrees in such fields as psychology, social work, political science, communication, and technology, and a few were former attorneys, teachers, and school administrators. Although at first Stephanie had been a little concerned that her fellow coworkers did not have the program evaluation background that she had, when working in a team with them she soon realized that they brought to the table a variety of perspectives and experiences from the field of education and their own work. Working as a middle-level evaluator for FCA gave Stephanie solid evaluation experience that complemented her technical training.

In September the company took on a new client: a community-based mentor program for high school students. This program was funded through a three-year grant from the state's department of education and was beginning its second year. Things had not gone well in the first year of the project, however. The project director had partnered with an external evaluator because a rigorous evaluation component that gathered both formative and summative data was required. However, at the end of the first year the program evaluator had failed to collect any data on the project and was unable to submit a summative report to show whether the project was meeting its intended goals and objectives. Not filing a project report annually put the project's funding in serious jeopardy. Following this, the program evaluator resigned, and the project director hired FCA to take over the evaluation for the next two years.


For the project, Stephanie was informed that she would serve as principal evaluator and oversee a team of three other employees. Because this was an educational program, team members who had educational backgrounds were chosen. One member of the team had been a school administrator for thirty years, and the other two were trained researchers. Stephanie still could not help but feel a little apprehensive about working with a team whose members were not all formally trained in program evaluation, but she had confidence in herself and felt deep down that they would be able to do a high-quality job. She also realized how important the evaluation would be, and she was eager to work with the community-based organization.

THE PROGRAM

The purpose of the community-based mentor program was to link volunteers with at-risk high school students. Although the community was considered a small city, it had many of the problems associated with much larger metropolitan areas: many students in the public school district eligible for free or reduced-price lunch, high transience rates among families, high dropout rates among high school students, a large number of school suspensions, and drug trafficking. In addition, a substantial portion of the school population was not meeting state benchmarks on the state standardized measures, placing the district on the Schools in Need of Improvement list.

The goal of the program was to provide a structured after-school environment for at-risk high school students through one-to-one mentoring. Mentors were volunteers from the community coming from a wide variety of backgrounds, occupations, and education levels. Each mentor worked with one student. Mentors had their choice of working with their mentee at the high school facility after school or at other locations. Many mentors, particularly those who were retired, chose to have their mentee come to their home. Mentors were required to meet with their mentee at least three times a week for at least one hour per meeting. In some cases, especially for those mentors who were busy professionals and had a family of their own, mentoring took place on weekends.

Although the specifics of the program, such as the number and duration of mentor-mentee meetings, were explicitly stated in the program description, Stephanie noticed when she reviewed the program documents that other aspects of the program, such as the types and quality of the activities mentors should be doing with students, were not specified. It appeared that mentors could pretty much do whatever activities they wanted with the student they were working with.

THE EVALUATION PLAN

mixed-methods approach A data collection model that incorporates the use of both quantitative and qualitative data

evaluation capacity The development of evaluation data collection tools and materials

The evaluation team decided that a mixed-methods approach would be best. The mixed-methods approach is a methodology used in research and also in program evaluation, whereby the evaluators collect both quantitative and qualitative data from program participants. Box 5.1 presents the complete list of program goals; the evaluators would determine whether these goals were being met.

Shortly after their examination of all project documents and materials, Stephanie and the evaluation team had another meeting. They invited Jonathan Post, the head of the community organization that the school district had partnered with for the mentor program. The evaluators had decided that the purpose of the meeting was to discuss the project and the need to develop evaluation tools for data collection. Developing such tools is often referred to as establishing evaluation capacity (see Box 5.2). In addition, whether or not the program could be delivered with fidelity also had to be considered.

BOX 5.1.

Program Goals

1. To provide each eligible student with access to a community mentor

2. To work with mentors and provide them with quality training

3. To increase students' academic achievement in school

4. To decrease incidents of student violence and behavioral problems at school and in the community

5. To increase the number of at-risk students graduating from high school


BOX 5.2.

What Is Evaluation Capacity?

Evaluation capacity is a term commonly used in evaluation. Although it has come to mean many different things to many different people, typically it is used by evaluators to describe the development of the different tools needed to collect data. It is not uncommon for evaluators to use what are referred to as preestablished tools or instruments. Preestablished instruments typically have been developed by someone other than the researcher or evaluator. Another characteristic common among preestablished instruments is that they tend to be standardized. A standardized instrument possesses the following criteria:

It includes a fixed set of questions or stimuli. It is given in a fixed time frame under similar conditions with a fixed set of instructions and identified responses. It is created to measure specific outcomes and is subjected to extensive research and development and review. And performance on the instrument can be compared to a referent such as a norm group, a standard or criterion, or an individual's own performance [on a norm reference test, a criterion reference test, or a self-referenced test]. (Lodico, Spaulding, & Voegtle, 2006, p. 67)

In most cases, preestablished measures have received extensive testing for reliability and validation during their design and development phases, prior to being marketed and disseminated. Most preestablished measures used in education are developed for use by professionals other than educational researchers or program evaluators. They are used by school administrators, general and special education teachers, school psychologists and counselors, and the like. For many projects, however, these tools, the data collected by evaluators, or both may be used to address various evaluation objectives.


TABLE 5.1. Evaluation Matrix for the Mentor Program

Evaluation Objectives | Stakeholder Group | Tools, Instruments, or Types of Data | Timeline and Design for Data Collection | Formative or Summative Data | Status
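Because this matrix adds columns flagging each activity as formative or summative and tracking its status, it can also double as a reporting schedule. The sketch below, in Python, shows the idea; the rows are invented for illustration rather than taken from the mentor-program matrix.

# Illustrative matrix rows with formative/summative flags; the rows are invented.
matrix_rows = [
    {"objective": "Provide each eligible student with a community mentor",
     "data": "Mentor-mentee meeting logs", "type": "formative", "status": "in progress"},
    {"objective": "Increase students' academic achievement in school",
     "data": "School district grade and attendance records", "type": "summative", "status": "not started"},
    {"objective": "Provide mentors with quality training",
     "data": "Mentor focus group", "type": "formative", "status": "completed"},
]

formative = [row for row in matrix_rows if row["type"] == "formative"]
summative = [row for row in matrix_rows if row["type"] == "summative"]

print(len(formative), "formative activities to report back to the client during the project")
print(len(summative), "summative activities reserved for the final evaluation report")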

Before they met with Jonathan, Stephanie sat down with her team and, using an evaluation matrix, planned some of the activities. Presented in Table 5.1 is the matrix she and Jonathan used. Through both her course work and her on-the-job training experience, Stephanie had learned that a thorough evaluation plan can be very helpful for both the evaluator and the client.

As the team began to lay out the evaluation activities, Stephanie soon realized that not all of her team members believed as much as she did that such detailed planning of the evaluation—including which data would be collected when—was important. In fact, one member said, "We are wasting a lot of precious time laying out every detail of this evaluation; we should be out there collecting data—that's what evaluation is all about." Stephanie said she agreed with her team member that evaluation was about collecting data and that they would soon be doing so; but they also had to realize that this plan not only was for them but also would serve as a tool or set of talking points to open up a dialogue with their client. Grudgingly, her teammates agreed.

On the day of the meeting Jonathan arrived on time, and so did Stephanie's team. After the introductions, Stephanie began by handing out the latest draft of the evaluation matrix to everyone, saying that the team had reviewed the project and come up with a plan.


"Our next step," Stephanie added, "will be to start to develop our instruments and tools for collecting data. We call this building evaluation capacity."

"Tools," said Jonathan, wrinkling his forehead.

Stephanie knew that often evaluators used language or terms that were unfamiliar to clients. "Surveys, interview protocols—these are tools that evaluators use to collect data," she explained. She pulled a couple of surveys from a previous project and laid them out on the table in front of Jonathan.

Jonathan put on his reading glasses and examined the documents. Then he reached into his briefcase, pulled out a stack of papers, and handed them to Stephanie.

"And what is this?" she asked.

"Survey data that we have collected from all the mentors, the students they are working with, and their family members or guardians," said Jonathan.

"Oh." Stephanie felt her face begin to contort as she flipped through the papers.

"We decided to collect some of the data ourselves to make it easier on whoever stepped in to do the evaluation," said Jonathan.

Stephanie handed the surveys to the other members of the team, who began to rifle through them. "That's great. I am sure that we can put these to good use."

For the rest of their time together, Stephanie went through the remainder of the evaluation plan and explained it to Jonathan. She told him their evaluation team would be setting up a focus group of mentors to interview. She explained that a focus group is a smaller sample of people, often with similar experiences, who are interviewed in a group setting. She further explained that to ensure that all the program goals were properly addressed, the evaluation team would also be collecting data from the students' schools. She noted that the team would work to get school district permission to obtain access to this sensitive data.

At meeting's end, Jonathan thanked them for working with the program and said he looked forward to it. The team thanked him for coming, and Stephanie saw him out. After closing the door, Stephanie turned to the other members of her team.


focus group A data collection approach similar to one-to-one interviews, except with a small number of people participating together


One of them said, "We can't possibly use those surveys and data. Did you look at them? The scales they used make absolutely no sense whatsoever, and the items have nothing to do with evaluating the project's goals and objectives."

"I agree," said another member.

"Circular file," said the third. She pointed to the trash can in the far corner of the room. "Data should only be collected by professional researchers who know what they are doing."

Stephanie could feel her stomach tensing up. "I agree, but what am I supposed to tell Jonathan?"

"Tell him the truth," said the former administrator. "Tell him the data isn't valid or rigorously collected, and we can't use it."

Stephanie joined the others at the conference table and slumped back into her chair. It was true. The data had minimal if any value. And Stephanie knew they had little use for the data in their evaluation plan. But she also knew that not using it could spell potential disaster for an evaluation project that had already gotten off to a shaky start. Stephanie opened the folder of completed surveys and started to sort through them again. Is there anything we can use? she asked herself. Anything at all?

Considering the bad experience the client and the participating mentors had had with the previous evaluator and the potential harm to the program that the past evaluator's actions might have caused, Stephanie realized that building trust was very important. She convinced the members of her team to use the data collected by the client in their evaluation report. They noted in the report that the survey and data were collected by the participants. Seeing these used in the report and presented at a later meeting built great confidence and trust. The mentors felt that the evaluators were interested in what they had to say about the program. They also realized that their survey didn't exactly address some of the questions or objectives of the evaluation. Recognizing this, Jonathan worked with Stephanie and her team to develop a more rigorous survey, specifically designed to address some of the project's evaluation objectives and the mentors' needs and questions. The mentors now trusted the evaluation team, and the next time around they allowed the team to collect the data.


SUMMARY OF EVALUATION ACTIVITIES AND FINDINGS

The evaluator not only used the evaluation matrix to help guide the data collection efforts but also incorporated the needs and perceptions of the client. Despite her careful work, Stephanie faced a serious challenge when working with the client using a participatory evaluation approach. The data collected by the stakeholders for the community-based mentor program was not as valid or reliable as the evaluation team had hoped. Recognizing this, Stephanie, the lead evaluator on the team, had to carefully show team members that it was necessary to keep the program's stakeholders involved in the data collection and evaluation process so that the final results of their evaluation report would be used for programmatic refinement. At the same time, she had to convey to the client that further, more valid data needed to be collected, despite all the effort and work that had occurred thus far on the project.

FINAL THOUGHTS

In this case study, Stephanie's past course experience provided her with a perception of program evaluation that was slightly different from that of her colleagues. Despite the fact that the data collected by the client might not have had the rigor that data collected by the evaluators would have had, Stephanie was able to recognize the importance of using the data for the evaluation report. By including the data collected by the client, the evaluation team was able to begin to rebuild a sense of trust with the client that had been fractured because of past experiences.

KEY CONCEPTS

Evaluation capacity
Evaluation program theory
Focus group
Mixed-methods approach
Participatory evaluation model


DISCUSSION QUESTIONS

1. As a professional evaluator, you will probably find yourself working with people who have very different backgrounds. One of the wonderful things about program evaluation is that it attracts a wide range of professionals. Take a few minutes to list some of the advantages (and perhaps some disadvantages) you can think of to working on an evaluation team composed of people with such varied experiences. What skills, talents, and past experiences would you be able to bring to the project, and how might you establish an evaluation framework that would work to incorporate both your skills and the skills of members of your team?

2. Unlike what is required for teachers, administrators, school counselors, and school psychologists, there is no official certification by the state or federal government for program evaluators. In essence, anyone can call himself or herself a program evaluator and practice this craft. Do you think there should be a certification process for program evaluators? Why or why not? Note your position and list a few comments that support your beliefs on the subject for a class discussion.

3. Read the Altschuld (1999) article in the "Suggested Reading" section that pertains specifically to certification for program evaluators. After reading it, reflect on the article. Did anything in it change your opinion about the issues? If so, please be prepared to discuss why in class.

4. What ethical challenges do you think Stephanie and the other evaluators in this case had to address? If you were one of the evaluators, how would you have addressed the ethical issues that you identified?

CLASS ACTIVITIES

1. Conduct a literature search on participatory evaluation. You may also want to read the items in the "Suggested Reading" section that pertain to this. Based on your reading, what should Stephanie have done with the data that the client had collected?


2. It is never too early to start preparing for the interview process. Whether you have already had a job interview for a program evaluation position or not, make a list of the different things you might bring to an interview. These might include, for example, past experiences in which you performed job-related activities (such as data entry) that might be valuable to an employer.

3. Surf the Web to "visit" several different colleges and universities and review the various courses that make up their program evaluation degrees.

4. Surf the Web and look at newspapers and other media to find program evaluation positions. Keep a running list of the different skills that these positions require. Have a discussion in class about where evaluators-in-training obtain these particular skills.

SUGGESTED READING

Altschuld, J. W. (1999). The case for a voluntary system for credentialing evaluators. American Journal of Evaluation, 20, 507–517.

Chaudary, I. A., & Imran, S. (2012). Exploring action research as an approach to interactive (participatory) evaluation.

Cousins, J. B., & Whitmore, E. (1998). Framing participatory evaluation. New Directions for Evaluation, 1998(80), 5–23.

O'Sullivan, R. O. (2012). Collaborative evaluation within a framework of stakeholder-oriented evaluation approaches. Evaluation and Program Planning, 35, 518–522.


CHAPTER 6

TEACHER CANDIDATES INTEGRATING TECHNOLOGY INTO THEIR STUDENT TEACHING EXPERIENCE

An Objectives-Based Approach to Evaluation

LEARNING OBJECTIVES

After reading this case study you should be able to

1. Understand how to collect and analyze alternative types of data when conducting an evaluation

2. Understand how program evaluators create self-developed tools in certain situations

3. Understand portfolios, how they are used in education for both instruction and assessment, and some of the noted benefits of and challenges with their use

4. Understand some of the various challenges evaluators often face in collecting data, particularly alternative or nontraditional forms of data

THE EVALUATORS

Jason Simpson and Daphne Stevenson were two professional evaluators who had worked in program evaluation in both higher education settings and the areas of teacher candidate programs and portfolios. They were very familiar with the different methods used by colleges and universities to train teacher candidates for the classroom.

teacher candidate A student studying to become a teacher

portfolios Collections of materials used for assessments through which an individual shows what he or she knows as well as what he or she can do


THE PROGRAM

Jason and Daphne were evaluating a three-year program conducted by a local university—a major teacher training institution, training approximately one-third of the state's teachers. As part of this new initiative, the university wanted to infuse technology into its teacher education courses. One of the university's program goals was to increase the exposure to technology that teacher candidates were getting in their course work, in the hope that this technology-rich experience would translate into teacher candidates' increased use of technology in their field placements and student teaching settings. In addition, the university hoped that teacher candidates' access to technology would in turn make an impact on the host teachers who supervised them in their field placements and student teaching settings.

To make this initiative possible, the university purchased (through a grant) laptop computers, LCD projectors, digital cameras, and software. As part of their course work, teacher candidates were supposed to check out this equipment (much as they would check out books from a library) and bring it into their student teaching settings.

THE EVALUATION PLAN

For this particular project the evaluation was underpinned by several objectives, presented in Box 6.1.

BOX 6.1.

Evaluation Objectives

1. To document, wherever possible, an increase in knowledge and use of and access to technology for university faculty in the teacher candidate program

2. To document, wherever possible, an increase in knowledge and use of and access to technology for teacher candidates enrolled in the university's teacher education program

3. To document an increase in the use and types of technology teacher candidates are introducing into their field placements and student teaching settings

4. To document the impact of increased access to technology for teacher candidates in their field placements and student teaching settings


Jason and Daphne met with the project director to discuss the evaluation. The project director handed them the evaluation objectives for the discussion.

"As you can see," said the director, "we have four main evaluation objectives that are guiding our project."

Jason and Daphne reviewed the document and began to take down some notes.

"Now, we have an internal evaluator who has been conducting most of our evaluation for us," said the director. "But according to our grant funder, we also need to hire an external evaluator to conduct some of the evaluation as well."

"We have worked with many internal evaluators before," said Jason.

"Wonderful," said the director. "We would like you to focus on evaluation objectives 3 and 4, mainly 3."

Daphne and Jason nodded and reread the last two objectives.

Daphne said, "For objective 3, to document the use and types of technology, we will probably want to get a list of the host teachers and meet with them and perhaps interview them . . . "

"We might even want to create a survey as well and give that to all the host teachers, too," added Jason.

"Well . . . " said the project director. "That might be a problem."

"Why?" both Daphne and Jason asked at the same time.

"Well, you have to realize the very important role the host teachers play here in our teacher education program at the university. We rely on them tremendously to have our teacher candidates come into their classroom for fifteen weeks and do their student teaching . . . "

"We understand," said Daphne. "Both Jason and I have worked in higher education and have conducted evaluations in higher ed."

"So you know how vital our host teachers are?"

"Yes, we certainly do."

"Good," said the director. She paused for a moment, then said, "Well, the person at the university who oversees the whole student teaching component is very concerned that the evaluation of our technology project and all the data collection that will have to be done will become too much of an inconvenience for our host teachers—and we wouldn't want to lose any of them."


"I see," said Jason. "So what does this mean in terms of conducting the evaluation?"

"Well, it means that you won't be able to access the student teaching sites and interview or survey the host teachers."

"What about the teacher candidates? Can we interview them?" asked Daphne.

"Yes, you may," said the director. "However, some of the students have graduated, and others have already left for the summer."

Jason asked, "Might we be able to get some contact information for them at home, in case we want to interview them over the phone?"

"We might be able to," said the director. "I would have to ask the dean to see if there are any legal issues regarding the confidentiality of the contact information."

"Is there any other data that would help us evaluate the teacher candidates' experience and how they went about using technology in their student teaching settings and the types of technology they used?" Daphne asked. She hadn't expected the evaluation to take such a turn.

"Not unless their teacher candidate portfolios would help."

"Teacher candidate portfolios?" Again they both responded in unison.

The project director went on to explain that during the grant project each teacher candidate had to develop a teacher candidate portfolio and maintain that portfolio throughout the program (see Box 6.2). As part of creating the portfolio, each candidate had to reflect on his or her student teaching experiences and show artifacts or documents that supported his or her ability to meet the twelve competencies required by the program.

BOX 6.2. Overview of Portfolios in Education and Teacher Training

Portfolios have been a cornerstone in education, particularly over the course of the last several decades (Spaulding & Straut, 2006). During the late 1980s and early 1990s portfolios played a pivotal role in the authentic assessment movement (Wiggins, 1992, 1998). In addition, studies have examined portfolios for their effects on student learning in a wide variety of content areas, such as science (Roth, 1994), as well as math, social studies, and literacy.


Although a substantial portion of the literature on portfolios has focused on their use in instruction and assessment of student learning, there is also a main thrust in the literature whereby portfolios are used in the training of teacher candidates. An examination of the literature on portfolios reveals that much of the work focuses specifically on the use of teacher candidate portfolios in teacher preparation programs (Barton & Collins, 1997; Klecker, 2000; Morgan, 1999; Shannon & Boll, 1996). In this context portfolios have played a variety of roles, from assisting new prospective teachers in obtaining employment to serving among the main assessment tools for verifying program completion and student readiness for graduation (Morgan, 1999).

With the increased emphasis now being placed on technology and integration of technology into classroom instruction, portfolios used in teacher preparation programs have also begun to incorporate technology. A review of the literature on electronic portfolios, however, reveals much of the research in this area to be opinion based rather than empirical. Advocates of e-portfolios note many benefits of their use, ranging from increased creativity for their creators to increased "interactivity" among stakeholders involved in teacher preparation practices. These stakeholders consist of the teacher candidate, faculty from the teacher education program, and host teachers from the field placement experiences (Spaulding, Straut, Wright, & Cakar, 2006).

Using portfolios as collections of artifacts to demonstrate competencies of teacher candidates has been a longstanding tradition in many of this country's finest teacher preparation institutions. In more recent times, however, these portfolios and the processes associated with them have come under increased scrutiny as the institutions of higher education have worked to incorporate them.

"Can I see one of these portfolios?" Jason asked.

"Certainly." The director turned on a nearby computer, typed a few words on the keyboard, then said, "This is a senior who just graduated from the program."


"They are electronic portfolios?" asked Daphne.

"Yes, another requirement of the program is that all teacher candidates demonstrate that they can create an e-portfolio." The director clicked on a few links and began to move through the student's portfolio.

"Have you always had portfolios for your teacher education program?" asked Daphne.

"Yes; we used to have paper portfolios, but as part of our technology grant project we moved the system to e-portfolios. It allows the faculty to review the students' portfolios much more easily and provide feedback as the students are working on assembling the portfolio." The director kept scrolling through the e-portfolio. Daphne and Jason could easily see that the e-portfolio had lots of examples and artifacts showing different ways the teacher candidate had worked to integrate technology into her field placements.

"How many e-portfolios do you have?" asked Jason.

The project director thought for a few moments. "Over the course of the last three years of the project, I would say we have collected about three hundred portfolios."

"Three hundred?" asked Daphne.

"Yes, I would say so, give or take a few."

Daphne looked at Jason with an overwhelmed expression.

"Is it enough data?" asked the director.

"Oh, it's more than enough data, I would think," Daphne finally said with a smile. "But how are we going to go about analyzing it all? That's the real question."

Daphne and Jason found that the portfolios became a rich source of data for the third and fourth evaluation objectives. As they began to review the portfolios, they saw patterns emerging among the artifacts the teacher candidates had included to document their technology proficiency. Daphne and Jason began a running list of these patterns and eventually created a checklist of possible ways in which student teachers could have integrated technology into their student teaching classrooms. Exhibit 6.1 presents the checklist they developed.


EXHIBIT 6.1. Technology Use and Integration Checklist for Portfolio Analysis

□ Candidate shows evidence of being able to use technology. For example, the candidate
  • Has created an e-portfolio using PowerPoint.
  • Has digital or video artifacts in the portfolio. These may be digital pictures of field placement classrooms or students studying or working together. Note: These are not necessarily examples in which the candidate or his or her students are using technology.

□ Candidate shows evidence of integrating technology into instruction for didactic teaching purposes. An example of this is the candidate’s introducing PowerPoint to the setting and delivering direct instruction of a lesson through the use of this technology.

□ Candidate shows evidence of integrating technology into instruction for inquiry-based teaching or for student collaboration purposes, whereby students are asking questions and using technology to collect data to answer them (for example, having students conduct “research”).

□ Candidate shows integration of technology in field placement classrooms and describes how technology or technology integration was used to address a particular issue or problem in the school or classroom as it related to instruction.

□ Candidate shows integration of assistive technology in field placement classrooms to meet students’ special learning needs.

□ Candidate shows evidence of a product that he or she has developed using technology. This may be an entirely new lesson that integrates technology. In this situation students (not the instructor) are using the technology.

□ Candidate shows evidence of working collaboratively with host teachers to integrate technology into field placement classrooms.

□ Candidate shows evidence of working collaboratively with other staff, such as technology support personnel or administrators, to address technology issues or integrate technology into the field placement classrooms.

□ Candidate shows evidence of student work that uses technology, and assesses the student work to determine whether the student has reached the learning objectives. If the student has not, then the candidate provides evidence that supports refinement of instructional practices or a change in the use of technology.


SUMMARY OF EVALUATION ACTIVITIES AND FINDINGS

Daphne and Jason’s evaluation plan was a success. In analyzing the portfolios using the checklist, they discovered that 95 percent of the teacher candidates provided artifacts or examples in their portfolio from each of the categories on the technology checklist. The evaluators presented this information to the client. As part of their presentation, the evaluation team decided to use examples of the teacher candidates’ work to support their findings. The client was pleased with both the presentation and the wide variety of technology use that teacher candidates were able to demonstrate in the portfolios. With both the work that the university’s internal evaluator had undertaken and Daphne and Jason’s contribution, the program director was able to provide the funder with sufficient evidence to show that the program had successfully met all of its evaluation objectives.
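Readers who want to see how a checklist tally of this kind can be turned into the percentage reported above may find a small illustration helpful. The following sketch, written in Python, assumes that each portfolio review has been recorded as a simple yes/no judgment for every checklist category; the category labels and the sample records are hypothetical and are not drawn from the actual evaluation.

```python
# Illustrative sketch: tallying portfolio reviews against a checklist.
# The category labels and sample records are hypothetical, not data
# from the case study.

CATEGORIES = [
    "uses technology",
    "didactic integration",
    "inquiry or collaboration",
    "problem-focused integration",
    "assistive technology",
    "technology product",
    "collaboration with host teacher",
    "collaboration with other staff",
    "assessment of student work with technology",
]

def percent_with_evidence(reviews):
    """Return the percentage of portfolios showing evidence in each category.

    `reviews` is a list of dicts, one per portfolio, mapping a category
    label to True (evidence found) or False (no evidence found).
    """
    totals = {category: 0 for category in CATEGORIES}
    for review in reviews:
        for category in CATEGORIES:
            if review.get(category, False):
                totals[category] += 1
    n = len(reviews)
    return {category: 100 * count / n for category, count in totals.items()}

if __name__ == "__main__":
    # Two made-up reviews, just to show the expected input shape.
    sample_reviews = [
        {category: True for category in CATEGORIES},
        {**{category: True for category in CATEGORIES}, "assistive technology": False},
    ]
    for category, pct in percent_with_evidence(sample_reviews).items():
        print(f"{category}: {pct:.0f}% of portfolios show evidence")
```

An evaluator could, of course, record the same yes/no judgments in a spreadsheet and compute the percentages there; the point is simply that each checklist category becomes a countable indicator that can be summarized across several hundred portfolios.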

FINAL THOUGHTS

Many times, evaluators like Daphne and Jason find themselves in situations in which they cannot collect the data that they would prefer to gather. In such situations, evaluators often have to step back and think about other data they can collect using alternative approaches. In this case study, Daphne and Jason respected the wish of the project director not to overtax host teachers with an array of surveys and interviews, and instead examined the setting and found that the teacher candidates’ portfolios supplied a wealth of information to meet their evaluation objectives.

KEY CONCEPTS

Portfolios
Teacher candidates

DISCUSSION QUESTIONS

1. Sometimes evaluators are faced with challenges in collecting the kinds of data—and the quality and quantity of data—that they would like. In this case, Daphne and Jason were faced with the fact that they were not going to have access to the host teachers. Review the case again and be prepared to enter into a discussion about how you would go about dealing with this dilemma if you were the evaluator.

2. In examining the portfolios, Daphne and Jason looked for different ways that teacher candidates documented the integration of technology into their student teaching classrooms. Can you think of any other ways that they might have analyzed the portfolios? If so, be prepared to share your ideas in a class discussion.

3. What ethical considerations do you think the two evaluators in the case had to address? How might these evaluators have gone about addressing these ethical challenges?

CLASS ACTIVITIES

1. Contrary to what many may think, not all work in program evaluation in education focuses directly on programs in public schools, teacher instruction, and student learning and behavioral outcomes. The ways in which we train our teachers of tomorrow to use technology or a host of other instructional approaches are also commonly examined, as in the program in this case study. Therefore, in your role as an evaluator who may one day work in the field of education, it is important that you understand fundamentally how teacher education programs are designed. Take a look at three or four of your local colleges’ and universities’ teacher education programs on the Web. Note some of the commonalities you see across programs. What are some differences? Be sure to note the number of credits that constitute each program and the different programs available: childhood education, secondary education, special education, and so on. Also pay particular attention to the different field placements and student teaching opportunities that each of these institutions provides. Be prepared to present your findings in class and have a discussion with other class members about how teachers are trained for the classroom.


2. Find a teacher who uses technology in his or her classroom. Set up a time to go in and conduct several observations of the teacher and the class using the technology use and integration checklist developed by Jason and Daphne. Did the observation protocol hold true? Were there ways in which the teacher integrated technology that were picked up by the protocol? Were there any ways in which the teacher integrated technology that were not on the protocol? Be ready to present your observations and findings in class.

SUGGESTED READING

Mouza, C. (2002–2003). Learning to teach with new technology: Implications for professional development. Journal of Research on Technology in Education, 35, 272–289.

Page, M. S. (2002). Technology-enriched classrooms: Effects on students of low socioeconomic status. Journal of Research on Technology in Education, 34, 389–409.

Slagter van Tryon, P. J., & Stein Schwartz, C. (2012). A pre-service teacher training model with instructional technology graduate students as peer coaches to elementary pre-service teachers. TechTrends, 56(6), 30–36.


CHAPTER 7

EVALUATION OF A PROFESSIONAL DEVELOPMENT TECHNOLOGY PROJECT IN A LOW-PERFORMING SCHOOL DISTRICT

Ex Post Facto Evaluation

LEARNING OBJECTIVES

After reading this case study you should be able to

1. Understand benchmarking and be able to generate benchmarks for an evaluation project
2. Define what a logic model does and give its main components
3. Discuss some of the benefits of collecting evaluation data while the project is occurring and some of the challenges of performing an evaluation after a project has ended

THE EVALUATOR

Samantha Brown had worked for the Johnstown School District for ten years. Like many of the school staff, Sam—or “Miss Sam,” as everyone at the school called her—wore many hats. Her official title was director of special projects. One of her main responsibilities was to pursue and oversee special projects that were outside of the district’s normal curriculum and funding. She spent most of her time trying to obtain externally funded grant projects for the district. In most cases, she pursued requests for proposals (RFPs) from the state and federal governments and other related funding agencies, and she wrote grant proposals in the hope of being awarded these additional monies to support such initiatives. Although Sam did not have a degree in program evaluation or grant writing, she did have an undergraduate degree in communication and a master’s in English. Sam had strong communication and writing skills. One summer she had completed grant writing workshops at the local university, and she had also attended grant writing and program planning conferences. Accordingly, although Sam did not have formal training in program evaluation, she was familiar with it because of her grant writing work.

THE PROGRAM

The district had focused significant time and energy on the area of technology, particularly in regard to increasing teachers’ ability to successfully integrate technology into the classroom. In fact, the district had spent the last three years developing the technology infrastructure at their three elementary schools, two middle schools, and high school. Although this development had been very costly, the state technology grants they had managed to acquire provided them with the monies to wire all their school buildings for the Internet. In addition, the district had been able to build three computer labs at the high school, a computer lab and cart at each of the middle schools, and two mobile computer labs at each of the three elementary schools. The mobile labs consisted of carts holding twenty wireless laptop computers, a teacher computer, an LCD projector, and a DVD player.

Along with setting up the hardware and software for the buildings, each year the district had also used the grant monies to hire several consultants to deliver a variety of professional development workshops for teachers and other appropriate staff members. Because of the wide range of computer abilities among teachers and staff, workshops covered topics from basic computer skills and knowledge to more advanced computer skills, such as integrating technology into teachers’ instructional practices and using technology to meet all students’ learning needs. As part of this process, the district had also made evaluation of the project one of the job responsibilities of the district’s technology coordinator. During the three years of the technology project, however, there had been three different technology coordinators, and the position was now vacant.

One day Sam received a call from the superintendent. He informed Sam that although he was very happy with what the technology project had done for the district, there had been little in the way of program evaluation for the project. In fact, he had just received a phone call from the technology coordinator at the state’s department of education, who told him that the program was supposed to have an annual program evaluation and that the state had never received an evaluation report during the project’s three years. The coordinator had informed the superintendent that the Johnstown School District would be ineligible to apply for future RFPs in technology if the district didn’t submit an evaluation report that encompassed the last three years’ work within thirty days. The superintendent asked Sam to design and carry out an evaluation for the district’s technology program before the deadline set by the state. Sam told the superintendent that she would try her best. He thanked Sam for all her hard work in securing the external funds, but reminded her that if the district didn’t meet this challenge, it would not be able to continue with its technology initiative.

Unsure where to start, Sam decided that her first step would be to gather all the data that the previous technology coordinators had collected. Next she would review that data, compare it to the main goals of the grant, and then determine what additional data she would need to write the project’s summative evaluation report. Sam was given access to the office of the former technology coordinator. Going through the file cabinets, she was pleased to come across a large binder marked “DATA FOR TECHNOLOGY GRANT.” Things are looking up, Sam told herself. We might make the state’s deadline after all. But her hopes were quickly dashed: when she opened the binder, it was empty.

THE EVALUATION PLAN

benchmarks: Specific outcomes that define the success or worth of a program

Later, back in her office, Sam opened her files for the project and found three documents. The first was a listing of the intended benchmarks the district had originally proposed to meet. Benchmarks are developed to help gauge whether the program, incentive, or activity is producing the results or outcomes deemed necessary by a group. When performance is being examined, benchmarks are generally put in place to allow for comparisons across samples of people being studied (Mathison, 2005). Table 7.1 presents the benchmarks for Sam’s evaluation assignment. In addition, Sam came across a logic model that she and others had created when writing the initial grant. A logic model is an organizer used by evaluators to think about, collect, and manage different kinds of data. Table 7.2 presents the logic model Sam found. Box 7.1 includes an overview of logic models in general.

Given the time constraints, Sam realized that collecting data from multiple sources (for example, through surveys, interviews, observations, and document analyses) was not going to be possible.

TABLE 7.1. The District’s Technology Benchmarks

Benchmark 1: 100 percent of students have access to technology (such as computers) on a daily basis.

Benchmark 2: 100 percent of students have a basic understanding of computer functions and applications.

Benchmark 3: 100 percent of students have the opportunity to participate in technology-rich learning environments (such as the classroom) in which technology is being used to both deliver and drive instruction.

Benchmark 4: 100 percent of students have the opportunity to participate in student-centered or student-directed projects and guide their own learning.

Benchmark 5: 100 percent of students have the skills and opportunity to work with and train other students in using technology.

Benchmark 6: 100 percent of students are able to develop technology-rich activities, lessons, and products and can demonstrate how these components meet the state’s learning standards.


TABLE 7.2. Overview of the Logic Model Guiding the Project

Evaluation Logic Model Component: Purpose of Evaluation Activities

Activities: To document all activities associated with the project. This includes documenting the number of technology workshops for teachers, the types of workshops, the content covered, and the number of workshop attendees.

Outputs of activities: To document teacher perceptions of the technology workshops, for example, teacher satisfaction with workshops, new knowledge gained, and plans for implementation.

Intermediate outcomes: To document teachers’ changes in practices associated with technology integration and any immediate outcomes associated with improved student classroom behavior or learning.

End outcomes: To document student academic achievement as a direct result of technology integration into the classroom.

BOX 7.1. Overview of Logic Models

Increasingly popular among program evaluators in recent times, the idea of using logic models has been around since their introduction in the 1960s with the work of Edward Suchman (1967) and others (cited in Rogers, 2005). Although logic models are typically displayed using diagrams or flowcharts, Rogers notes that they can also be portrayed through the use of narratives. More important, program evaluators should recognize that logic models can be developed either before program implementation or after completion of activities—the latter in an ex post facto (after the fact) evaluation. According to Rogers (2005), critics of logic models have noted that those focusing on certain processes and outcomes may in fact limit the evaluator’s ability to find and document other unanticipated points of the program and unexpected outcomes. In addition, logic models are not exclusively designed by evaluators. In fact, many evaluators benefit from collaborating with their clients, stakeholders, or both to design logic models.

logic models: Templates or blueprints that show the relationships between activities and the outcomes of those activities

ex post facto (after the fact) evaluation: An evaluation that takes place after the program has already occurred


She decided that a well-designed survey administered to all teachers in the district would be the only methodology that would collect a wide range of data in a timely fashion. But would she be able to develop a survey instrument that would allow her to capture all the data necessary to tie back to and meet the needs of the various sections of the logic model? Using the logic model as her guide, Sam developed survey items for each of the logic model components (for example, outputs of activities). Sam found that the logic model provided her with a framework to ensure that she was gathering the wide range of information she needed from participants to ultimately address all of the evaluation objectives.

Overall, despite the lack of annual evaluations, it appeared that the district had met most of the benchmarks. The evaluation also revealed that most teachers and staff were comfortable with basic computer skills and wanted to work on more specific aspects of integrating technology into the classroom. In addition, teachers wanted more professional development in using assistive technology to meet the learning needs not only of students with special needs but also of all the learners in their building.

Many times people in certain jobs find themselves having to conduct an evaluation of a program after the program has finished. As in Sam’s case, often they find that past evaluations of the program have been less than ideal. In this instance, however, Sam was able to draw on her experience with grant writing and her familiarity with program evaluation. By fully examining the situation and understanding the program goals and evaluation objectives, she was able to develop an evaluation plan that could be executed quickly and efficiently to obtain the data required by the funding agency. Other, less experienced evaluators might not have been so successful, particularly if they had tried to implement an evaluation plan that did not take into consideration the constraints under which the data collection was taking place.
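As noted above, Sam used the logic model as her guide when drafting survey items. To make that idea a bit more concrete, here is a minimal sketch in Python. It assumes a simple mapping from each logic model component in Table 7.2 to a handful of draft survey items; the item wording is invented for illustration and is not taken from Sam's actual survey.

```python
# Illustrative sketch: using logic model components (Table 7.2) to organize
# draft survey items. The item wording is hypothetical, not from the case.

logic_model_items = {
    "Activities": [
        "How many district technology workshops did you attend this year?",
        "Which workshop topics did you attend (check all that apply)?",
    ],
    "Outputs of activities": [
        "How satisfied were you with the workshops you attended? (1-5)",
        "Do you plan to apply what you learned in your classroom? (Yes/No)",
    ],
    "Intermediate outcomes": [
        "How often do you now integrate technology into instruction?",
        "Have you noticed changes in student engagement since integrating technology?",
    ],
    "End outcomes": [
        "Have you observed changes in student achievement that you attribute to technology use?",
    ],
}

def print_survey_blueprint(items_by_component):
    """Print numbered survey items grouped by logic model component,
    so it is easy to confirm that every component is covered."""
    number = 1
    for component, items in items_by_component.items():
        print(f"\n{component}")
        for item in items:
            print(f"  {number}. {item}")
            number += 1

if __name__ == "__main__":
    print_survey_blueprint(logic_model_items)
```

Organizing draft items this way makes coverage easy to check: if a component has no items under it, the survey cannot speak to that part of the logic model.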

SUMMARY OF EVALUATION ACTIVITIES AND FINDINGS

When the next technology RFP came out from the state’s department of education, Sam went to work, writing a grant proposal that would help the district purchase additional computers for the buildings and provide professional development for working with assistive technology. The district was awarded $400,000 for three years to increase the use of technology in an effort to decrease the gap in performance on state assessments between general education and special education students. In her grant proposal, Sam made sure to state that the district would hire an external evaluator to perform the evaluation on an annual basis.

FINAL THOUGHTS

In the end Sam was able to take a bad situation and, using her knowledge of research and program evaluation methods, salvage the program and gain future funding. From this experience Sam learned a lot about herself as an evaluator and about the importance of program evaluation in relation to continued funding of a program. When working with programs after that, Sam was always careful to make sure the program evaluator was collecting data throughout the course of the project, and she required her evaluators to submit quarterly or biannual reports that not only provided project directors with formative data but also ensured that the necessary data was getting collected. Sam made certain there would never again be an empty binder waiting for her.

KEY CONCEPTS

Benchmarks
Ex post facto (after the fact) evaluation
Logic models

DISCUSSION QUESTIONS

1. In examining the program’s structure, what do you think might have been some initial benefits of the mobile computer labs over the traditional computer labs?

2. Sam got herself into quite a predicament by relying on the district’s technology coordinator to serve as the project’s main internal evaluator. Knowing what you do about the role and responsibilities of an external evaluator compared with those of an internal evaluator, discuss some of the pros and cons for each in regard to accurately evaluating the program in this particular situation. What are some things Sam could have done during the implementation of the program that could have resolved some of the challenges she faced?

3. Reexamine the list of the district’s technology benchmarks in Table 7.1. Be prepared to discuss how you think these benchmarks would have been used by the evaluator on this project. Do you think this list is complete, or could you generate a few more benchmarks? What do you see as some of the benefits of establishing these benchmarks? What do you see as some challenges in or limitations of their use, particularly as they apply to this project?

4. What ethical considerations do you think the evaluator in the case had to address? How might the evaluator have gone about addressing these ethical challenges?

CLASS ACTIVITIES

1. Taking into consideration the district’s benchmarks and the logic model (see Tables 7.1 and 7.2, respectively), develop a survey that Sam could have administered to all teachers across the district who had participated in the last three years of technology professional development workshops.

2. What does the integration of technology into the classroom mean to you? Conduct some informal interviews with teachers you know. Ask them about technology integration. What does it mean to them? Have they ever integrated technology into their classroom? What changes, if any, have they seen from their students in terms of increased academic performance and from their own teaching as a result of this experience? Were there any challenges? If so, how did they try to address them? Consider how these findings would be incorporated into an evaluation report.


SUGGESTED READING

Hopson, M. H., Simms, R. L., & Knezek, G. A. (2001–2002). Using a technology-enriched environment to improve higher-order thinking skills. Journal of Research on Technology in Education, 34, 109–119.

Page, M. S. (2002, Summer). Technology-enriched classrooms: Effects on students of low socioeconomic status. Journal of Research on Technology in Education, 34, 389–409.


CHAPTER 8

EXPANSION OF A HIGH SCHOOL SCIENCE PROGRAM

LEARNING OBJECTIVES

After reading this case study you should be able to

1. Identify and understand the differences between the roles of program developers and program evaluators
2. Describe the differences between a statewide evaluation of a program and an evaluation of a program in a single facility
3. Note several barriers or challenges that arise when expanding programs to new settings
4. Understand the important role formative feedback can play in addressing critical issues uncovered through evaluation data collection efforts

THE EVALUATORS

Jennifer Wright and Ed Abbey were internal evaluators working for an organization that sponsored science-based programs for public schools. Although both Jennifer and Ed were based out of Washington, DC, their evaluation work took them to locations all across the country to observe programs and collect data. One of their chief responsibilities as evaluators for the organization was to monitor various projects that the organization had funded. In some cases, Jennifer and Ed found themselves conducting a meta-evaluation. This type of approach required them to conduct an evaluation of a single program not just in a particular school, but across multiple locations.

meta-evaluation: An evaluation that examines other evaluations to determine the overall worth of programming

THE PROGRAM

Over the past five years the organization had worked to fund an inquiry-based science program for high school students. Students participated in the program during their sophomore, junior, and senior years. The purpose of the program was to have students conduct authentic scientific research on a topic of interest to them. One aspect of this approach was that the high school science instructor teaching the course served as a facilitator for the student, making sure the student met the project’s required goals. The teacher also met weekly with each student, using a portfolio that served as an “organizer” for the student’s research, to review that week’s goals and to select new goals for the following week. Because students could select from such a wide variety of research topics, from puffins to DNA, their teachers often did not have the expertise to assist them appropriately. To address this issue, students each worked with a mentor who was an expert in the chosen field of study.

THE EVALUATION PLAN

In the first three years of the project, the organization funded close to 150 school districts in one state to implement the program. As part of their responsibilities as internal evaluators, Jennifer and Ed had conducted a large-scale evaluation annually as the program expanded across the state. Box 8.1 presents an overview of the annual evaluation plan. To execute this plan, Jennifer and Ed had teacher, student, administrator, and parent surveys mailed to all 150 program sites at the end of each year. These surveys collected a wide range of evaluation data for the program. In addition, Jennifer and Ed selected approximately ten programs for site visits. The evaluators spent several days on-site, conducting interviews with stakeholders (students, parents, teachers, administrators, and so on).


BOX 8.1. Overview of Jennifer and Ed’s Annual Evaluation Plan

1. Mail the survey packet to each site; the packet includes
   • Surveys for the teachers implementing the program, school administrators, the students participating in the program, and their parents
   • Contact information collected from student mentors participating in the program and a mentor survey sent annually to the mentors

2. Make site visits to the ten schools selected annually for this evaluation and conduct the following activities:
   • Interviews with teachers implementing the program, school administrators, and parents; focus groups with students
   • A review of materials and projects, including students’ presentations of their research projects, student portfolios, and so on

3. Conduct a site visit of the annual summer training institute, have teachers complete a posttraining survey, and conduct focus group interviews with teachers participating in the training.

The evaluators also reviewed materials, such as the students’ portfolios, and in some cases observed students presenting their research at the school research fairs, which were open to the community.

Another effective feature of the program was its three-week summer training institute for teachers, Science Away!, held each year. New teachers who wanted to implement the program were trained at the institute by the program developers. Each week of the training was designed to simulate a year of the program. New teachers got to experience firsthand the goals and activities expected of students in the program. Over the course of the past three years the training had been very successful, with 100 percent of the new teachers trained each summer implementing the program back at their school in the fall.


Based on the success of both the program itself and its Science Away! training, the organization decided to fund the project for an additional two years (years four and five). However, instead of continuing to fund program training and implementation in the one state, the organization wanted the project developers to expand the program to four different states in the next two years. Seeing this as an exciting endeavor, the project developers began to work with superintendents and state education departments in several nearby states so that they would have a ready audience. They selected a state to expand their efforts into and chose a central location for the training. Next they recruited twenty teachers and held their three-week summer training institute.

SUMMARY OF EVALUATION ACTIVITIES AND FINDINGS

barriers: Challenges, faced by either those directly participating in a program or those administering a program, that prevent the program from occurring as planned

Jennifer and Ed were busy following their annual data collection methods for evaluation of the program across the state, but they made plans to visit the new out-of-state training site toward the end of the institute’s third week. This would give them some time to observe part of the actual training that these new teachers were receiving, as well as to collect some additional data. They planned to conduct a couple of focus groups with the teachers during one of their breaks from the training and to administer a training survey. The survey would gather teacher perceptions of the overall quality of the training, their perceptions of their level of preparedness to implement the program a few months later in the fall, and any barriers or challenges teachers believed they might face in trying to implement the program back at their school. All three of these methods—site visits, focus groups, and surveys—were the same ones Jennifer and Ed had used during the first three years of evaluating the program. From the programs implemented within the state, the evaluation data had consistently revealed that, following the training, teachers who had not felt prepared to implement the program at the beginning of the school year typically did not implement it or failed to keep the program going once it started.

As planned, during a break in the training, the evaluators administered the survey to the teachers. Later, just before lunch, Jennifer quickly scanned the surveys. Much to her surprise, she discovered that all the teachers had indicated that they would not be implementing the program that coming fall.

“Ed, can I see you for a minute?” she whispered. Taking the surveys, she drew him out of the classroom and into the hallway where it was quiet.

“What’s going on?”

“Take a look at the teachers’ surveys,” she said, handing them to him one at a time. “For the ‘Yes/No’ question we have about implementing the program for the fall academic year, everyone has checked ‘No.’”

No wonder they were surprised: in the past all teachers had indicated that they would be implementing the program following the training. Ed flipped through the surveys one more time just to be sure. It was true: no one was planning to implement the program in the fall.

The evaluators had also planned to split the group up and conduct two group interview sessions—focus groups. To gather more information about this issue of delayed implementation, Jennifer and Ed decided to add a question to their focus group protocol. Later, during the two focus groups, all the teachers validated the survey finding and reaffirmed that they were not going to be implementing the program that school year. Confused, the evaluators asked the teachers why. They learned that under their state education department rules, all new curricula adopted by school districts had to be submitted to a year-long review by the district before being adopted. This was not the case on the East Coast, where school districts could adopt any curriculum desired by the administration. The evaluators then asked the teachers if they had conveyed any of this information to the developers during the last three weeks. The teachers said that they had not told the developers because they thought they were nice people and didn’t want to upset them.

Jennifer and Ed were now faced with a dilemma. They could tell the project developers that the teachers were not going to be implementing the program in the fall as they had anticipated. Jennifer was worried, however, that this information might upset the developers, and she feared that they might take it out on the teachers during the last day of training. In contrast, if they did not tell the developers, they would go ahead and deliver the remainder of the training under the false assumption that the teachers would be implementing the program in the fall. After they finished their focus groups and the teachers began to return to the training room, Jennifer and Ed still did not know what they should do to resolve the situation.

As in this case study, sometimes evaluators “uncover” information about the program they are studying that is critical to its overall success. Jennifer and Ed were faced with a situation in which high school teachers being trained to implement a science program were unable to do so because of a technicality arising from their particular state’s department of education. This had not been an issue in the original state where Jennifer and Ed had evaluated the program. However, delivering that information to the project developers posed a real challenge. Jennifer and Ed feared that the project developers might become upset at the news, which could have serious consequences for the relationship that the project developers had created with these teachers. By not informing the developers, however, they would be doing them a disservice in that they would continue with the remainder of the training as though the teachers were indeed planning to implement their science classes in the upcoming school year.

FINAL THOUGHTS

Program evaluation is unpredictable: evaluators never know what they are going to discover. Therefore, a successful evaluator should never take a program for granted or become so familiar with a particular program and how it works that he or she approaches it with tunnel vision. What works in one setting many times does not transfer over and function the exact same way in another setting.

KEY CONCEPTS

Barriers
Meta-evaluation

DISCUSSION QUESTIONS

1. What are some possible challenges you might encounter when serving as an evaluator of a program that has been implemented at different locations around the state, as opposed to evaluating a program in a single school or district?

2. After reviewing Jennifer and Ed’s evaluation plan in Box 8.1, what are some additional evaluation activities that you think could have been conducted as part of this evaluation?

3. Discovering that the teachers were not going to implement the science program in the fall was a surprise to the evaluators, forcing them to decide whether to tell the program developers what they had found out during the training session. Based on what you have learned in this case study, what would you have done if you were the evaluator?

4. What ethical considerations do you think the two evaluators in the case had to address? How might these evaluators have gone about addressing these ethical challenges?

CLASS ACTIVITIES

1. Often outcomes or benefits for those participating in a program go beyond the immediate scope of the program. In this case of the high school science program, students went on to further education and careers. How, if at all, might the program have assisted students in their future endeavors? After reviewing the science program, generate a list of possible outcomes or benefits you think such a science program would have produced for students and develop an evaluation plan for tracking these high school students beyond the scope of the program.

2. Jennifer and Ed used a variety of tools to collect both qualitative and quantitative data. Review the evaluation plan in Box 8.1 and develop some instruments as though you were going to evaluate this program.

SUGGESTED READING

Duschl, R. A. (1997). Strategies and challenges to the changing focus of assessment and instruction in science classrooms. Educational Assessment, 4, 37–73.


CHAPTER 9

EVALUATION OF A PROVEN PRACTICE FOR READING ACHIEVEMENT

LEARNING OBJECTIVES

After reading this case study you should be able to

1. Understand what constitutes a proven practice versus a practice that is developed in a more naturalistic setting
2. Describe the processes associated with developing proven practices
3. Understand what extraneous variables are and how they can interfere with our understanding of what works and what doesn’t work when it comes to curriculum and instructional practices
4. Understand some of the challenges evaluators face when delivering findings and data to clients and how this reporting can have ramifications for all those involved in a program

THE EVALUATORS

Dennis Fuller and Margaret Lamb were both faculty members in educational psychology at a small teachers’ college. As part of their professional work the two faculty members often worked on projects outside of the college. Dennis had an extensive background in program evaluation. He taught several courses at the college in program evaluation, and most recently his department had had some extensive discussions about developing a master’s program in program evaluation. Margaret had a background in educational research and literacy.


proven practices: Activities that are supported by research to be effective in meeting their desired end outcomes

extraneous variables: Outside influences that can account for more of the change than the actual program that is being examined

funding cycle: A set of processes whereby grants are issued and eligible individuals or groups may submit their potential program for possible funding

THE PROGRAM

The federally funded Reading Right program was geared toward improving student literacy in low-performing school districts. School districts whose students had performed poorly on the state’s English language arts (ELA) assessment were eligible to apply for funds to implement this program. Under this initiative, the Reading Right program had been noted to comprise proven practices. A practice or activity is often referred to as “proven” when it has been subjected to a series of studies with an experimental or quasi-experimental design. To control for extraneous variables—that is, variables that may be making a more significant impact than the actual program itself—these studies are conducted in very controlled, sometimes laboratory-like settings. In repeated lab trials with students who had low literacy competencies, the Reading Right curriculum had produced noticeable and significant improvements.

The funding cycle or timeline for the federal funding was two years. However, districts could apply for another round of possible funding to extend the initiative to a total of four years. As part of that extension process, an external evaluation of the program was required. This evaluation had multiple components; Box 9.1 presents some of the evaluation questions that had to be answered.

BOX 9.1. Evaluation Questions for the Reading Right Program

1. How is the program being implemented across the schools?
2. Is it being correctly implemented at all sites?
3. What are administrator, teacher, and staff perceptions of the program?
4. Do these stakeholders see any benefits to students as a result of implementing the Reading Right program?
5. Do they see any challenges with its implementation?
6. Has student performance in literacy on the state’s ELA assessment improved?


It was also a requirement under the grant that all eligible schools implement the Reading Right program—failure to do so would result in the district’s no longer receiving funding. For the evaluation, Dennis and Margaret were hired by an urban district that had received two years of Reading Right funding and was reapplying for an extension of additional years. In this district there were seven elementary schools; all were eligible and had been implementing Reading Right. The district superintendent, who had hired Dennis and Margaret, was adamant about ensuring that all the elementary schools were implementing Reading Right. He told the evaluators that if a school was not implementing the program correctly, he wanted to know and have the school named in the evaluation report so they could work with the school to improve program implementation.

THE EVALUATION PLAN

To answer the evaluation questions, Dennis and Margaret began to collect both quantitative and qualitative data. They created a survey that went out to all teachers, administrators, and staff at the seven elementary schools. They also collected school data on student performance on the ELA assessment. In addition to collecting data for test years that corresponded to program implementation, they decided to also collect baseline data, or preliminary data, for the three years prior to the implementation of Reading Right. Finally, they developed interview protocols and began to meet with teachers one-to-one and in small groups. The purpose of the interviews was to gather more in-depth data from teachers about the Reading Right curriculum and how they went about implementing it.

SUMMARY OF EVALUATION ACTIVITIES AND FINDINGS

baseline data: Data that is gathered before participants engage in any programming

In six of the elementary schools Dennis and Margaret found that the Reading Right curriculum was being implemented with fidelity, meaning that its procedures, activities, and assessments were being followed according to the curriculum guidelines and procedure manuals. Dennis and Margaret found that these schools had made no notable gains in student performance following their implementation of Reading Right when compared to the baseline data.

One school out of the seven had made some impressive gains in student performance on the ELA assessment over the course of the past two years. However, during the interviews with teachers from this school it was revealed that the teachers were not using the Reading Right curriculum. Teachers told the evaluation team that they “pretended” to use the Reading Right curriculum, and they had all the materials available in their classroom, but when they closed their door they “did their own thing.” The teachers noted that over the past few years they had developed their own curriculum, based on what they found to work with their students. During the interviews teachers referred to theirs as a “grassroots” reading curriculum. And they were not about to give it up and use a curriculum, like Reading Right, that they didn’t know much about and didn’t really know would even work for their students.

Afterward, Margaret and Dennis discussed their findings. What were they to do? If they reported the finding that the one elementary school wasn’t implementing the program, the whole district would be in jeopardy of losing the funding. In addition, the superintendent would know which school wasn’t implementing the proven practice and would want to change what the teachers were doing. Ironically, it was this school that had made the only gains in ELA assessment scores over the past couple of years.

After much debate, in the end, Margaret and Dennis reported that the Reading Right curriculum was not being implemented with fidelity to the model in the one elementary school. They pointed out, however, that this was the only school that had made notable gains in student performance on the ELA assessment. They also made recommendations for the district to further study the curriculum that teachers were delivering to students in that elementary school. Because ELA assessment scores across the other six schools did not vary much from the initial baseline, school board members were leery of continuing with the Reading Right curriculum, despite the large amount of grant funds they would be able to obtain from the federal government. Instead, the district had the evaluation team further investigate the one elementary school and break down the grassroots curriculum that these teachers had created from years of experience on the job.


Shortly afterward, the state education department posted an RFP to improve low performance on the ELA assessment. The district submitted a grant proposal based on the grassroots curriculum in the one elementary school. It received a large grant to continue to train teachers in other schools using this curriculum. As the district moved the curriculum into the other schools, student ELA assessment performance slowly increased there as well.
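The baseline comparison at the heart of these findings can be illustrated with a short sketch. The example below, in Python, computes each school's change in mean ELA performance from a three-year baseline to the two implementation years; the school labels and score values are invented for the example and are not the district's actual data.

```python
# Illustrative sketch: comparing baseline and implementation-period ELA results.
# School labels and score values are invented; they are not the district's data.

from statistics import mean

ela_results = {
    # school: ([three baseline years], [two implementation years])
    "School A": ([61.0, 62.5, 60.8], [61.2, 62.0]),
    "School B": ([58.4, 59.1, 58.9], [66.3, 69.8]),
}

def change_from_baseline(results):
    """Return each school's change in mean score from baseline to implementation."""
    return {
        school: mean(post) - mean(pre)
        for school, (pre, post) in results.items()
    }

if __name__ == "__main__":
    for school, change in change_from_baseline(ela_results).items():
        print(f"{school}: {change:+.1f} points relative to baseline")
```

Without the baseline years there would be nothing to compare the implementation-period scores against, which is exactly the issue raised in the discussion questions that follow.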

FINAL THOUGHTS

At first glance, the idea of a “proven practice” is admittedly attractive. As our evaluators discovered, however, sometimes it is not the proven practice that is making the change or impact occur, but a combination of this practice and other extraneous variables working together. Often a program will be credited with improving an educational setting, when that credit should actually go to the dedicated administrators, teachers, and staff members who worked hard to put that program in place. When other districts see such improvement they too want to adopt the associated practices, only to find that these practices produce results that are less than satisfactory when they are implemented as originally intended.

KEY CONCEPTS

Baseline data
Extraneous variables
Funding cycle
Proven practices

DISCUSSION QUESTIONS

1. Pretend for a moment that you are Margaret or Dennis. What would you have done if you were the evaluator on the project? How might you have gone about delivering the information from your evaluation to the district superintendent and the school board? What other concerns would you have had with delivering this information?

2. Evaluators often collect different kinds of information and data. For example, Margaret and Dennis collected both qualitative and quantitative data. They examined the district’s actual ELA assessment data during the time the Reading Right program was occurring, as well as baseline data from three years prior to program implementation. Why is collecting baseline data important? What might be some ramifications of not collecting such data that could interfere with the evaluation findings?

3. In this chapter’s discussion about proven practices and establishing proven practices in a lab-like setting, the term extraneous variables was mentioned. What are some extraneous variables that might have led to changes in literacy levels but had no relationship to or connection with the Reading Right program?

CLASS ACTIVITIES

1. Margaret and Dennis developed several instruments in conducting their evaluation. Based on what you have learned from the case study, develop a draft of a survey and interview protocol they could have used in gathering data about the Reading Right program from across the seven elementary schools.

2. One of the goals of the evaluation was to determine whether the Reading Right program was being delivered appropriately, according to how it was created and delivered in the lab-like studies. How, as an evaluator, do you think you might have gone about doing this?

3. At the end of the case study it was revealed that the district decided to go ahead and expand the curriculum that was shown to be successful in the one elementary school. Based on that narrative, how might you go about conducting an evaluation of this new program? Develop an initial plan and evaluation matrix that you or your group can present to the class.

4. What ethical considerations do you think the two evaluators in the case had to address? How might these evaluators have gone about addressing these ethical challenges?


SUGGESTED READING

Beswick, J. F., Willms, D. J., & Sloat, E. A. (2005). A comparative study of teacher ratings of emergent literacy skills and student performance on a standardized measure. Education, 126, 317–382.

Graham, S., & Hebert, M. (2011). Writing to read: A meta-analysis of the impact of writing and writing instruction on reading. Harvard Educational Review, 81, 710–744.


CHAPTER 10

PROJECT PLAN FOR EVALUATION OF A STATEWIDE AFTER-SCHOOL INITIATIVE

LEARNING OBJECTIVES

After reading this case study you should be able to

1. Discuss the various benefits and challenges of conducting a statewide evaluation
2. Understand different approaches to or models for delivering enrichment-oriented after-school programming
3. Explain what is meant by the term higher collaboration of services
4. Explain what is meant by the term partner in relation to collaboration among different agencies
5. Develop a plan for conducting a statewide evaluation of an after-school program that addresses some of the technical and methodological issues inherent in this type of evaluation

THE EVALUATOR

Tina Larson and her colleagues formed a small consultancy. In recent years, with the growing emphasis on school accountability, the amount of evaluation work had increased exponentially. Although the firm worked on a wide variety of projects, a substantial portion of its revenue came from competitive grants. These projects were typically funded through the request for proposal (RFP) process, whereby the evaluator worked collaboratively with a school district, agency, or group to evaluate a single project. In cases of RFPs, many school districts might receive an award to implement their proposed project. In some instances, an RFP might not be looking for interesting projects to fund but rather a single evaluator to evaluate multiple programs that have already been funded. Many times a state education department will issue such an RFP.

Tina’s firm now needed to respond to a state RFP. The state would be reviewing proposals for a statewide evaluation of all the after-school programs the state had funded under this effort and would issue one contract for the group that provided the best overall comprehensive evaluation plan. The deadline for the bid was in a week. Although Tina had informally discussed aspects of the proposed evaluation plan with members of her firm, the group had set aside a couple of hours to meet and flesh out their ideas for their proposal.

statewide evaluation: An evaluation whereby multiple sites have been funded to deliver similar programming

model: A structure or approach that is developed for others to follow and use accordingly

competitive funds: Monies earmarked to fund programs that are available to groups through the grant process

curriculum: The planned activities that students will be engaged in to meet intended learning objectives

THE PROGRAM

For the previous eight years, after-school programming had been a main focus for the state’s education department. During this time the state provided funding to approximately 150 school districts. After-school programs funded under these efforts had a certain model or structure for serving students. Prior to this funding initiative many school districts had historically provided their own after-school programming for students. Because these programs were expensive to operate, however, only affluent districts were able to offer such services to their student body. The advent of the state’s competitive funds—grants for which eligible school districts competed through the RFP process—made it possible for many high-need school districts to deliver a rich array of programming.

Although this initiative had many goals, one of its major goals was to decrease incidents of violence and crime-related behaviors associated with students and school dismissals. Another program goal was to decrease student misbehavior during the school day and to increase student academic performance in the classroom and on the state’s annual standardized assessments. Another main component of after-school programming that was present in this initiative was the curriculum or set of activities being provided, with high-quality programs offering an array of activities. Box 10.1 lists some of the broad categories of possible activities. In addition, after-school programs under this initiative were required to assist students in improving their academic performance. Parent and community involvement was also a main component of high-quality after-school programs under this model. In addition, these programs were supposed to provide academic enrichment for parents through parenting classes and degree work (such as GED preparation).

One distinctive aspect of this particular program was the way in which the activities were provided. The program required school districts to partner or collaborate with services or agencies in their respective communities, such as the local YMCA or Boys & Girls Club. Such agencies generally had a long-standing history in their community of providing high-quality after-school programming. Other possible partners included 4-H programs, local libraries, nature centers, and museums. Figure 10.1 gives an overview of one after-school program’s structure.

As mentioned earlier, traditionally some schools in the state had offered after-school programming. However, for many schools that did not have the resources or the expertise to provide such programming, students generally either left their school premises at the end of the day to attend such programs elsewhere or returned to a home without adult supervision. As depicted in Figure 10.1, the students at each site no longer went to after-school programming elsewhere; instead, the programming, for the most part, came to them at their school building.

BOX 10.1. Overview of Broad Categories for After-School Program Activities

Arts and crafts
Book Club
Music
Chess Club
Dance
Science Club
Cooking (home and career)
Journalism Club

partner: An individual or group that supports another individual or group during the grant process


FIGURE 10.1. Structure of After-School Program: Higher Collaboration of Services
(The figure shows the school building together with partner sites such as the YMCA, a nature center, the local library, and a local museum.)

higher collaboration of services: A process whereby supports and resources are brought to participants rather than provided at different sites or locations

This particular model is often referred to in the field of program development and evaluation as a higher collaboration of services, meaning that instead of an individual's having to go to several different services, the services themselves are "bundled" together in such a way that it is easier for individuals to benefit from them.

THE EVALUATION PLAN

Everyone from Tina's group came to the planning meeting having read the requirements in the RFP. Tina decided that she would facilitate the meeting. In years past, the firm had served as an evaluator for several individual school districts that had provided after-school programming under the state's initiative. As a result, Tina had built a solid understanding of these after-school programs—how they functioned and how they were structured at the individual school level. The consensus among the group's members was that they were interested in the firm's bidding on the statewide RFP. In general, the firm had mainly focused on conducting individual school evaluations, working directly for a school district or agency and evaluating a single program. However, everyone in the firm realized that a statewide project such as this one would surely bring a great deal of recognition to the growing firm and would no doubt lead to other large-scale evaluation projects down the road.


The group decided that they would discuss some of the bigger methodological issues of the project first. Overall, the purpose of the statewide evaluation was to determine whether this after-school programming initiative had any effect on student performance in school and on standardized tests, and whether it had decreased incidents of violent behavior in the immediate school community. This aspect of the project was very clear to the members of the firm. But how they would design an evaluation plan to show whether such change had occurred still remained somewhat of a challenge. "What we really want to do in the evaluation plan is show, wherever possible, that the after-school programming worked," said Ben, the executive director of the firm. "I agree, and we can start to do this by examining the number of days each student attended his or her after-school program and correlating these with student performance and incidents of violent behavior for each student involved," said Stan, a member of the firm's measurement and statistics department. Tina had let the others talk first. Now she chimed in: "Well, ultimately, yes, that's what we want to do as evaluators, but the big question is how?" Joan, another member of the stats department, expanded on Stan's idea: "We could assess each program's student attendance records. According to the state, it is mandatory that funded programs keep careful records of student attendance. Like Stan suggested, we could start there, and correlate the attendance data with other variables, such as student performance on the state's standardized test in English language arts or math." "That makes total sense," said Kara, a new employee fresh out of a master's program in educational research and statistics. "Think of the after-school program as the independent variable (IV) or treatment—correlating these two variables (that is, the number of days attending the after-school program and student performance on the state tests) would help to show that the more of the IV students received, the higher their scores were on certain tests." "That would make sense and help to show that the after-school programs were doing something," said Joan. "I like it," said Ben, half-listening to the conversation as he text-messaged a prospective client on his cell phone about the next day's meeting. "How about if we even found some students in each school or district who were supposed to participate in the after-school program but chose not to, so they didn't receive any? They could serve as a sort of comparison group."

"That would provide even more evidence that the programs were making an impact," said Stan. He scribbled a few notes on a legal pad. The group continued with the discussion, talking about different correlations that they could run and possible comparisons of data sets to help show cause-and-effect relationships. Finally Tina interrupted, "These are all interesting ideas, but I think we are overlooking a major problem." The room fell silent; all eyes turned toward her. "How so?" asked Stan. Tina felt a slight tightness in her throat, but she was confident in relying on her past experience in evaluating these programs to help support her point. "Well, the plan that you are talking about to evaluate the programs assumes that all the programs function the same way." "What?" Stan stopped taking notes. Ben also stopped his text messaging and put down his cell phone. Tina went on, "You are assuming that the 150 or so after-school programs the state has funded all function the same way. But that is just not the case. The firm has evaluated several school districts that received funding from the state, and even though they are part of the same initiative and the programs are all required to have the same overall goals and objectives, they went about structuring their programs and delivering activities very differently." Tina went on to explain that one variable on which programs might differ was the set of partners they employed. In some cases, she noted, school districts used a combination of partners (as depicted in Figure 10.1). However, depending on the community agencies and after-school programming groups in a certain area, the partnerships that made up a program could look very different. In addition, even though the school districts kept careful records of student attendance, the types of activities that schools could offer as part of their programming—as well as the types of activities that students chose to do—could and did vary. For example, one school's after-school program might have focused heavily on the performing arts, whereas a neighboring school's program might have had a science or engineering focus.


In addition, another school might not have had a theme or focus, and students might have selected from a “menu” of after-school programming activities. Tina also explained that even though the state’s after-school programming initiative focused primarily on elementary and middle schools, some districts’ programs were also implemented at the high school level. And she noted that even if schools had implemented the exact same program, the funding awards that each school received would have been different. “How could you compare an elementary school that received $100,000 for programming and the same program in another elementary building that received $500,000?” she asked. “I just think there are too many variables, too many assumptions that have to be made to look at this thing in our usual way.” No one said a word, except Ben. He picked up his cell phone and called his secretary, telling her to cancel the rest of his appointments for the day. “In reality, folks,” Tina continued, “we could be looking at 150 different ways to offer after-school programming.” Faced with this new information, the members of the firm had a fresh concern. Now they were worried not about how to conduct a statewide evaluation across multiple after-school programs, but about whether they even could.

SUMMARY OF EVALUATION ACTIVITIES AND FINDINGS

Tina and her colleagues were determined to produce a competitive proposal for the state's evaluation of the after-school programming initiative. Based on the new information that Tina brought to the table, the group began to work out a possible strategy for collecting data. They decided that they first needed to identify after-school programs that had similar characteristics (for example, the same grade levels, the same hours of operation, similar partners from the community, and similar curricula). To obtain this information, they proposed to first design and develop a survey, to be filled out by each after-school program's director, and to mail copies of the survey to the 150 or so funded sites around the state. Then they would analyze the data the surveys yielded and, based on their findings, identify groups of districts that had implemented after-school programming in the same way. Next they would begin to examine correlations between student attendance in those similar programs, academic achievement, and the other outcomes associated with the initiative overall. Finally, they planned to examine the achievement of comparable students who were eligible to attend after-school programming but did not. Several months later Tina received a phone call from the state's evaluation and assessment office congratulating the firm on its successful application. Shortly thereafter, members of the firm met with several directors from the state who were charged with overseeing the statewide after-school project, and the team began to put their evaluation plan into place.
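For readers who want a concrete picture of this proposed analysis, the brief sketch below illustrates the general logic: group programs by the implementation characteristics reported on the survey, correlate days of attendance with test performance within each group, and compare attendees with eligible students who did not attend. It is only an illustrative sketch under assumed conditions, not the firm's actual plan; the file names, column names, and grouping rule are all hypothetical, and any correlations it produces are descriptive rather than evidence of cause and effect.

# A minimal sketch of the analysis strategy described above, using pandas.
# The input files and column names (program_survey.csv, "grade_levels",
# "days_attended", "ela_score", and so on) are hypothetical.
import pandas as pd

# Program-level survey data: one row per funded site, describing how the
# program was implemented (grade levels served, hours, partners, curriculum focus).
programs = pd.read_csv("program_survey.csv")

# Student-level data: one row per student, with after-school attendance
# and scores on the state assessments.
students = pd.read_csv("student_records.csv")

# Step 1: group programs that were implemented in a similar way. Here
# "similar" simply means sharing the same combination of survey responses.
programs["cluster"] = programs.groupby(
    ["grade_levels", "hours_per_week", "partner_type", "curriculum_focus"]
).ngroup()

# Step 2: within each cluster of similar programs, correlate days of
# attendance with performance on the state tests (correlational evidence
# only; it cannot by itself establish cause and effect).
merged = students.merge(programs[["program_id", "cluster"]], on="program_id")
attendees = merged[merged["days_attended"] > 0]
correlations = (
    attendees.groupby("cluster")[["days_attended", "ela_score", "math_score"]]
    .corr()
)
print(correlations)

# Step 3: compare attendees with eligible students who did not attend,
# the informal comparison group proposed in the planning meeting.
comparison = (
    merged.assign(attended=merged["days_attended"] > 0)
    .groupby(["cluster", "attended"])[["ela_score", "math_score"]]
    .mean()
)
print(comparison)

Defining "similar programs" by exact matches on a few survey items is the simplest possible rule; an actual evaluation team might instead cluster sites on a richer set of characteristics before making any comparisons.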

FINAL THOUGHTS Evaluating multiple programs across various sites is a challenge for even the most seasoned evaluator. It is important when conducting an evaluation of multiple programs not to assume that all the programs are the same. In fact, in most cases the programs will all “look” different, even if they are supposed to be similar in nature. They may have received different amounts of funds and resources, cater to different stakeholder groups, and provide different activities, to name a few possibilities. It is necessary to understand and document the variations that could exist across programs before combining programs or lumping them into different clusters based on similar traits.

KEY CONCEPTS

Competitive funds
Curriculum
Higher collaboration of services
Model
Partner
Statewide evaluation


DISCUSSION QUESTIONS 1. One challenge the evaluation team faced when placing their bid was the different ways in which the after-school programs could have been implemented across the different sites. Review the case study again; what do you see as some of the benefits and challenges in how after-school activities have traditionally been conducted in many communities? What are some of the benefits and challenges for afterschool programs that are conducted using the model depicted in Figure 10.1? 2. In the context of after-school programming, the term higher collaboration of services refers to the process whereby community partners come to the school or other educational setting and deliver their services. Review the case again and be prepared to discuss why the term has been applied to such program structures as the one found in Figure 10.1. What makes it a higher collaboration of services? With what areas could you see this type of program model working well? Please be ready to explain. 3. What ethical considerations do you think Tina and her colleagues had to address in this case? How might these evaluators have gone about addressing these ethical challenges?

CLASS ACTIVITIES 1. Based on the case study and the methods discussed for evaluating the program described earlier, develop some of the evaluation tools that would be needed to conduct the evaluation. For example, develop the survey that would be administered to all the funded sites, or develop an interview protocol that could be used for the site visits to be conducted under the evaluation plan.

SUGGESTED READING

Bottorff, A. K. (2010). Evaluating summer school programs and the effect on student achievement: The correlation between Stanford-10 standard test scores and two different summer programs (ERIC Document Reproduction Service No. ED 525626).


Mahoney, J. L., Pavente, M. E., & Lord, H. (2007). After-school program engagement: Links to child competency and program quality and content. Elementary School Journal, 107, 385–404.

McGarrell, E. F. (2007). Characteristics of effective and ineffective afterschool programs. Criminology & Public Policy, 6, 283–288.

Zhang, J. J., Lam, E.T.C., Smith, D. W., Fleming, D. S., & Connaughton, D. P. (2006). Development of the scale for program facilitators to assess the effectiveness of after-school achievement programs. Measurement in Physical Education and Exercise Science, 10, 151–167.


CHAPTER

11 EVALUATION OF A TRAINING PROGRAM IN MATHEMATICS FOR TEACHERS

LEARNING OBJECTIVES

After reading this case study you should be able to

1. Understand what professional development is and several approaches to its implementation

2. Understand why a needs assessment is conducted and each step of the process

3. Define what action research is and the steps associated with it

4. Define what a semistructured interview is and the kinds of important data that can be collected

THE EVALUATORS

Barbara Lincoln and Seth Jackson were professional evaluators. They generally worked independently, conducting evaluations and doing other consulting activities. On certain occasions when an evaluation project warranted it, however, they would team up and work collaboratively. Barbara was a former professor in educational theory and practice and had an extensive background in action research and qualitative methods. Seth had retired from a long and successful career as a secondary math educator. Both Barbara and Seth were hired to serve as evaluators on a professional development project in math instruction for middle school teachers.

THE PROGRAM

needs assessment A process that documents what participants in a potential program require

item analysis A process whereby each question on a test is critically examined

The program was designed to assist poorly performing school districts in the area of mathematics by providing high-quality, ongoing professional development. As part of this process, institutions of higher education (IHEs) would link to and collaborate with identified districts and use expert faculty in the area of mathematics and learning to provide the professional development training. In all, four IHEs and thirteen school districts made up the program. Under this program design, teachers from the districts participated in weekend professional development trainings at each IHE. These trainings were held on Saturdays every other month across the academic year, and teachers were paid a stipend for their participation in the project. To provide professional development aligned with the needs of the individual teachers, the project director and IHE faculty conducted a needs assessment—a systematic data collection process to identify the issues that the districts needed to address, as opposed to making an arbitrary decision. The needs assessment entailed reviewing each participating district's math assessment data for the last four years at the fifth-grade level and conducting an item analysis of the test problems students had answered incorrectly, to see if there were any notable patterns across schools, classrooms, and teachers. Based on the item analysis, a series of professional development trainings was then designed specifically to target the identified deficiencies. In years one through three of the project, approximately one hundred teachers had received professional development training in these areas of need. In addition, the teachers had received numerous supplies and materials (such as algebra tiles, which are hands-on manipulatives) to bring back to their classrooms and use to assist students in their learning. As part of the year four evaluation activities, Barbara and Seth began their evaluation by attending the professional development trainings. Observations were among their main data collection tools during the trainings. Both Barbara and Seth wanted to get a real feel for the kinds of activities being conducted—to better understand the approach and rapport the professional development trainers had with the teachers and to be able to fully describe what these trainings were like.
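The item analysis described above can be pictured with a short, hypothetical calculation. The sketch below assumes a file of scored item-level responses (the file name and the column names are invented for illustration); it computes the percentage of students missing each item, breaks that figure out by school, and flags items that large numbers of students answered incorrectly, which are the kinds of patterns the project director and IHE faculty used to target the trainings.

# A minimal sketch of the kind of item analysis described above, using pandas.
# The file name and column names ("school", "item_01", ...) are hypothetical.
import pandas as pd

# One row per student, with 0/1 scores for each item on the fifth-grade assessment.
responses = pd.read_csv("grade5_math_items.csv")
item_cols = [c for c in responses.columns if c.startswith("item_")]

# Percentage of students answering each item incorrectly, district-wide.
pct_incorrect = (1 - responses[item_cols].mean()) * 100
print(pct_incorrect.sort_values(ascending=False).head(10))

# The same breakdown by school (it could be repeated by classroom or teacher)
# to look for patterns in which content causes the most difficulty.
by_school = (1 - responses.groupby("school")[item_cols].mean()) * 100

# Flag items that more than half of the students missed in at least one school;
# these become candidate topics for the professional development trainings.
flagged = [c for c in item_cols if (by_school[c] > 50).any()]
print("Items missed by over half the students in at least one school:", flagged)

The 50 percent threshold is an arbitrary illustration; in practice an evaluation team would choose a cutoff, or examine the full distribution of item difficulties, in consultation with the faculty designing the trainings.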

THE EVALUATION PLAN

These were some of Barbara and Seth's initial questions:

■ Were these trainings targeting the right strategies?

■ Did teachers find the sessions informative and useful?

■ Did teachers believe that they would be able to implement the strategies they were being trained in when they returned to their classroom?

semistructured interviews Interviews that use a preestablished list of questions but may not follow the list exactly

Barbara and Seth wanted to conduct some semistructured interviews with training participants. Unlike structured interviews, in which one closely follows a protocol, a semistructured format afforded the evaluators the flexibility to conduct short, informal interviews whenever there was a break in the training (such as during a coffee break). Using a more formal approach, Barbara and Seth also planned to collect end-of-year data from participating teachers through focus groups and a survey. This summative survey would work to document the following data:

■ The various professional development activities in which teachers participated

■ Any changes in their instruction through implementing the new math strategies they had learned about

■ Any challenges these teachers encountered

■ Any benefits or outcomes teachers observed in their students when they implemented these strategies

In addition, Barbara and Seth planned to visit teachers’ classrooms and to observe students as they were introduced to the new math strategies. During one of their breakout groups, Barbara made an interesting discovery while chatting informally with a group of teachers standing in line for coffee. “The strategy using the algebra tiles to show how to multiply positive and negative numbers sounds very useful, doesn’t it?” she asked them.


“I guess,” said one of the teachers. “I don’t think my kids would be able to get this at all,” said another teacher. “I know mine wouldn’t,” chimed in a third. Barbara took a sip of her coffee and looked around to see where Seth was. She wanted him to hear this. “Well, what about the other strategies you have learned about? How have they worked?” No one responded. Then one of the teachers shrugged her shoulders and said, “I don’t know, I’ve never really been able to try things. It always seems that they never really fit into what I’m teaching.” “I know what you mean,” interjected the second teacher. “It’s really hard to stop what I have planned to cover in class and try out this month’s strategy from the training.” Barbara was surprised at what she was hearing. Careful not to show any concern, she said, “So it sounds as though you haven’t had great success trying these different strategies.” In a chorus all the teachers replied, “No.” Barbara could barely wait until lunch to inform Seth about her discovery. When she told him, he confessed that he had made a similar finding: very few of the teachers were going back to their school and implementing any of the strategies with their students. At the end of the day the project director held a meeting with Barbara and Seth and the faculty members who had been conducting the training. The purpose of the meeting was to discuss evaluation findings, how they thought the training was going, and what activities they wanted to focus on in the next year of the project. During the meeting, when it came time for Barbara and Seth to speak, Barbara delivered the news, based on the semistructured interviews they had conducted. “Impossible,” said the project director. He picked up a stack of surveys that the teachers had filled out at the end of the day and began to thumb through them. “Teachers have indicated in their surveys that they have been going back and using these strategies in their classroom.” Everyone turned to Barbara and Seth. Seth broke the silence by saying, “Participants don’t always provide valid data on surveys.” “I don’t know,” said the project director. “I think that the teachers have always been pretty honest with us.”


“I’m not too sure about that,” said one of the faculty members in charge of delivering the professional development. “I have to agree with the evaluators. From what I have been experiencing during the workshops, it appears that the teachers really aren’t embracing these strategies as we had once hoped.” Now the project director looked completely perplexed. “Well, so what do we do now?” “We need to make them do it,” said one of the trainers. “We could withhold their stipend unless they can show us that they are using it,” said the director. “Or we could recruit new people into the training and phase out those teachers who aren’t fully participating,” said another of the trainers. Barbara and Seth looked at each other. With their rich background in program evaluation, they knew all too well that trying to force participants to comply was an approach that would most likely spell disaster. Figuring out how exactly they could get teachers involved was going to be a major challenge—but an extremely important one if the project were going to carry on, be successful, and meet all of its intended goals and objectives.

SUMMARY OF EVALUATION ACTIVITIES AND FINDINGS

Following the meeting, it was clear to everyone that the way they had been delivering professional development training to the teachers was not as effective as they had once hoped or believed it to be. Barbara and Seth returned home for several days to discuss ideas for how they could get the teachers more involved in the project and in implementing the strategies they had learned about. More important, they also discussed ways they could do this that would not follow a top-down approach. In this approach, depicted in Figure 11.1, those in an administrative or higher-level position typically control or dictate what is done, when, and for what purpose. As Barbara and Seth closely examined this program's professional development model, they realized that it placed all the emphasis on the professional development training itself, putting it on top of, or at a higher level than, the teachers who were receiving it.

top-down approach A process whereby those in administration decide what is best and then provide programming for participants


FIGURE 11.1. Model of the Top-Down Approach to Professional Development [diagram elements: Professional Development positioned above Teachers]

FIGURE 11.2. Model of Professional Development with Action Research [diagram elements: Professional Development paired with Action Research]

action research A research process whereby participants or practitioners conduct their own research and use results to make changes to their practice

The teachers may have felt as though the professional development training was "being done to them" rather than their being given a meaningful role in the process. The following week Barbara and Seth met with the project director and trainers. First they presented the top-down model as the way the professional development had been structured for the last three years of the project. They pointed out some of the inherent challenges with such a model, particularly from an empowerment perspective. Then they unveiled the new model that they had developed for the following year's program (see Figure 11.2). Their new model, in which Barbara had incorporated her love of action research (see Box 11.1), included the basic elements of action research while continuing with the professional development component on which the project was built.


BOX 11.1. Overview of Action Research

There are many approaches to action research, but the general process tends to be the same in all of them. Figure 11.3 depicts the general process, described by Lodico, Spaulding, and Voegtle (2006), whereby the teacher-researcher identifies a problem or issues that need to be addressed, reviews multiple sources and forms of data, and reflects on his or her own teaching practices. Following the analysis of this data, the teacher develops a plan to address the problem, implements the plan, and then continues to collect data to monitor the plan, refining it as necessary. One of the advantages of action research is that the findings discovered from the research are readily applied by the teacher-researcher.

FIGURE 11.3. Overview of the Action Research Model [cycle diagram, titled "Action Research Approach Used by Teachers to Improve Practice": identify a problem and a research question; read and learn about research on the topic; reflect on your own experiences; create a plan for data collection; collect data from multiple sources; analyze the data and plan actions to address the problem; reevaluate your initial ideas about the problem and your research question]

Using the action research model, teachers who had participated in the project would be asked to take a strategy that they had learned about over the course of the three years of professional development and undertake an action research project. The teachers would serve as the researchers, conducting authentic research in their own classroom to improve instructional practices and maximize student learning. They would either analyze some current data or implement a pretest measure. They were to introduce the strategy over the course of a few days and then test students on whatever unit they deemed appropriate. Then they would examine students' posttests; determine whether an acceptable level of learning had occurred; and, if it hadn't, modify the strategy, implement the refined strategy once again, and continue to collect data and make modifications until the desired learning outcome was achieved. Next, teachers would report their data, findings, reflections, and even recommendations to the professional development trainers, who would sit on a panel, listening to the presentations of the action research projects and asking questions.

After Barbara and Seth had finished presenting the model, the project director said, "I like it a lot. In a way, it does kind of force teachers to try out one of the strategies we have taught them." "And it does it in such an interesting way," added one of the trainers. "It puts some of the responsibility for trying these strategies on them, but at the same time empowers them by giving them a voice." "Well," said the project director, rubbing his hands together, "let's try it and see if it works."
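The pretest/posttest step at the heart of each teacher's action research project can be illustrated with a small calculation. The sketch below is only a hypothetical example of how a teacher-researcher (or the evaluators) might summarize scores for one classroom; the score values, the 75 percent mastery target, and the use of a paired t-test are assumptions for illustration, not details from the program described in the case.

# A minimal, hypothetical sketch of the pretest/posttest comparison a
# teacher-researcher might run for one strategy in one classroom.
# The scores and the mastery threshold below are invented for illustration.
from statistics import mean
from scipy.stats import ttest_rel  # paired comparison of the same students

pretest  = [55, 60, 48, 70, 62, 58, 66, 52, 74, 61]   # percent correct before the strategy
posttest = [68, 72, 59, 80, 75, 70, 77, 60, 85, 73]   # percent correct after the strategy

gain = mean(p2 - p1 for p1, p2 in zip(pretest, posttest))
t_stat, p_value = ttest_rel(posttest, pretest)
print(f"Average gain: {gain:.1f} points (t = {t_stat:.2f}, p = {p_value:.3f})")

# A simple decision rule for the action research cycle: if the class has not
# reached the assumed mastery level, refine the strategy and collect more data.
MASTERY = 75  # hypothetical target: average posttest score of 75 percent
if mean(posttest) >= MASTERY:
    print("Acceptable level of learning reached; document and share the results.")
else:
    print("Target not yet met; modify the strategy, reteach, and reassess.")

A calculation this small would of course be only one piece of an action research project; the reflection, refinement, and reporting steps described above carry most of the weight.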

FINAL THOUGHTS A couple of months later, Barbara and Seth presented the idea of incorporating the action research component to the teachers. Illustrating the concept with a PowerPoint presentation, they explained what action research was and how the project was going to incorporate it, so that it would now be the teachers’ turn to provide the program trainers with data. After the presentation, the project director and trainers described how the teachers had responded as they were watching the presentation. They had begun to sit up straighter in their chairs with smiles on their faces. The following month’s training was dramatically different from the previous training. Teachers presented their action research projects and the results to their colleagues and trainers. Most of the teachers who implemented a strategy in their
classroom had found, to their surprise, that their students had great success with it. In some cases that success did not completely meet the teacher’s expectations, but the teachers were already making modifications to their strategy and continuing to monitor their practice as they implemented it. Both the project director and the trainers were impressed with how much more invested the teachers were, the intricate data on their students the teachers were collecting, and the effectiveness of these professional development practices. Barbara and Seth used the teachers’ action research projects in their evaluation report. In fact, the action research component embedded in a professional development model attracted the interest of the state’s department of education, which funded the project. Education department members enjoyed it so much that they later hired Barbara and Seth to work with another professional development project that had encountered similar difficulties in getting teachers to implement and share new instructional practices.

KEY CONCEPTS

Action research
Item analysis
Needs assessment
Semistructured interviews
Top-down approach

DISCUSSION QUESTIONS 1. Review the two different models for professional development used in this project. What are some of the benefits and challenges a program evaluator might encounter in working with a client to deliver both approaches? 2. List the different steps of the needs assessment used for this project. Be prepared to explain why such a process is important.


3. What is action research, and what are the steps in an action research process? How are these steps similar to or different from those of a needs assessment or research study? 4. What are some of the benefits and challenges of using semistructured interviews as a method for collecting data? How does an evaluator make sense of information when data collected using two different methods (such as surveys and interviews) is contradictory? 5. Evaluators often come on board with a project that is already in operation, as did Barbara and Seth. Develop a list of different approaches or techniques that you believe would work to foster and build relationships with project directors, staff, and other relevant stakeholder groups when starting to work on projects such as this. 6. What ethical considerations do you think the two evaluators in the case had to address? How might these evaluators have gone about addressing these ethical challenges?

CLASS ACTIVITIES 1. Review the evaluation activities in this case study again. Based on this information, develop an evaluation matrix that could have been used for this evaluation to ensure that all essential evaluation data was collected, and suggest additional methodologies that might have addressed any gaps in the data collection process. 2. The approach that Barbara and Seth took with their evaluation plan certainly worked to empower the teachers. Reflecting on this and other evaluation projects you have either worked on or learned about, develop a list of other possible techniques that an evaluator can use to empower participants in a program.

SUGGESTED READING

Feldman, A. (2007). Teachers, responsibility, and action research. Educational Action Research, 15, 239–252.


Luckcock, T. (2007). The soul of teaching and professional learning: An appreciative inquiry into the Enneagram of reflective practice. Educational Action Research, 15, 127–145.

Thompson, P. (2007). Developing classroom talk through practitioner research. Educational Action Research, 15, 41–60.


CHAPTER

12 AN EVALUATOR-IN-TRAINING'S WORK ON A SCHOOL ADVOCACY PROGRAM

Issues of Confidentiality

LEARNING OBJECTIVES

After reading this case study you should be able to

1. Define what is meant by the term confidentiality in program evaluation

2. Understand some of the challenges evaluators encounter when trying to maintain confidentiality for participants in the evaluation process

3. Understand some of the challenges that evaluators face in obtaining accurate data when observing participants in a program

THE EVALUATOR Shirah Smith was a doctoral candidate in educational psychology at a state university. As part of her course work she had also taken several classes in program evaluation. For her practicum she was required to work for a faculty member at the university on an actual evaluation project. Shirah made an appointment with the faculty member to get a better sense of the program she would be working on and what the project would entail.


THE PROGRAM At their meeting, her professor, Dana Nephews, gave Shirah some introductory materials. “Here is an overview of the project that we are currently working on. It’s an advocacy program at a middle school. Are you familiar with advocacy programs?” As part of her doctoral experience Shirah had worked on several different kinds of programs, including after-school and other similar enrichment programs. She had never come across something called an advocacy program before, however. “Is it an enrichment program?” she asked. The professor handed her some more papers. “Yes, I guess it could be considered enrichment of some sort. One main purpose of the program is to provide students with an adult mentor—an advocate, I guess you could say. Each student in the building is assigned to an adult. The adult probably serves as an advocate for three or four students. The adult meets with the assigned students for twenty-five minutes each morning, in place of homeroom, and they discuss different current events and any problems that the students are having.” “How do they have enough teachers to cover all the students?” asked Shirah. “Good question. That was one of my initial questions too when we started,” said Professor Nephews. “It’s not just teachers who are serving as advocates, but anyone employed at the school.” “Do you mean staff members too?” “Yes, even the school administrator has a handful of students who meet in his office every morning to discuss current topics.” “That’s very interesting,” said Shirah. “It really is,” said the professor. “The idea is that every adult will know several students very well and be able to assist if a student is having social or emotional problems.” The professor explained other aspects of the program. For example, the advocacy program also had a community service component, whereby on weekends students would help out elderly people in the neighborhood, raking lawns and performing other such tasks. “It sounds like a great program,” said Shirah.


"It is," said the professor, "but I also have to warn you about something." "What?" "Some of the teachers in the school do not like the program and don't necessarily want the program at the school." "Really? Why?" "We aren't sure," said Professor Nephews. "We didn't become aware of this until we became more involved in the program and started spending more time interviewing the teachers. There is this core of teachers who feel that the program is violating a contractual issue and that the program is really being forced on them by the principal. These are also teachers who are not tenured and therefore feel that if they don't fully participate and embrace the program it will come back to haunt them." "And so the principal would deny them tenure because they don't like the advocacy program?" The professor shrugged her shoulders. "I know it seems unlikely, but they are scared. Their perceptions are very real. We can't deny them that." "Have you talked to the principal about this—how these teachers feel?" asked Shirah. Professor Nephews took out a pencil and made a few notes on her legal pad. "That is one of the problems that I wanted to warn you about." "The principal?" "Yes." "Concerns?" "Yes, we have had some problems initially with him. Well, I shouldn't say problems; they were more like concerns, really." Professor Nephews finished taking her notes and looked back up at Shirah. "When we initially made this discovery in the evaluation—that some of the nontenured teachers did not like the program but felt that they couldn't be forthright with how they felt because they feared some sort of retaliation—we reported the findings in our first evaluation report. We protected the teachers' confidentiality by aggregating or combining our findings, saying 'some teachers' interviewed said this." "So what is the problem?" "Well, the problem is now the principal is on a sort of 'witch hunt' trying to find out who among the teachers feels this way."

confidentiality A consideration that involves collecting data and reporting findings to ensure that one's sources are not named

aggregating Combining by collapsing or reducing in size or number


“Wow,” said Shirah. “It sounds intense. Are you sure you want me to get involved in this?” “Yes, of course. I didn’t mean to put you off the project by telling you all of this. I just thought you should know so that you will be extra careful in regard to maintaining participants’ confidentiality. It’s a wonderful program, and I think it will be a good learning experience for you.”

THE EVALUATION PLAN

The evaluation plan for the project included the use of both quantitative and qualitative methods. As part of their evaluation duties, the evaluation team surveyed all teachers and staff in the building and conducted interviews and focus groups with both the students and the advocates. They also conducted some informal observations of the students and advocates during the first twenty-five minutes of school, the period dedicated to the advocacy program. Further, the evaluators designed and developed a survey to gather the perceptions of those teachers, administrators, staff, and volunteers who were serving as advocates in the program.

SUMMARY OF EVALUATION ACTIVITIES AND FINDINGS

Shirah worked with Professor Nephews to set up several observations of advocates and their meetings with students each morning. One of the purposes of conducting these observations was to give Shirah and the other evaluators the opportunity to learn more about the project and what specific activities were conducted during these advocacy sessions. This information not only would serve as rich data for the overall evaluation report but also would enable the evaluators to write detailed narratives describing these activities. To conduct the observations, Shirah sat in on several sessions, taking notes and listening to the group discussions. In one session the students were talking to their advocate about a community project in which they had participated the weekend before, for which they had visited the homes of the elderly in the community and done odd jobs for them.


BOX 12.1. Sampling of Community Activities

Rake lawns
Clean outside windows
Mulch flower beds
Trim hedges
Plant flowers
Mend broken fences
Collect dead tree limbs off lawns
Collect rocks for rock gardens
Fix broken doors
Remove winter windows and put in summer window screens

Listening to the conversations they were having, Shirah began an informal list of the specific jobs the students mentioned. Box 12.1 presents the list she started to compose. Students discussed how providing this help to the elderly made them feel about themselves. They also talked about a safety issue in school that was on the front page of that day's newspaper. The article discussed school safety policies and an incident in which a fifth grader brought a gun to school. Shirah noted that the students talked at length about the article, how they had concerns about their own safety at the school and on the bus, and how they knew several students who in the past had brought knives to school. After each observation Shirah summarized her notes, and at the end of the week she met with her professor to debrief or review her findings. Professor Nephews also had Shirah reflect on what she had discovered and learned from the process, as well as generate new questions from the observations that she now wanted to have answered. "These morning advocacy sessions that I have been observing seem very powerful," said Shirah, flipping through her notes. "How so?" replied her professor.

debrief To have a discussion between two or more individuals following an activity


“The rich conversations that I have observed between the advocates and the students—they have been discussing some serious issues and reflecting on them. Students have also been able to relate to safety issues and discuss those issues in their own lives and at school.” Shirah went on to describe the discussion about school safety and several students’ knowledge of others who had brought knives to school. “So it sounds as though from what you have observed it’s all quite positive.” “Definitely,” replied Shirah. “And what’s puzzling is why some teachers would find the program to be troublesome and not part of their job.” “I think it’s a very complex issue,” said Professor Nephews. “But just in what you described to me about the school safety issue, I think the teachers who are ‘anti’ the program are uncomfortable because they aren’t quite sure what their role is.” “What do you mean?” “I think because the program is occurring during school hours, during their traditional homeroom time, teacher-advocates feel that students are going to divulge information to them in these rich conversations that they in turn will have to report.” “Hmmm, I didn’t think about it that way.” Shirah sat back in her chair and made a few notes on her pad. “The example of the issues surrounding school safety is a perfect one,” said Professor Nephews. “Students said that they knew of other students who had brought knives to school. What if students were to mention specific names of students? Or talk about other things regarding their personal safety, such as abuse? That moves it from a discussion between students and the advocate to something the advocate will have to report to a higher authority, such as the building principal.” Shirah thought for a moment, then said, “So those teachers who are against the program may feel it’s putting them in a very difficult position?” “I think so,” said Professor Nephews. “And I think it is particularly concerning them because the program itself places them in a potentially ethically challenging situation, which in turn goes against some of the aspects of the program—the adult’s being an advocate with the student and developing this ongoing relationship in which they can have these in-depth, rich discussions. But
the catch-22 is that if, during those discussions, the student really starts talking about serious issues, the advocate may be required by law to report it. So some of these teachers feel that they can't, by law, provide confidentiality to the students they are working with each day." "This is a problem." "And then on top of that, the principal, Mr. Baldwin, is trying to use the program to bring issues to the surface. But he tries to address those issues by going after personnel." "It sounded like such a nice, simple program to help kids," said Shirah, closing her notebook. "Who would have ever thought that it would end up in such a mess?" Her professor didn't say a word, but nodded her head in agreement. The following week, Professor Nephews told Shirah that she was to observe Miss Jones's advocacy group. She also advised Shirah that Miss Jones was one of the teachers who had initially expressed concerns about the advocacy program. She said she was telling Shirah this so that she would take particular precautions to protect the teacher's confidentiality. Shirah said that she would make sure that whatever information she gathered from the setting would not be reported in any way that would specifically identify the teacher in conjunction with what she said or did. Shirah made sure to arrive early for her first observation of Miss Jones's advocacy meeting. Four students convened with Miss Jones in a small room that the school used for special meetings. Three of the students were in fifth grade, and one was in sixth. On the first two days of her observations Shirah noted some tension in the room. She noted that Miss Jones seemed a little reserved in how she discussed issues with the students. For example, one student brought up the issue of making choices, particularly sorting wrong from right choices when it came to drug use, and the teacher seemed to skip over a point that could have been pursued further. Shirah thought that some of what she perceived as the teacher's holding back could have been because of observer effect: people who are being observed tend to act differently than they typically would when unobserved in the same setting.


observer effect A phenomenon whereby individuals or groups behave differently because they are being observed by someone unfamiliar from outside the setting


Shirah had learned about observer effect in one of her research and evaluation courses, so she made a note alongside her observations that this common phenomenon might be occurring. After a couple of observations Shirah noticed that the group seemed more relaxed. She also noted that Miss Jones did not skip over issues that students brought up as she had done before. Shirah felt as though the time she had spent in Miss Jones's advocacy group had allowed Miss Jones to develop a sense of trust in her. Later, Shirah was in the school cafeteria getting something to eat before going on to her next observation. She felt a tap on her shoulder and turned around to find the school principal standing behind her. He introduced himself, and she did the same. "You don't have to wait in the line with the students," he said, motioning for her to follow him. "Teachers and school guests can cut to the front." "Thank you," said Shirah. She followed him to the front of the line. "I was wondering how I would eat my lunch and get to my next appointment." He took two of the bright orange cafeteria trays from a stack and passed one to her. "Busy with the evaluation?" he asked. "Yes, I have some interviews to do after lunch," said Shirah. She remembered what Professor Nephews had said about the school principal, so she was on guard. "Were you doing interviews this morning, too?" "No, I was observing." "Oh, that's right—I saw in the office log that you signed in to see Miss Jones." Shirah felt herself tense up. "Yes." "So how did her group session go?" he asked. Shirah thought carefully about what she would say. "Good. Very good. I enjoyed it a lot." "That's great," said the principal. "Enjoy your day." He turned to get his food and pushed his tray down the line toward the cashier. Later, as Shirah ate her lunch, she reflected on their conversation. She thought about what the principal had asked her and how she had replied. The conversation had seemed very casual, and she was certain that she had not revealed any specific information about Miss Jones or what had gone on in the group's
session that she had observed. Taking a forkful of mashed potatoes, Shirah felt relieved. Shirah’s feeling of confidence did not last for long. The next morning she returned to the conference room to observe Miss Jones’s advocacy group. She knocked on the door, and Miss Jones answered it. “Oh, hi,” said Miss Jones. “Didn’t you get my message?” “No,” said Shirah, surprised. “What message?” “I left a message with Professor Nephews that I am uncomfortable being observed, and because it is voluntary, I would prefer not to have the evaluators sit in on our discussions.” “Oh,” said Shirah. She tried to hide the disappointment in her voice. “I see. Okay then, thank you.” Miss Jones closed the door in Shirah’s face. Shirah left the school, very upset. What did I do wrong? Why is Miss Jones suddenly so uncomfortable with having me observe the advocacy sessions? How am I going to explain this to Professor Nephews? These questions and more raced through her mind as she drove back to the university. Most of all, she worried about whether this would compromise the entire evaluation project. Later Shirah met with Professor Nephews to review what had happened. Apparently the principal, not meaning to, had seen Miss Jones in the hall later that day; he had told her that he’d spoken with the evaluator, who had said that the advocacy session had gone well and Miss Jones had done a good job. Miss Jones, in turn, concluded that the evaluator and principal had met purposely to discuss her work, and that if the evaluator had told him this, what else might she have told him? Feeling that her confidentiality had been breached, Miss Jones decided that because the observations were voluntary, she could choose to stop participating. This was certainly her right as a participant in the evaluation. The evaluators continued to observe advocacy sessions through the remainder of the school year, but no one from the evaluation team could observe Miss Jones’s group again. Being an evaluator-in-training and participating in a “real” evaluation is a richly rewarding experience that can influence an evaluator for years to come. In this case study Shirah had the opportunity to both learn about program evaluation and partake in
a real evaluation of an advocacy program. Despite the potential benefits of such a program, the program itself—because of the discussions with students about serious issues—posed some challenges. The most serious challenge, however, concerned confidentiality, and although Shirah went to great lengths to protect the confidentiality of her participants, in the end the teacher Shirah was observing had a different perception.

FINAL THOUGHTS Struggling with issues surrounding confidentiality is something that every evaluator will have to deal with at some point in his or her career. Although on the surface maintaining confidentiality seems a fairly simple concept to adhere to, in reality confidentiality issues look completely different when an evaluator steps away from the textbook definition and begins to collect data in a real-world setting.

KEY CONCEPTS

Aggregating
Confidentiality
Debrief
Observer effect

DISCUSSION QUESTIONS 1. Observer effect was noted in this case study when Shirah observed students and their mentor in their advocacy group. Observer effect tends to have an impact on the validity of data, in that those who are being observed may not behave or respond to their setting as they usually would because of the presence of an outside observer—in this case, the program evaluator. How did Shirah deal with this possibility of observer effect? Was she successful in doing so? 2. Discuss the challenges in this case study that Shirah and her professor faced in maintaining confidentiality for those participating as advocates in this program.


3. Considering the outcome of the case study, if you were Shirah, how would you have responded to the principal when he approached you in the lunch line that day?

CLASS ACTIVITIES 1. Examine the methodology used for the program evaluation. Is a possible stakeholder group—or groups—not being included in the data collection process? If so, name the group or groups and discuss how you would go about collecting data from them. Also be prepared to discuss how collecting data from the group or groups would help strengthen the evaluation of this program. 2. Evaluations are conducted under a wide range of circumstances. The setting for this evaluation could be considered a bit hostile: the school principal’s agenda for using the evaluation and the evaluation data was inherently different from that of the program evaluators. Break into small groups and, pretending that your group is the evaluation team, come up with a plan detailing how you would have continued to conduct the evaluation in this setting. Think about ways in which you would report data, the types of data you would collect, and methods for data collection that would suit your plan of action. Think about ways in which you could use the aspects of evaluation just discussed to work with the principal and get him to think about the purpose of the evaluation in a different way. 3. What ethical considerations do you think Shirah and the other evaluators in the case had to address? How might the evaluators have gone about addressing these ethical challenges?


SUGGESTED READING

Fitzpatrick, J. L., & Morris, M. (Eds.). (1999). Current and emerging ethical challenges in evaluation [Special issue]. New Directions for Evaluation, 1999(82).

Newman, D. L., & Brown, R. D. (1996). Applied ethics for program evaluation. Albany: State University of New York Press.

Parry, O. (2004). Whose data are they anyway? Practical, legal, and ethical issues in archiving qualitative research data. Sociology, 38, 139–152.


CHAPTER

13 EVALUATION OF A SCHOOL IMPROVEMENT GRANT TO INCREASE PARENT INVOLVEMENT

LEARNING OBJECTIVES

After reading this case study you should be able to

1. Understand some of the challenges evaluators face when collecting valid interview data on-site

2. Understand how an evaluator's biases may influence the data being collected in a school setting

3. Recognize observer effect and be able to make recommendations for further addressing this phenomenon when collecting data

THE EVALUATORS Matt and Linda Jackson were a husband-and-wife evaluation team. Before they became consultant evaluators, they both worked for the state’s education department in the division of testing and measurement. Since their semiretirement they had worked with local school districts and nonprofits, writing grants, providing training, and conducting program evaluations as needed.

THE PROGRAM Increasing parent involvement was a challenge for many school districts throughout the country. Low parent involvement, particularly
as students move from middle school to high school, was well documented throughout the literature. Because of this, the state provided competitive funds or grants for school districts with low parent involvement. The monies were to be used for each district to develop a plan to increase parent involvement throughout. The plan had to include the establishment of at least one parent center at one of the schools. The parent center was believed to be a key element in increasing parent involvement. Under the state initiative, these parent centers would also provide support for parents, parenting classes and workshops, information about other social and support services, and so on. Linda and Matt were hired to conduct an evaluation of ten school districts’ parent involvement initiatives. Fifty districts had received monies from the parent involvement initiative grants; of these, ten districts were chosen by the state because of their geographical locations across the state and other key variables (such as whether they were urban, suburban, or rural).

THE EVALUATION PLAN

key informant A person identified by an evaluator who will assist the evaluator in collecting data from a particular site or location

director of special projects A person who is in charge of grant-funded projects

For their evaluation plan Linda and Matt decided to use a mixed-methods approach. To understand the breadth of parent involvement projects across all the participating districts, they prepared a two-page survey that they mailed out to the key informant or point person in charge of the project at each district. (In some districts this key informant was referred to as the director of special projects.) The survey was broken down into three parts. The first part would document the presence of the key components of the district's plan; for example:

A designated area or room in at least one school occupied and used solely as the parent center



A full-time or designated person to operate and direct the parent center



Regular hours of operation (five hours a day, for a minimum of three days a week, and at least one weekend per month)

The second part of the survey would gather information from project directors about the kinds of activities and programs they had developed and implemented (training programs for parents, GED preparation, and the like). The third part would focus on successes or outcomes the districts had experienced in regard to not only increased parent involvement but also an expansion into different types of parent involvement—for example, homework help, regular attendance at school events, attendance at meetings with school staff, and so on.

In addition to conducting this survey, the evaluators would work to gather extensive information about the program in each district. This would include benefits and successes, as well as any challenges project directors had faced in getting the parent center established and up and running in their district. To do this the evaluators would conduct one-day site visits to the ten selected parent centers. As part of this process, they planned to conduct interviews with the building principal, parent center director, teachers and other related school staff, students, and parents at each site. To help coordinate each visit, the project director had lined up several representatives from each of those stakeholder groups and would assemble them throughout the day for the evaluators to interview. To ensure that the interviews were aligned and standardized, the evaluators also developed an interview protocol to guide them in conducting their interviews.

After Linda and Matt had conducted six of the ten site visits, they took a day to go over their findings and summarize their work. At this point they were disappointed. Of the six parent centers they had visited, most were missing several of the key components required under the grant. Three of the six centers did not have a permanent director at the time of the visit. Five of the six had no regularly scheduled hours of operation or activities for parents to attend. And three of the six did not have a designated area for the parent center at a building in the district.

English language learners: Individuals who are learning to speak English

The evaluation team’s seventh site visit was to a school in Alder Central School District. Considering its rural location, they found this district to be demographically surprising. One-third of the students were designated English language learners. The school also had a large Hispanic population (about a third of the student body) and a high transience rate (also about a third). To Linda and Matt, the school’s demographics looked more like those of a School in Need of Improvement in a more urban area than those of a school in a rural district. In addition, Linda
had collaborated with the principal of the school years before when she worked at the state’s education department, and she remembered the administrator as not being the most effective building leader she had ever encountered. She was not expecting to find much in the way of a successful parent center at Alder.

Pulling into the parking lot of the district’s single junior high and high school building, Linda said, “I certainly hope this center is better than the ones we have seen so far this week.”

“Can’t be worse, can it?” said Matt.

They entered the building and headed to the office to sign in and receive their visitor badges. Putting on her badge, Linda felt a tap on her shoulder. She turned and, to her surprise, found an attractive woman with dark curly hair, a warm smile, bright eyes, and what felt like a positive aura about her.

“Hi, I’m Sophia Hernandez, the school principal.” She put out her hand and shook both Linda’s and Matt’s. “Welcome! I am so glad that you were able to come and see what we are doing with our parent center.”

“Hello,” said Linda. “I was expecting Mr. Baxter. Isn’t he the school principal?”

“Well,” said Sophia, “you are not going to see Mr. Baxter today unless you brought along your golf clubs. He retired just after the school year started, and the district brought me in as an interim principal.” She led Linda and Matt to a large, freshly painted room with big windows and plenty of sunlight filtering in.

“This is nice,” said Linda. She looked around the room, noting the new computers, the tasteful furniture, and the racks of materials and pamphlets. She walked over and began to look through a few of the pamphlets.

“How long has the parent center been operational?” asked Matt.

“About eight months now,” said an unfamiliar voice from behind them. They all turned to see a second woman.

“Let me introduce our parent center director, Sarah Benson.” Linda and Matt introduced themselves.

“Sarah is a former social worker,” said Sophia. “We were very lucky to get her. She has done a wonderful job coordinating a lot of the services available to parents.”

“We have an unusually high migrant worker population here,” said Sarah.

Linda said, “Yes, we noticed that from your district’s demographics. Why is that?”

“We have a large wine industry here,” said Sophia. “Migrant workers from Central America come up into the district to harvest the grapes; then they move south to pick some of the other crops.”

“So one of the things I have been able to do to get parents more involved is provide a connection to some of the social services that I have contact with,” said Sarah. “I recently did a session about how to get a green card. We had about eighty parents attend. Then, once I get them in here, I get them hooked with what their child is doing in school and how they can help at home.”

“Sounds great,” said Linda.

Matt began to run down some of the questions on their interview protocol. “Do you have regular hours? Workshops for parents?”

“Yes,” said Sarah. She handed him a packet. “We prepared these materials for you. You’ll find parent sign-in sheets for the monthly workshops we have been conducting, a schedule of hours of operation, and materials that we have handed out and used to work with parents. This month we have really been focusing on homework.” She pointed to a big sign over the door that read, “Do you know where your kid is? Do you know if she has done her homework?”

“Wonderful!” said Linda.

“We have a whole agenda set up for you today,” said Sophia. She handed each of them an itinerary of their interviews. “We have scheduled several parents for you to talk with at nine thirty, then we have teachers at eleven and again at one. Then we have some students at two and some administrators at three thirty.”

“Sounds like a packed day,” said Matt.

“And the nice thing is that it can all be done here in the parent center,” said the principal. “I can even have lunch delivered so you don’t have to go out.”

“Won’t you need the parent center during the day for parents?” asked Matt.

“We should be fine,” said Sarah. “Wednesdays are slow at the center.”

All day Linda and Matt interviewed the various stakeholders as they came in according to the schedule. It was exhausting, but by the end of the day they had spoken to everyone that they needed to.

As they left the school, Linda said, “This was, without a doubt, the best parent center we’ve seen.”

“I agree,” said Matt. “It clearly met all the program criteria.”

After the long day of work Linda and Matt decided to stay in a nearby motor lodge. That evening they reviewed their notes from the day’s extensive interviews. While they were working, Linda discovered that she had left part of her notes on a legal pad at the school.

The next morning, on their way out of town, Linda and Matt stopped quickly at the school to pick up the notes. They needed to get to their scheduled interviews in the next district. Matt pulled the car up in front of the school and waited while Linda ran in.

Linda did not stop at the office to check in. Remembering where the parent center was, she made her way down the hall. When she rounded the corner to the parent center, she could hear laughter and noise coming from inside. They must be having a parent workshop or activity, Linda thought.

She entered the parent center doorway, but did not find parents actively engaged in a workshop as she expected. Instead, she found the room full of teachers kicked back in the nice new furniture, having coffee and pastries. The new computers had been removed, as had the rack of informational pamphlets. The bright posters and materials that had been there the day before were also gone. The teachers stopped talking and looked at Linda. And she looked back at them, not knowing what to say or what to think.

“Can we help you?” one of the teachers finally asked.

“I came to get my notebook from yesterday,” Linda replied. But she didn’t know what good the notes would do now.

SUMMARY OF EVALUATION ACTIVITIES AND FINDINGS

Linda and Matt were an evaluation team with a great deal of experience in education and conducting program evaluations.
The duo took on the task of conducting site visits to ten different schools, each of which should have created a parent center to increase parent involvement. Linda and Matt used a mixed-methods approach, combining surveys and site visits to gather both breadth and depth of information pertaining to the schools’ newly established parent centers.

Surveys were mailed to school administrators. The survey consisted of three parts, each designed to gather specific data. The first part of the survey gathered key information from administrators about the parent center. The second part gathered information about the types of activities and programs held at the parent center. The third part gathered information about any outcomes or successes the administrators had seen from parents using the centers.

In addition to distributing surveys, Linda and Matt also conducted a visit to each site. As part of their routine at each site visit the evaluators observed the parent center, interviewed school officials and teachers, and talked to parents who were present. Both Linda and Matt believed that their one-day visit to each center was enough to collect the necessary data for their evaluation report, and that the data would be valid and reliable. However, this belief didn’t last long; when Linda entered one parent center unannounced the following morning, she found a scene quite different from what had been presented to them the day before.
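
The compliance tally implied by the first part of the survey can be illustrated with a short sketch. The example below is hypothetical Python, not part of the evaluators’ actual work: the district names, component labels, and responses are invented solely to show how reported key components might be counted across sites.

```python
# Minimal sketch: tallying the key components districts report having in place.
# All district names and responses below are hypothetical examples.

KEY_COMPONENTS = [
    "designated_space",   # a room used solely as the parent center
    "center_director",    # a full-time or designated person running the center
    "regular_hours",      # five hours a day, three days a week, one weekend a month
]

# Survey part 1 responses: True means the component is reported as present.
survey_responses = {
    "District A": {"designated_space": True,  "center_director": False, "regular_hours": False},
    "District B": {"designated_space": False, "center_director": False, "regular_hours": False},
    "District C": {"designated_space": True,  "center_director": True,  "regular_hours": True},
}

def summarize(responses):
    """Return, for each key component, how many districts report it in place."""
    return {
        component: sum(1 for district in responses.values() if district.get(component))
        for component in KEY_COMPONENTS
    }

if __name__ == "__main__":
    totals = summarize(survey_responses)
    n = len(survey_responses)
    for component, count in totals.items():
        print(f"{component}: reported by {count} of {n} districts")
```

A tally like this reflects only what districts report about themselves; as the unannounced return visit in this case shows, self-reported compliance still has to be verified on-site.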

FINAL THOUGHTS

Linda and Matt were disappointed. What they had thought was an exemplary parent center turned out to be a carefully orchestrated illusion. They completed all their site visits and submitted their evaluation report to the state. They found that all the parent centers they visited were lacking major components or failing to meet criteria required of them in the grant. Based on their evaluation data, the state decided to more closely monitor the awards given to parent centers in the future. The state also decided to develop trainings to help schools more fully implement parent centers with all of the required characteristics.

KEY CONCEPTS

Director of special projects
English language learners
Key informant

DISCUSSION QUESTIONS

1. In this case study the evaluators had each project director arrange their interviews with the various stakeholders at each site. Thinking retrospectively, what are some possible disadvantages of scheduling interviews in this way? What might be some more effective ways of interfacing with stakeholder groups on-site?

2. Linda was pleasantly surprised to find a new principal at the school in the rural district, having worked with the previous principal in her former position at the state’s education department. In addition, the demographics of the district reminded both Linda and Matt of those of a School in Need of Improvement. Considering both of these factors, describe how we as evaluators bring our biases to each setting we enter. How might you, as an evaluator, try to control for some of these biases?

3. Observer effect is manifested when the people you are observing act or perform in a way that is not indicative of their usual behavior. Where in this case study can you see evidence of observer effect occurring? And how could you methodologically try to correct it?

4. What ethical considerations do you think the two evaluators in the case study had to address? How might these evaluators have gone about addressing these ethical challenges?

CLASS ACTIVITIES

1. See if there are any parent centers in any of the schools in your community. As a future evaluator, set up an appointment with a school administrator to visit a parent
center. See what kinds of activities this center is offering, the type of facility, and the kinds of outcomes or changes in parent involvement the school has experienced as a result of establishing the center.

2. As part of their mixed-methods approach, the evaluators conducted interviews with stakeholders. They interviewed teachers, staff, parents, and students. Develop an interview protocol that you believe could have served as a framework for the project. How would you have altered items to account for the different stakeholder groups’ needs and perspectives?

SUGGESTED READING

Cooper, C. W., & Christie, C. A. (2005). Evaluating parent empowerment: A look at the potential of social justice evaluation in education. Teachers College Record, 107, 2248–2274.

McMurrer, J. (2012). Changing the school climate is the first step in many schools with federal improvement grants (ERIC Document Reproduction Service No. ED 533561).

Reutzel, R. D., Fawson, P. C., & Smith, J. A. (2006). Words to go: Evaluating a first-grade parent involvement program for making words at home. Reading Research and Instruction, 45(20), 119–159.

CHAPTER 14

EVALUATING THE IMPACT OF A NEW TEACHER TRAINING PROGRAM

LEARNING OBJECTIVES

After reading this case study you should be able to

1. Understand some of the challenges of tracking college graduates as they enter the workforce

2. Develop some strategies and ideas for collecting data on college graduates as they enter the workforce

3. Understand the term strong evidence and how it relates to data collection

THE EVALUATORS

Dan Jackson and Marilyn Smith were faculty members at a small teachers’ college. Both Dan and Marilyn taught educational research and program evaluation at the graduate level. They had also served on the faculty for the last ten years and therefore had a good understanding of the college and how it worked. In addition to teaching their courses, both had been hired by the college to serve as internal evaluators for a recent federal grant the college had received. This new grant was the largest in the college’s eighty-year history.

Recently, the college had experienced a marked decline in enrollment. This was similar to what was happening at many private institutions; in the school of education, however, this decline in student numbers had been more
dramatic. At a recent meeting, college administrators had announced to the media that they anticipated that this new grant, designed to revamp the college’s special education teacher program, would increase prospective students’ interest in becoming teachers and thus improve student enrollment.

THE PROGRAM

The overall purpose of the new grant was to create more effective teachers for the special education classroom. Teacher candidates graduating from this new program would have a new skill set that would make them effective classroom teachers and would ideally increase the performance of their future students. To do this, the project was broken down into different sections or phases.

In the first phase the higher education institution was to examine the current course work for teacher educators and to collectively decide into which courses to infuse the latest evidence-based instruction. The second phase would begin with a series of workshops and trainings for higher education faculty. The purpose of these workshops was to familiarize faculty with the list of evidence-based strategies that had already been selected as part of the grant process and were available on a Web site for faculty to download and use in their classes.

In addition to training higher education faculty in these new strategies, the third phase of the project focused on changing the classrooms to which teacher candidates were assigned for field placements and student teaching. Teachers from these districts were also brought onto campus and trained alongside faculty members in the evidence-based strategies. The hope was that these teachers would see value in the evidence-based strategies and seamlessly incorporate them into their own classroom instruction; therefore, when teacher candidates conducted their field placements and met their student teaching requirements, the teachers with whom they would work directly would be modeling the evidence-based strategies.

THE EVALUATION PLAN

Dan and Marilyn began to examine the evaluation plan laid out by the federal agency that was funding the project. The federal requirements for the project made it explicitly clear that the
project had to show impact. Further, the federal agency overseeing the grant project held a series of webinars, the purpose of which was to provide technical information to all the institutions of higher education from across the country that received the grant award. The webinars were also important because they informed the institutions about the evaluation process and the types of data and outcomes the federal agency was expecting as a result of project implementation.

Strong evidence: Results that come from research designs that are either experimental or quasi-experimental in nature

At the first webinar an administrator from the federal agency provided an overview of the grant, its purpose, and results from previous grant rounds. In addition, the administrator covered the evaluation component of the grant. Mentioning that the government wanted to see impact, the administrator went on to tell the institutions that each institution’s evaluation design should be able to demonstrate strong evidence.

“What do you mean by strong evidence?” asked one of the webinar participants.

“When we say strong evidence,” said the administrator, “we are looking for projects that use a treatment-control group design, with random sampling and random assignment.”

For a moment the webinar went silent, with none of the participants saying a word.

Finally, the administrator said, “We know this is going to be difficult. Our previous grant rounds did not have to address this issue, but for this round we are trying to raise the bar and show more conclusively that these evidence-based practices are making a difference.”

“So can we try to show that our new teacher graduates from the program outperform the previous graduates on the state certification exam or in their portfolio review?” asked one of the faculty members.

“You could do that, but when we say evidence we are looking at the impact that your teacher candidates will have on their future students once they get a job.”

Again the webinar went silent.

“Just how are we supposed to do that?” asked another participant.

“Well, that’s why your grants were selected. We felt that your proposals had the best chance of doing this out of any of the other ones that were reviewed,” said the administrator. She
went on to give the awardees work to do before the next webinar, asking them each to develop a logic model demonstrating the relationships between the evidence-based practices that they would infuse into their program’s already existing course work and how that would change their teacher candidates—and then how teacher candidates would eventually change their students’ learning and achievement on a standardized assessment.

After the webinar Dan and Marilyn met with the project committee to share what they had learned and to discuss how they might comply with the grant requirements.

“One challenge I can see that we are going to have to try to address early on is, How are we going to manage to track our teacher candidates once they graduate and go out into the world and get jobs?” asked Dan.

“We have college e-mail,” said someone at the meeting.

“Yes, but those e-mail accounts close out after students graduate,” replied Marilyn.

“And how do we begin to compare the results of our teacher candidates who go through the new program against the state assessment scores of other teachers’ students?” asked a member of the committee.

“That is a very good question,” replied Dan. “No one really seemed to have an answer for us. I kind of felt that they were leaving it up to us to see if we could figure it out.”

“And even if we were able to compare the scores of students taught by our graduates against the scores of students taught by teachers who didn’t graduate from our program, we would be selecting comparison teachers from the very buildings where we deliver the evidence-based strategies,” said Marilyn. “No, that’s not going to work.”

“And what about randomization? How are we going to do that?”

“This is really a mess,” said another member. “How are we going to get a plan figured out?”

“I don’t know, but we have to,” replied Marilyn.

“Maybe we can give the grant back,” said Dan. “We just started and haven’t used any of the money yet.”

“That’s a possibility,” said a member of the committee. Everyone chuckled.

Just then the vice president of the college popped his head into the meeting.

“I don’t want to disturb all of you, and pardon the intrusion, but I just wanted to tell all of you personally how delighted we are to have this grant project at the college and that we are going to be featuring it at the next collegewide ‘think tank’ meeting. So keep up the good work. I know you’ll make us proud!” And with that the vice president ducked back out the door.

Dan got up and locked the door. Then he turned to the group and said, “Anyone want to order pizza? I think we are going to be here for a while . . .”

SUMMARY OF EVALUATION ACTIVITIES AND FINDINGS

The evaluators were faced with the challenge of trying to show the impact of their teacher training program on the success of their teacher candidates’ future students. As part of the evaluation activities the evaluators assembled the project’s committee both to brainstorm ideas and to participate in a webinar with a representative from the federal agency that was funding the program. The evaluators and members of the committee shared with the representative their concerns about how difficult it would be to measure the impact of their new teacher training program using the state assessment scores of future students. In addition, they discussed the challenges of using a comparison group design. This design would require the evaluators to identify and isolate new teachers with similar characteristics who did not graduate from their teacher training program. They would then have to compare the performance on state standardized assessments of students whose teachers did and did not attend their teacher training program.
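
To make the comparison group design described above concrete, the sketch below shows, in Python, the kind of calculation such a design ultimately feeds: comparing assessment results for students taught by program graduates with results for students taught by other new teachers. The scores, group sizes, and the use of a pooled-standard-deviation effect size are illustrative assumptions for discussion, not the evaluators’ actual plan or the funder’s required analysis.

```python
# Minimal sketch: comparing assessment scores for students of program graduates
# (treatment) with scores for students of other new teachers (comparison).
# The scores below are invented illustrative values, not data from the case.
from statistics import mean, stdev

treatment_scores = [78, 84, 91, 73, 88, 80, 85, 79]   # students of program graduates
comparison_scores = [75, 70, 82, 68, 77, 74, 79, 72]  # students of other new teachers

def cohens_d(group_a, group_b):
    """Standardized mean difference using a pooled standard deviation."""
    n_a, n_b = len(group_a), len(group_b)
    pooled_var = (
        ((n_a - 1) * stdev(group_a) ** 2 + (n_b - 1) * stdev(group_b) ** 2)
        / (n_a + n_b - 2)
    )
    return (mean(group_a) - mean(group_b)) / pooled_var ** 0.5

difference = mean(treatment_scores) - mean(comparison_scores)
print(f"Mean difference: {difference:.1f} points")
print(f"Effect size (Cohen's d): {cohens_d(treatment_scores, comparison_scores):.2f}")
```

A difference in means or an effect size alone would not satisfy the funder’s strong evidence standard; random assignment (or at least carefully matched comparison groups) and an appropriate significance test would still be needed, which is precisely the difficulty the committee was wrestling with.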

FINAL THOUGHTS

As an evaluator, from time to time you may find yourself in an interesting position, faced with what seems to be an impossible task. As shown here in this case, Dan and Marilyn, along with members of the project committee, ended up with a lot of work ahead of them. They had to come up with an evaluation plan that was doable in the practical sense but also fulfilled the needs laid out by the funder. Understanding the components of a logic model
would certainly benefit the group and the plan that they were about to create.
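
A logic model of the kind the funder requested simply chains resources and activities to intended outcomes. The sketch below shows one hypothetical way such a chain could be written down in Python and reviewed by a committee; the entries are rough paraphrases of the project described in this case, not the grant’s actual logic model.

```python
# Minimal sketch: a logic model recorded as an ordered mapping from
# components to entries. All entries are hypothetical illustrations.
logic_model = {
    "inputs": [
        "federal grant funds",
        "education faculty and partner-district teachers",
        "web site of selected evidence-based strategies",
    ],
    "activities": [
        "infuse evidence-based strategies into existing course work",
        "train faculty and cooperating teachers in the strategies",
        "place candidates in classrooms that model the strategies",
    ],
    "outputs": [
        "number of courses revised",
        "number of faculty and teachers trained",
        "number of candidates placed in participating classrooms",
    ],
    "intermediate_outcomes": [
        "candidates use the strategies during student teaching",
    ],
    "end_outcomes": [
        "graduates' future students improve on standardized assessments",
    ],
}

for component, entries in logic_model.items():
    print(component.replace("_", " ").upper())
    for entry in entries:
        print(f"  - {entry}")
```

Laying the chain out this way makes it easier to see which links an evaluation plan must actually measure, which is where the committee’s planning would need to begin.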

KEY CONCEPTS

Strong evidence

DISCUSSION QUESTIONS

1. Take a look back through the case again. What other challenges can you see from either a data collection or ethical perspective that might have affected the evaluators’ work?

2. One question that was quickly asked had to do with how the evaluators were going to track the teacher candidates after they graduated. This very real question is one that becomes a challenge for many institutions of higher education. Be prepared to discuss some possible methods of following this population of graduates for longitudinal purposes, as well as possible challenges with these methods.

CLASS ACTIVITIES

1. Pretend you are on the project committee. Help out Dan and Marilyn and prepare an evaluation plan for the project. Be sure to include all aspects of the project in your plan, and prepare to gather formative as well as summative information for the committee. Most important, work out a plan to gather the kind of evidence required by the federal agency funding the project. You can also keep a list of all the challenges or problems that you foresee with your proposed plan.

SUGGESTED READING

Stufflebeam, D. (1999). Foundational models for 21st century program evaluation. Kalamazoo, MI: The Evaluation Center, Western Michigan University. Retrieved from https://www.globalhivmeinfo.org/Capacity%20Building/Occasional%20Papers/16%20Foundational%20Models%20for%2021st%20Century%20Program%20Evaluation.pdf

REFERENCES

Barton, J., & Collins, A. (1997). Portfolio assessment: A handbook for educators. Menlo Park, CA: Addison-Wesley.

Brown, R. D. (1985). Supervising evaluation practicum and intern students: A developmental model. Educational Evaluation and Policy Analysis, 7, 161–167.

Chelimsky, E. (1997). The political environment of evaluation and what it means for the development of the field. In E. Chelimsky & W. Shadish (Eds.), Evaluation for the 21st century (pp. 53–68). Thousand Oaks, CA: Sage.

Fitzpatrick, J. L., Sanders, J. R., & Worthen, B. R. (2004). Program evaluation: Alternative approaches and practical guidelines. Boston: Allyn & Bacon.

Klecker, B. (2000, March). Content validity of pre-service teacher portfolios in a standards-based program. Journal of Instructional Psychology, 27(1), 35–38.

Kvale, S., & Brinkmann, S. (2008). InterViews: Learning the craft of qualitative research interviewing. Thousand Oaks, CA: Sage.

Lodico, M. G., Spaulding, D. T., & Voegtle, K. H. (2006). Methods in educational research: From theory to practice. San Francisco, CA: Jossey-Bass.

Mathison, S. (2005). Encyclopedia of evaluation. Thousand Oaks, CA: Sage.

Morgan, B. M. (1999). Portfolios in a pre-service teacher field-based program: Evolution of a rubric for performance assessment. Education, 119, 416–426.

Newman, D. L., & Brown, R. D. (1996). Applied ethics for program evaluation. Thousand Oaks, CA: Sage.

Patton, M. Q. (1997). Utilization-focused evaluation: The new century text. (ERIC Document Reproduction Service No. ED 413 355).

Patton, M. Q., & Patrizi, P. (2005). Editors’ notes. New Directions for Evaluation, 2005(105), 1–3.

Rea, L. M., & Parker, R. A. (2005). Designing and conducting survey research: A comprehensive guide. San Francisco, CA: Jossey-Bass.

Rogers, P. J. (2005). Logic models. In S. Mathison (Ed.), Encyclopedia of evaluation (pp. 232–235). Thousand Oaks, CA: Sage.

Roth, W. M. (1994). Experimenting in a constructivist high school physics laboratory. Journal of Research in Science Teaching, 1, 197–223.

Scriven, M. (1967). The methodology of evaluation. In R. E. Stake (Ed.), Perspectives of curriculum evaluation (Vol. 1, pp. 39–55). Chicago, IL: Rand McNally.

Shannon, D. M., & Boll, M. (1996). Assessment of preservice teachers using alternative assessment methods. Journal of Personnel Evaluation in Education, 10, 117–135.

Spaulding, D. T., & Lodico, M. G. (2003, November). Providing hands-on learning opportunities for evaluators-in-training: A model for classroom design. Paper presented at the 18th annual conference of the American Evaluation Association, Reno, NV.

Spaulding, D. T., & Straut, D. (2005, March). Using e-portfolios to document teacher candidate experiences with technology integration during field placements: A validation study. Paper presented at the 15th annual conference of the Society for Information Technology and Teacher Education (SITE), Orlando, FL.

Spaulding, D. T., Straut, D., Wright, T., & Cakar, D. (2006, April). The fundamentals of validation: A three-phase plan for year-one evaluation of a PT3 project to transform teacher education through the use of technology. Paper presented at the 16th annual conference of the Society for Information Technology and Teacher Education (SITE), Phoenix, AZ.

Suchman, E. A. (1967). Evaluative research: Principles and practice in public service and social action programs. New York, NY: Russell Sage Foundation.

Trevisan, M. S. (2002). Enhancing practical evaluation training through long-term evaluation projects. American Journal of Evaluation, 23, 81–92.

Trevisan, M. S. (2004). Practical training in evaluation: A review of the literature. American Journal of Evaluation, 25, 255–272.

Tyler, R. W. (1949). Basic principles of curriculum and instruction. Chicago, IL: University of Chicago Press.

Weeks, E. C. (1982). The value of experiential approaches to evaluation training. Evaluation and Program Planning, 5, 21–30.

Wiggins, G. (1992). Creating tests worth taking. Educational Leadership, 49(8), 26–34.

Wiggins, G. (1998). Educative assessment: Designing assessment to inform and improve student performance. San Francisco, CA: Jossey-Bass.

INDEX

Abbey, Ed, 121–122 Accreditation, 56 Accuracy standards, 42–43 Action research: defined, 152; mathematics teacher training case study and, 152–154; overview of, 153 Activities documentation, 17, 18, 62, 65 Activity outputs documentation, 18 Advocacy, 160 Advocacy case study: activities and findings summary, 162–168; challenges faced in, 160–162; evaluation plan, 162; evaluator, 159; final thoughts, 168; observations carried out in, 162–163; observer effect noted in, 165–166; program overview, 160–162 AEA. See American Evaluation Association After the fact evaluation. See Ex post facto evaluation After-school initiative case study: activities and findings summary, 143–144; afterschool program examples in, 139; challenges faced by, 142–143; evaluation plan, 140–143; evaluator, 137–138; final thoughts, 144; higher collaboration of services and, 140; partnering and, 139; program outline, 138–140; RFP and, 138 Aggregating, 161

American Evaluation Association (AEA): Joint Committee and, 42; overview about, 11 Applied research approach, 10 Approach, evaluation: CIPP model and, 48–50; consumer-oriented, 44, 55; decision-based, 44, 47–50; defined, 43; eclectic, 56, 59; expertise-oriented, 44, 55–56; goal-free, 44, 45–46; objectives-based, 44–47; overview, 43–44; participatory, 44, 50–55, 89–97; summary, 56–57; Tylerian, 44, 46–47 Archival data: as baseline data, 30; defined, 29; disadvantages of, 30–31; as supplementary, 31; types of, 29–30 Authentic assessment movement, 104 Award, 74, 76 Barriers: defined, 124; high school science case study and, 124–126 Baseline data, 30, 131 Benchmarks: defined, 14; evaluation objectives and, 45; need for project and, 73; professional development technology case study and, 113–114; Tylerian approach and, 46–47 Benson, Sarah, 174 Blended approach, 7 Body, of evaluation report, 32–35 Brothers, Stephanie, 89–91 Brown, Samantha, 111–112

Budget, 74 Budget narrative, 74 Capacity, evaluation: communitybased mentor program and, 92, 93, 95; defined, 92; preestablished instruments and, 93 Capacity objective, 60–61, 65 Case studies: community-based mentor program, 89–97; framework guiding, 13–14; high school science, 121–126; inquiry-based instruction in mathematics, 71–86; mathematics teacher training, 147–155; parent involvement, 171–177; professional development technology, 111–117; reading achievement, 129–133; school advocacy, 159–168; special education teacher training, 181–186; statewide after-school initiative, 137–144; teacher candidates using technology, 101–108 Checklist: portfolio analysis, 107; survey and, 24, 26 CIPP model: context evaluation and, 48–49; creation of, 48; defined, 48; input evaluation and, 50; process evaluation and, 50; product evaluation and, 50 Client, 6, 71 Community-based mentor program case study: activities and findings summary, 97; evaluation capacity and, 92, 93, 95; evaluation matrix for, 94; evaluation plan, 92–96; evaluator for, 89–91; final thoughts, 97; focus group and, 95; goals, 91, 92; mixed-methods approach to, 92; participatory approach and, 89–97; participatory evaluation model

and, 90; purpose, 91–92; SINI and, 91; surveys and, 95–96 Competitive funds, 138 Confidentiality: defined, 161; school advocacy case study and, 159–168 Conflicting interests, 54 Consumer-oriented approach, 44, 55 Context. See CIPP model Context evaluation, 48–49 Cover page, 32 Curriculum, 138 Data analysis: portfolios and, 107; triangulation, 31 Data collection: alternative forms of, 29; archival, 29–31; baseline data and, 30, 131; capacity objective and, 60–61, 65; data sources and, 20–21; evaluation matrix and, 19–20; factors influencing, 20; focus group and, 27, 28–29; intent objective and, 60–61, 65; item analysis and, 148; journal, 29; needs assessment and, 148; observations as, 148–149; oneto-one interviews, 27, 28, 29; participatory approach and, 54–55; photography as, 29; portfolios and, 107; program evaluation, 6; surveys and, 20–27; tools, 20–31 Debrief, 163 Decision-based approach: CIPP model, 48–50; defined, 47–48; overview, 44 Demographics sections, of surveys, 25, 27 Director of special projects, 172 Documentation: activities, 17, 18, 62, 65; activity outputs, 18; end outcomes, 18; program implementation, 17–18

Eclectic approach, 56, 59 Eligibility, RFP and, 74, 75 End outcomes: defined, 64; documentation, 18; as evaluation objective, 66 English language learners, 173 e-portfolios, 105–106 Ethics: defined, 41; examples, 41–42; Joint Committee standards and, 42–43 Evaluation accountability standards, 43 Evaluation approach: CIPP model and, 48–50; consumer-oriented, 44, 55; decision-based, 44, 47–50; defined, 43; eclectic, 56, 59; expertise-oriented, 44, 55–56; goal-free, 44, 45–46; objectives-based, 44–47; overview, 43–44; participatory, 44, 50–55, 89–97; summary, 56–57; Tylerian, 44, 46–47 Evaluation capacity: communitybased mentor program and, 92, 93, 95; defined, 92; preestablished instruments and, 93 Evaluation findings: dissemination of, 35–36; overview, 14; responsibilities surrounding, 35; summer camp example, 36–37; use of, 36 Evaluation matrix: for communitybased mentor program, 94; defined, 78; inquiry-based instruction in mathematics, 78, 80; structure of, 19–20; summer camp project, 19; trust and, 20 Evaluation objectives: activities documentation, 17, 18, 62, 65; activity outputs documentation, 18; benchmarks and, 45; capacity, 60–61, 65; categories, 17–18; defined, 15; developing, 16–17; end outcomes, 66; end outcomes documentation, 18;

established, 16; evaluation matrix and, 19–20; example, 45; fidelity, 62, 65; how to use, 64, 66; inquiry-based instruction in mathematics, 79–80; intent, 60–61, 65; intermediate outcomes, 64, 65; outputs of activities, 63, 65; participant satisfaction, 63, 65; program implementation documentation, 17–18; project goals and, 16–17; rater reliability and, 61–62; summary, 66–67; summer camp project illustrating, 15, 16–17, 33, 36; sustainability, 66; teacher candidates using technology case study, 102; trust and, 16; validation, 61–62, 65; variety of, 60 Evaluation plan: community-based mentor program, 92–96; defined, 73; high school science case study, 122–124; inquiry-based instruction in mathematics, 78–81; mathematics teacher training case study and, 149–151; overview, 14; parent involvement case study, 172–176; professional development technology case study, 113–116; reading achievement case study, 131; RFP and, 73; school advocacy case study, 162; special education teacher training case study, 182–185; statewide afterschool initiative case study, 140–143; teacher candidates using technology case study, 102–107 Evaluation program theory, 89 Evaluation report: body of, 32–35; cover page, 32; executive summary, 32; guidelines, 31–32; participatory approach and, 53; sections, 32–35; summer camp example, 33–35; writing of, 31–35

Ex post facto evaluation: defined, 115; professional development technology case study and, 111–117 Executive summary, 32 Expertise-oriented approach: accreditation and, 56; defined, 55; internalized criteria used in, 55–56; overview, 44 External evaluators: challenges of, 12–13; defined, 72; participatory approach and, 50–51; perspective and, 11; role of, 12; trust and, 12–13 Extraneous variables, 130 Feasibility standards, 42 Fidelity objective, 62, 65 Findings, evaluation: dissemination of, 35–36; overview, 14; responsibilities surrounding, 35; summer camp example, 36–37; use of, 36 Focus group: community-based mentor program and, 95; data collection and, 27, 28–29; defined, 27; high school science case study and, 125; precautions, 28–29; principles of, 27, 28; recording, 29 Formative evaluation: applied research approaches compared with, 10; defined, 4; purpose of, 9–10; summative evaluation compared with, 9 Free response items, 26–27 Fuller, Dennis, 129 Funding agency: defined, 74; RFP and, 74–75 Funding cycle, 130 Goal-free approach: defined, 45; objectives-based approach compared with, 45–46; overview, 44

Grant writer, 73 Grassroots reading curriculum, 132–133 Hernandez, Sophia, 174 High school science case study: barrier faced by, 124–126; evaluation plan, 122–124; evaluators, 121–122; final thoughts, 126; focus group and, 125; meta-evaluation and, 121–126; program outline, 122; Science Away! and, 123–124 Higher collaboration of services, 140 Input. See CIPP model Input evaluation, 50 Inquiry-based instruction in mathematics case study: activities and findings summary, 81–86; components desired for, 77–78; evaluation matrix, 78, 80; evaluation objectives, 79–80; evaluation plan, 78–81; evaluator, 71–76; final thoughts, 86; needs assessment for, 76–77; phases, 78, 79; professional development and, 76; program, 76–78; project narrative and, 73; RFP and, 73–76; semistructured interviews and, 82; student outcomes, 84 Intent objective, 60–61, 65 Intermediate outcomes, 64, 65 Internal evaluators: advantages of, 13; perspective and, 11; role of, 12; trust and, 13 Inter-rater reliability, 61–62 Interviews: one-to-one, 27–29; protocol, 27; semistructured, 82, 149–150; structured, 82; teacher candidates using technology case study and, 103–104

Intra-rater reliability, 61 Item analysis, 148 Jackson, Dan, 181–182 Jackson, Linda, 171 Jackson, Matt, 171 Jackson, Seth, 147–148 Johnstown School District. See Professional development technology case study Joint Committee on Standards for Educational Evaluation: AEA and, 42; standards, 42–43 Jones, Miss (advocacy mentor), 165 Journals, 29 Key informant, 172 Lamb, Margaret, 129 Larson, Tina, 137–138 Likert scales, 22–25, 26 Lincoln, Barbara, 147–148 Logic model: defined, 114, 115; narrative and, 115; overview, 115; professional development technology case study and, 114, 115–116; special education teacher training case study and, 184; survey developed using, 116 Mathematics case study, inquirybased instruction in: activities and findings summary, 81–86; components desired for, 77–78; evaluation matrix, 78, 80; evaluation objectives, 79–80; evaluation plan, 78–81; evaluator, 71–76; final thoughts, 86; needs assessment for, 76–77; phases, 78, 79; professional development and, 76; program, 76–78; project narrative and, 73; RFP and, 73–76; semistructured interviews and, 82; student outcomes, 84

Mathematics teacher training case study: action research and, 152–154; activities and findings summary, 151–154; challenges faced in, 149–151; evaluation plan and, 149–151; evaluators, 147–148; final thoughts, 154–155; needs assessment and, 148; observations used in, 148–149; program outline, 148–149; semistructured interviews used in, 149–150; top-down approach and, 151–152 Matrix, evaluation: for communitybased mentor program, 94; defined, 78; inquiry-based instruction in mathematics, 78, 80; structure of, 19–20; summer camp project, 19; trust and, 20 Mentor program case study: activities and findings summary, 97; evaluation capacity and, 92, 93, 95; evaluation matrix for, 94; evaluation plan, 92–96; evaluator for, 89–91; final thoughts, 97; focus group and, 95; goals, 91, 92; mixed-methods approach to, 92; participatory approach and, 89–97; participatory evaluation model and, 90; purpose, 91–92; SINI and, 91; surveys and, 95–96 Meta-evaluation: defined, 122; high school science case study and, 121–126 Mixed-methods approach: to community-based mentor program case study, 92; parent involvement case study using, 171–177 Model: action research, 152–154; defined, 138; participatory evaluation, 90; top-down approach, 151–152

Model, CIPP: context evaluation and, 48–49; creation of, 48; defined, 48; input evaluation and, 50; process evaluation and, 50; product evaluation and, 50 Model, logic: defined, 114, 115; narrative and, 115; overview, 115; professional development technology case study and, 114, 115–116; special education teacher training case study and, 184; survey developed using, 116 Narrative: budget, 74; logic model and, 115; project, 73 National Council for Accreditation of Teacher Education (NCATE), 56 Need for project, 73 Needs assessment: defined, 76; for inquiry-based instruction in mathematics and, 76–77; mathematics teacher training case study and, 148 Nephews, Dana, 160 Objectives, evaluation: activities documentation, 17, 18, 62, 65; activity outputs documentation, 18; benchmarks and, 45; capacity, 60–61, 65; categories, 17–18; defined, 15; developing, 16–17; end outcomes, 66; end outcomes documentation, 18; established, 16; evaluation matrix and, 19–20; example, 45; fidelity, 62, 65; how to use, 64, 66; inquiry-based instruction in mathematics, 79–80; intent, 60–61, 65; intermediate outcomes, 64, 65; outputs of activities, 63, 65; participant satisfaction, 63, 65; program implementation documentation, 17–18; project goals and, 16–17;

rater reliability and, 61–62; summary, 66–67; summer camp project illustrating, 15, 16–17, 33, 36; sustainability, 66; teacher candidates using technology case study, 102; trust and, 16; validation, 61–62, 65; variety of, 60 Objectives-based approach: activities documentation, 62, 65; benchmarks and, 45; benefits, 47; capacity and, 60–61, 65; defined, 44; disadvantages, 47; early, 46–47; end outcomes and, 18, 64, 66; as evaluation approach, 44–47; fidelity and, 62, 65; goal-free approach compared with, 45–46; inquiry-based instruction in mathematics, 71–86; intent and, 60–61, 65; intermediate outcomes and, 64, 65; as most utilized, 14; objectives use and, 64, 66; outputs of activities and, 63, 65; overview, 44–45; participant satisfaction and, 63, 65; rater reliability and, 61–62; RFP and, 73–76; summary, 66–67; sustainability and, 66; teacher candidates using technology case study and, 101–108; Tylerian approach and, 44, 46–47; validation and, 61–62, 65; variety of objectives and, 60. See also Evaluation objectives Observation: mathematics teacher training case study and, 148–149; school advocacy case study and, 162–163 Observer effect, 165–166 One-to-one interviews: probes and, 27, 28; protocol, 27; recording, 29; summer camp example, 28 Open-ended or free response items, 26–27

Outputs of activities objective, 63, 65 Parent involvement case study: activities and findings summary, 176–177; evaluation plan, 172–176; evaluators, 171; final thoughts, 177; key informant and, 172; mixed-methods approach used in, 171–177; program overview, 171–172; survey used in, 172–173 Participant satisfaction objective, 63, 65 Participatory approach: challenges, 54–55; community-based mentor program and, 89–97; conflicting interests and, 54; data collection and, 54–55; evaluation report and, 53; evaluator and, 50–51; example, 51–52; overview, 44; stakeholders and, 52–53; strengths, 53 Participatory evaluation model, 90 Partner, 139 Perspective, 11 Phonics, 7 Photography, 29 Plan, evaluation: community-based mentor program, 92–96; defined, 73; high school science case study, 122–124; inquiry-based instruction in mathematics, 78–81; mathematics teacher training case study and, 149–151; overview, 14; parent involvement case study, 172–176; professional development technology case study, 113–116; reading achievement case study, 131; RFP and, 73; school advocacy case study, 162; special education teacher training case study, 182–185; statewide after-school initiative case study,

140–143; teacher candidates using technology case study, 102–107 Portfolios: authentic assessment movement and, 104; checklist for analysis of, 107; defined, 101; e-portfolios, 105–106; teacher candidates using technology case study and, 104–107; technology and, 105; uses of, 104–105 Post, Jonathan, 92 Preestablished instruments, 93 Probes, 27, 28 Process. See CIPP model Process evaluation, 50 Product. See CIPP model Product evaluation, 50 Professional development, 76 Professional development technology case study: activities and findings summary, 116–117; benchmarks and, 113–114; evaluation plan, 113–116; evaluator, 111–112; ex post facto evaluation and, 111–117; final thoughts, 117; logic model and, 114, 115–116; program outline, 112–113 Program: defined, 3, 5; implementation documentation, 17–18; merit determination, 43; types of, 5 Program evaluation: changing practice and, 6–8; data collection, 6; defined, 5; findings and recommendations, 8; formative and summative evaluation and, 8–10; internal and external evaluators, 11–13; objectivesbased, 14; setting and participants, 6; summary, 37–38; training in, 10–11; vignette, 3–4. See also specific subject Project narrative, 73 Propriety standards, 42–43

Protocol, interviews, 27 Proven practices: cautions about, 133; defined, 130 Rater reliability, 61–62 Reading achievement case study: activities and findings summary, 131–133; evaluation plan, 131; evaluation questions for, 130; evaluators, 129; final thoughts, 133; program outline, 130–131 Reading Right program: findings about, 131–132; grassroots reading curriculum and, 132; parameters, 130–131; as proven practice, 130 Recording, 29 Report, evaluation: body of, 32–35; cover page, 32; executive summary, 32; guidelines, 31–32; participatory approach and, 53; sections, 32–35; summer camp example, 33–35; writing of, 31–35 Request for proposal (RFP): award and, 74, 76; budget narrative in, 74; common elements, 73–74; defined, 73; eligibility and, 74, 75; evaluation plan in, 73; funding agency and, 74–75; inquiry-based instruction in mathematics case study and, 73–76; need for project in, 73; process overview, 74–76; project narrative in, 73; SINI and, 75; statewide evaluation and, 138 Research, action: defined, 152; mathematics teacher training case study and, 152–154; overview of, 153 Research approach, applied, 10 RFP. See Request for proposal Rogers, P. J., 115

School advocacy case study: activities and findings summary, 162–168; challenges faced in, 160–162; evaluation plan, 162; evaluator, 159; final thoughts, 168; observations carried out in, 162–163; observer effect noted in, 165–166; program overview, 160–162 Schools in Need of Improvement (SINI): community-based mentor program and, 91; RFP and, 75 Science Away! (teacher training institute), 123–124 Science case study: barrier faced by, 124–126; evaluation plan, 122–124; evaluators, 121–122; final thoughts, 126; focus group and, 125; meta-evaluation and, 121–126; program outline, 122; Science Away! and, 123–124 Scriven, Michael, 55 Semistructured interviews: defined, 82; inquiry-based instruction in mathematics case study and, 82; mathematics teacher training case study and, 149–150 Simpson, Jason, 101 SINI. See Schools in Need of Improvement Smith, Marilyn, 181–182 Smith, Shirah, 159 Special education teacher training case study: activities and findings summary, 185; evaluation plan, 182–185; evaluators, 181–182; final thoughts, 185–186; logic model and, 184; program overview, 182; strong evidence and, 183 Stakeholders: evaluation matrix and, 19–20; participatory approach and, 52–53; project narrative and, 73; survey and, 21

Statewide after-school initiative case study: activities and findings summary, 143–144; after-school program examples in, 139; challenges faced by, 142–143; evaluation plan, 140–143; evaluator, 137–138; final thoughts, 144; higher collaboration of services and, 140; partnering and, 139; program outline, 138–140; RFP and, 138 Statewide evaluation, 138 Stevenson, Daphne, 101 Strong evidence, 183 Structured interview, 82 Stufflebeam, David, 48 Suchman, Edward, 115 Summative evaluation, 4; defined, 8; formative evaluation compared with, 9; purpose of, 8–9 Summer camp project: evaluation findings example, 36–37; evaluation matrix for, 19; evaluation report example, 33–35; objectives, 15, 16–17, 33, 36; one-to-one interview example, 28; survey, 22–25 Surveys: checklists and, 24, 26; community-based mentor program and, 95–96; data collection and, 20–27; data sources and, 20–21; demographics sections, 25, 27; designing, 21, 22–25; elements of successful, 21, 26; Likert scales and, 22–25, 26; logic model used to develop, 116; open-ended or free response items in, 26–27; parent involvement case study and, 172–173; scales for collecting data via, 21, 26–27; stakeholders and, 21; summer camp, 22–25; teacher candidates using

technology case study and, 103–104 Sustainability objective, 66 Teacher candidate, 101 Teacher candidates using technology case study: activities and findings summary, 108; evaluation objectives, 102; evaluation plan, 102–107; evaluators, 101; final thoughts, 108; interview and, 103–104; objectives-based approach and, 101–108; portfolio analysis checklist, 107; portfolios and, 104–107; program outline, 102; surveys and, 103–104 Teacher training case study, mathematics: action research and, 152–154; activities and findings summary, 151–154; challenges faced in, 149–151; evaluation plan and, 149–151; evaluators, 147–148; final thoughts, 154–155; needs assessment and, 148; observations used in, 148–149; program outline, 148–149; semistructured interviews used in, 149–150; top-down approach and, 151–152 Teacher training case study, special education: activities and findings summary, 185; evaluation plan, 182–185; evaluators, 181–182; final thoughts, 185–186; logic model and, 184; program overview, 182; strong evidence and, 183 Technology, 105 Technology case study, professional development: activities and findings summary, 116–117; benchmarks and, 113–114; evaluation plan, 113–116; evaluator, 111–112;

ex post facto evaluation and, 111–117; final thoughts, 117; logic model and, 114, 115–116; program outline, 112–113 Technology case study, teacher candidates using: activities and findings summary, 108; evaluation objectives, 102; evaluation plan, 102–107; evaluators, 101; final thoughts, 108; interview and, 103–104; objectives-based approach and, 101–108; portfolio analysis checklist, 107; portfolios and, 104–107; program outline, 102; surveys and, 103–104 Thomas (program evaluator), 71–73 Tools, data collection, 20–31

Top-down approach, 151–152 Treatment, 10 Triangulation, 31 Trust: evaluation matrix and, 20; evaluation objectives and, 16; external evaluators and, 12–13; internal evaluators and, 13 Tyler, Ralph, 46 Tylerian approach: benchmarks and, 46–47; defined, 46; overview, 44 Utility standards, 42 Validation: defined, 61; overview, 65; rater reliability and, 61–62 Whole language, 7 Wright, Jennifer, 121–122