TENSIONS IN TEACHER PREPARATION: ACCOUNTABILITY, ASSESSMENT, AND ACCREDITATION
ADVANCES IN RESEARCH ON TEACHING
Series Editor: Stefinee Pinnegar

Recent Volumes:
Volume 1: Teachers for Understanding and Self Regulation
Volume 2: Teacher Knowledge of Subject Matter
Volume 3: Planning and Managing Learning Tasks
Volume 4: Case Studies of Teaching and Learning
Volume 5: Learning and Teaching Elementary Subjects
Volume 6: Teaching and Learning History
Volume 7: Expectations in the Classroom
Volume 8: Subject-Specific Instructional Methods and Activities
Volume 9: Social Constructivist Teaching: Affordances and Constraints
Volume 10: Using Video in Teacher Education
Volume 11: Learning From Research on Teaching: Perspective, Methodology and Representation
ADVANCES IN RESEARCH ON TEACHING VOLUME 12
TENSIONS IN TEACHER PREPARATION: ACCOUNTABILITY, ASSESSMENT, AND ACCREDITATION EDITED BY
LYNNETTE B. ERICKSON Brigham Young University, UT, USA
NANCY WENTWORTH Brigham Young University, UT, USA
United Kingdom – North America – Japan – India – Malaysia – China
Emerald Group Publishing Limited
Howard House, Wagon Lane, Bingley BD16 1WA, UK
First edition 2010
Copyright © 2010 Emerald Group Publishing Limited
Reprints and permission service. Contact: [email protected]
No part of this book may be reproduced, stored in a retrieval system, transmitted in any form or by any means electronic, mechanical, photocopying, recording or otherwise without either the prior written permission of the publisher or a licence permitting restricted copying issued in the UK by The Copyright Licensing Agency and in the USA by The Copyright Clearance Center. No responsibility is accepted for the accuracy of information contained in the text, illustrations or advertisements. The opinions expressed in these chapters are not necessarily those of the Editor or the publisher.
British Library Cataloguing in Publication Data
A catalogue record for this book is available from the British Library
ISBN: 978-0-85724-099-6
ISSN: 1479-3687 (Series)
Emerald Group Publishing Limited, Howard House, Environmental Management System has been certified by ISOQAR to ISO 14001:2004 standards Awarded in recognition of Emerald’s production department’s adherence to quality systems and processes when preparing scholarly journals for print
CONTENTS

LIST OF CONTRIBUTORS  ix
DISCLAIMER  xiii
CHAPTER 1  TENSIONS: NEGOTIATING THE CHALLENGES OF ACCOUNTABILITY AND ACCREDITATION (Nancy Wentworth and Lynnette B. Erickson)  1
CHAPTER 2  EDUCATING THE EDUCATORS: ACCREDITATION AS A TEACHING AND LEARNING TOOL (Ann Monroe-Baillargeon)  11
CHAPTER 3  DECORATING FOR NCATE (Richard D. Osguthorpe and Jennifer L. Snow-Gerono)  35
CHAPTER 4  TENSIONS, COLLABORATION, AND PIZZA CREATE PARADIGM SHIFTS IN A TEACHER EDUCATION PROGRAM (Lynnette B. Erickson, Nancy Wentworth and Sharon Black)  55
CHAPTER 5  INTERNATIONAL PERSPECTIVES ON ACCOUNTABILITY AND ACCREDITATION: ARE WE ASKING THE RIGHT QUESTIONS? (Brenda L. H. Marina, Cindi Chance and Judi Repman)  69
CHAPTER 6  LIVING WITH ACCREDITATION: REALIZATIONS AND FRUSTRATIONS OF ONE SMALL UNIVERSITY (Judith A. Neufeld)  87
CHAPTER 7  IS THIS DATA USEFUL? THE IMPACT OF ACCREDITATION ON THE DEVELOPMENT OF ASSESSMENTS (Sam Hausfather and Nancy Williams)  105
CHAPTER 8  MAKING STONE SOUP: TENSIONS OF NATIONAL ACCREDITATION FOR AN URBAN TEACHER EDUCATION PROGRAM (Carolyne J. White, Joelle J. Tutela, James M. Lipuma and Jessica Vassallo)  123
CHAPTER 9  DEVELOPING DATA SYSTEMS FOR CONTINUOUS IMPROVEMENT UNDER THE NCATE STRUCTURE: A CASE STUDY (Elaine Ackerman and John H. Hoover)  137
CHAPTER 10  WHAT'S THAT NOISE? THINGS THAT KEEP US AWAKE AT NIGHT: THE COST OF UNEXAMINED ASSUMPTIONS IN PRE-SERVICE ASSESSMENT AND ACCREDITATION (James H. Powell, Letitia Hochstrasser Fickel, Patricia Chesbro and Nancy Boxler)  163
CHAPTER 11  REVISITING SELF IN THE MIDST OF NCATE AND OTHER ACCOUNTABILITY DEMANDS (Cheryl J. Craig)  183
CHAPTER 12  DOES NATIONAL ACCREDITATION FOSTER TEACHER PROFESSIONALISM? (Ken Jones and Catherine Fallona)  199
CHAPTER 13  SOOTHING CERBERUS: THE WYOMING ODYSSEY (Linda Hutchison, Alan Buss, Judith Ellsworth and Kay Persichitte)  213
CHAPTER 14  ACCREDITATION: RESPONDING TO A CULTURE OF PROGRAM EVALUATION (Linda E. Pierce and Susan Simmerman)  231
CHAPTER 15  WESTERN GOVERNORS UNIVERSITY: A RADICAL MODEL FOR PRESERVICE TEACHER EDUCATION (Thomas W. Zane, Janet W. Schnitz and Michael H. Abel)  253
CHAPTER 16  TRANSFORMATION FROM TENSION TO TRIUMPH: THREE PERSPECTIVES ON THE NCATE PROCESS (James M. Shiveley, Teresa McGowan and Ellen Hill)  271
CHAPTER 17  REFLECTIONS ON THE SHARED ORDEAL OF ACCREDITATION ACROSS INSTITUTIONAL NARRATIVES (Lynnette B. Erickson and Nancy Wentworth)  293
ABOUT THE AUTHORS  319
LIST OF CONTRIBUTORS

Michael H. Abel
Western Governors University, Salt Lake City, UT, USA
Elaine Ackerman
Concordia College, Moorhead, MN, USA
Sharon Black
David O. McKay School of Education, Brigham Young University, Provo, UT, USA
Nancy Boxler
College of Education, University of Alaska Anchorage, Anchorage, AK, USA
Alan Buss
University of Wyoming, Laramie, WY, USA
Cindi Chance
College of Education, Georgia Southern University, Statesboro, GA, USA
Patricia Chesbro
College of Education, University of Alaska Anchorage, Anchorage, AK, USA
Cheryl J. Craig
University of Houston, Houston, TX, USA
Judith Ellsworth
University of Wyoming, Laramie, WY, USA
Lynnette B. Erickson
Department of Teacher Education, Brigham Young University, Provo, UT, USA
Catherine Fallona
University of Southern Maine, Gorham, ME, USA
Letitia Hochstrasser Fickel
College of Education, University of Alaska Anchorage, Anchorage, AK, USA
Sam Hausfather
School of Education, Maryville University of St. Louis, St. Louis, MO, USA
Ellen Hill
School of Education, Health and Society, Miami University, Oxford, OH, USA
John H. Hoover
St. Cloud State University, College of Education, St. Cloud, MN, USA
Linda Hutchison
University of Wyoming, Laramie, WY, USA
Ken Jones
University of Southern Maine, Gorham, ME, USA
James M. Lipuma
Humanities Department, New Jersey Institute of Technology, Newark, NJ, USA
Brenda L. H. Marina
College of Education, Georgia Southern University, Statesboro, GA, USA
Teresa McGowan
School of Education, Health and Society, Miami University, Oxford, OH, USA
Ann Monroe-Baillargeon
Alfred University, Division of Education, Alfred, NY, USA
Judith A. Neufeld
College of Education, Lander University, Greenwood, SC, USA
Richard D. Osguthorpe
Boise State University, College of Education, Boise, ID, USA
Kay Persichitte
University of Wyoming, Laramie, WY, USA
Linda E. Pierce
School of Education, Utah Valley University, Orem, UT, USA
James H. Powell
College of Education, University of Alaska Anchorage, Anchorage, AK, USA
Judi Repman
College of Education, Georgia Southern University, Statesboro, GA, USA
Janet W. Schnitz
Western Governors University, Salt Lake City, UT, USA
James M. Shiveley
School of Education, Health and Society, Miami University, Oxford, OH, USA
Susan Simmerman
School of Education, Utah Valley University, Orem, UT, USA
Jennifer L. Snow-Gerono
Boise State University, College of Education, Boise, ID, USA
Joelle J. Tutela
Department of Urban Education, Rutgers University, Newark, NJ, USA
Jessica Vassallo
Department of Urban Education, Rutgers University, Newark, NJ, USA
Nancy Wentworth
Department of Teacher Education, Brigham Young University, Provo, UT, USA
Carolyne J. White
Department of Urban Education, Rutgers University, Newark, NJ, USA
Nancy Williams
School of Education, Maryville University of St. Louis, St. Louis, MO, USA
Thomas W. Zane
Western Governors University, Salt Lake City, UT, USA
DISCLAIMER

The realities of accreditation are unique and varied not only between institutions but also between individuals within programs and institutions. The chapters in this book are written through the unique lenses of each set of authors, which have been shaped by their personal philosophies and experiences, and do not necessarily represent the institutions, individuals, or agencies identified in this work or the positions of the publisher.
CHAPTER 1
TENSIONS: NEGOTIATING THE CHALLENGES OF ACCOUNTABILITY AND ACCREDITATION
Nancy Wentworth and Lynnette B. Erickson

ABSTRACT
Brigham Young University has been consistently accredited by the National Council for Accreditation of Teacher Education (NCATE) since 1954. Our accreditation reports of past years focused on input information – general goals, complicated organization diagrams, and clinical performance assessments. When NCATE moved from inputs to outcomes with evidence grounded in measurable data, we worked collaboratively among teacher education faculty, faculty from the arts and sciences colleges, and public school partners to overhaul our assessment system and design new instruments. Our current accreditation reports include course and clinical assessments aligned with specific program outcomes, statistical charts detailing the levels at which these outcomes are being met, and documentation of programmatic decisions based on the findings of our assessments. Moving from input descriptions to output evidence was a painful process. However, we have come to appreciate the usefulness and value of our experiences, the tools that emerged, and the new decision-making processes we now engage in. This chapter is a recounting of our frustrations and the lessons we learned as we moved toward a culture of data-based decision-making.

Tensions in Teacher Preparation: Accountability, Assessment, and Accreditation
Advances in Research on Teaching, Volume 12, 1–10
Copyright © 2010 by Emerald Group Publishing Limited
All rights of reproduction in any form reserved
ISSN: 1479-3687/doi:10.1108/S1479-3687(2010)0000012004
Accountability, assessment, and accreditation are the new measures of trust in education. A recently renewed call for educational accountability has rapidly impacted all aspects and levels of education: elementary and secondary public schools; special education, Title I, and programs for English Language Learners; as well as institutions of higher education. In the spirit of accountability, accreditation at all these levels has become the way to certify that educational institutions are trustworthy stewards of education. Nowhere is this more recognizable than in the assessment systems used to accredit teacher preparation programs. Although accreditation for teacher preparation programs has existed in the United States since the early 1950s, current mandates require a move from certifying that programs meet quality standards to a demand that these programs also provide evidence that their graduates are competent and qualified (Ewell, 2008). These evidentiary efforts are now guided by standards for teacher quality established by federal and professional bodies that then authorize agencies to evaluate and accredit teacher preparation programs. Requirements for demonstrating program effectiveness have changed from merely reporting program inputs, goals, and resources to a demand for data-based evidence that documents and validates teacher preparation programs as having educated teacher candidates who successfully achieve mandated outcomes (Brittingham, 2009). Under the current system, accreditation has shifted from input to outcome accountability. This represents a significant paradigm shift in education. A paradigm is a fundamental way of believing and knowing the world. As a concept, it allows us to conceptualize relationships and make connections between ourselves and our world. The term paradigm shift represents a major change in a certain thought pattern – a radical change in personal beliefs, complex systems, or organizations that replaces the former way of thinking or organizing with a radically different one (Kuhn, 1970). The accountability movement that triggered this paradigm shift is reminiscent of the argument about whether teaching is an art or a science (Dewey, 1929; Gage, 1978). Teachers and teacher educators have long debated this issue. Many believe that teachers are born to teach and that they intuitively know when they are influencing and teaching students. To these educators, teaching is an art not learned from classes, textbooks, or research. Being a teacher is who and what they are. Others believe that with enough
training anyone can learn to teach by applying the theories and skills of effective teaching in the classroom (Brophy & Good, 1986). For most educators, however, teaching blends the art of the teacher with theories and skills – the science of teaching. The conversation regarding the art and science of teaching has continued since Dewey argued for this balance, saying that the science of teaching helps the educator be "more intelligent and complete in his observations, knowing what to look for, and is guided in his interpretation of what he sees because he now sees it in light of a larger set of relationships" (p. 30). For classroom lessons to be truly effective, teachers must examine and develop their knowledge and skills so that they can achieve that dynamic fusion of art and science that results in exceptional teaching and outstanding student achievement. The accreditation paradigm shift from input to output measures in evaluating the quality of teacher preparation programs mirrors this argument about the competing visions of the art and science of teaching. Formerly used input models of accreditation required that teacher education institutions present evidence that they provided quality programmatic experiences for teacher candidates (e.g., conceptual frameworks, program goals, qualifications of personnel, and placements). The new output model requires that institutions not only provide evidence of the quality of their programs but also document their effectiveness in preparing teacher candidates who meet established standards. In this new model, institutions must report assessment data documenting the performances of their candidates in course and clinical experiences and then use those data for program improvement. These new accreditation requirements for input, output, and decision-making data create conditions for personal and programmatic conflict that often result in tensions between teacher preparation programs, their faculties, and accreditation bodies. Paradigm shifts required by accreditation in this current emphasis on accountability have been met with mixed responses. Testimonies offered by teacher educators about their accreditation experiences have been shared in faculty meetings, coffee shops, and professional association venues. Participants clearly articulate these tensions as they express their dissatisfaction and frustration with the process, yet all the while they take consolation in their accomplishments. The remainder of this chapter provides a context to build reader understanding of this new accreditation process and of the chapters that follow, wherein authors share their accounts and reveal the tensions they experienced as teacher education faculty meeting accreditation requirements. As readers examine the institutional narratives in this book, we hope they will consider
how each of the different stakeholders participating in accreditation interprets, enacts, and experiences this new call for accountability and assessment.
MASTERS FOR ACCOUNTABILITY

Although accountability is not a foreign concept in education, the focus under the recent federal mandate, No Child Left Behind (NCLB), includes specific expectations for student achievement along with new standards for teacher competency. As a result of this top-down directive, districts are required to collect evidence of student progress, and inservice teachers must demonstrate their competence as well, usually by passing standardized tests that assess their knowledge of subject matter and skills of teaching. Some states have moved beyond simply requiring that teachers demonstrate their competence through such tests and have begun to link student assessment performance directly to teacher effectiveness (e.g., Sanders' value-added models). Under the new accountability requirements, institutions are also being asked to demonstrate the quality of their teacher preparation programs by providing data to document the quality of their students – teacher candidates. Although teacher preparation programs may use this evidence to gauge the merit of their students and programs, accreditation bodies external to teacher education programs require these data to judge and evaluate teacher candidate and program quality. Numerous other groups, such as state offices of education, public schools, and the institutions of higher education in which teacher preparation programs reside, also use the data for various evaluation purposes. Each of these represents a different master who establishes expectations and may hold an intrusive leverage point that brings potentially conflicting interests, resulting in tensions for teacher preparation programs.
National Accreditation Agencies

National accreditation is vital to preparation programs since it certifies that they meet the highest professional standards for teacher preparation and provides public recognition of the quality of their programs. The standards for national accreditation in educator preparation are designed to ensure that public interests in education are attended to, that candidate learning outcomes are focused on, and that candidate learning and program decisions are grounded in evidence. Therefore, national accreditation is a
major part of validating and granting credibility to teacher education programs. Federal legislation (No Child Left Behind, 2001) prompted the National Council for Accreditation of Teacher Education (NCATE) and the Teacher Education Accreditation Council (TEAC) to redefine the process and requirements for teacher preparation program accreditation. NCATE moved from program inputs to candidate performance outcomes, along with appropriate assessments, to provide evidence of candidate learning and data for program decision-making. TEAC required programs to define their claims, to develop and collect data using valid and reliable instruments that measure those claims, and to demonstrate how the data are used for program improvement. As colleges of education seek national accreditation, whether through NCATE or TEAC, these changed requirements typically prove to be significant obstacles in navigating the process. These challenges increase the ambivalence associated with accreditation and fuel questions about its possible personal and institutional benefits.
State Offices of Education

State offices of education have also been caught up in the charge for accountability and evidence-based decision-making. Under NCLB, each state has the responsibility to ensure that every child has a quality teacher, and as a result, many states have made changes to their licensure requirements. Like standards for accreditation, these licensure requirements frame the design and implementation of teacher preparation programs. In altering teacher licensure requirements, states have invoked their own mandates, some of which conflict with those of accreditation. These new expectations can be frustrating and confusing for institutions involved in developing teacher preparation programs and preparing teachers for licensure. Although some states have agreed to license all teacher candidates graduating from NCATE- or TEAC-approved programs, the negotiations involved in this process have created ambiguity and tension for teacher educators as they strive to meet the expectations of several bodies.
Public Schools

Teacher education programs depend on public schools for providing classroom settings where their candidates can put theories of teaching and
learning into practice. Education faculty often work with public school teachers to create and assess curriculum and to provide school districts with professional development for inservice teachers. These close associations influence teacher educators' understandings of the impact of NCLB legislation on public schools and teachers. In addition, as teacher preparation programs prepare for accreditation, they often seek to collaborate with public school teachers in establishing criteria and formats for teacher candidate performance assessments. Although the practical perspectives of public school educators are an important factor in the education of teachers, these collaborations often complicate the assessment and accreditation process. For example, tensions arise when public school teachers and teacher educators have conflicting expectations for evaluating teacher candidates' clinical performance. One source of difference is whether novice teachers should be held to the same standards of performance as effective seasoned teachers. Another tension resides in how the results of summative clinical performance instruments designed to assess teacher candidates are used in the hiring process. The primary responsibility for negotiating these tensions and others like them falls to teacher educators, since their programs depend on public schools as essential components in the preparation of teachers. Therefore, although schools may not have direct authority over teacher education programs, teacher educators feel compelled to listen and respond to their requests, which makes schools another master for accountability.
Institutions of Higher Education

Teacher preparation programs must not only align with national, state, and professional preparation standards; they also reside in institutions of higher education that seek accreditation from a separate set of agencies. Therefore, as institutions of higher education participate in accreditation by regional accrediting organizations (e.g., North Central Association of Colleges and Schools, Northwest Commission on Colleges and Universities, New England Association of Schools and Colleges), they put pressure on teacher education programs to provide evidence that the goals of teacher preparation align with the aims and goals of the university. This additional layer of accreditation demands creates additional tensions for faculty. They must not only demonstrate that their program and course objectives and assessments meet the requirements of NCATE or TEAC; they must also articulate how these same objectives and assessments align with
university aims and goals. Such exercises in meeting competing accreditation demands lead some faculty to express frustration with the process. As Daigle and Cuocco (2002) observed, "attempts to apply traditional forms of accountability and 'bean counting' to an academic institution are considered by some to be unnecessary, irrelevant, misleading, and insulting." Meeting the demands from national agencies, state departments of education, local school districts, and their own institutions of higher education can be a tension-filled enterprise for teacher educators involved in the accountability process. To say that teacher education programs will become accountable for preparing teachers so that all children learn and achievement gaps close in our public schools appears at first to be a lofty and noble goal, but on closer examination it is rather simplistic and naïve.
STORIES OF ACCREDITATION

This book is a collection of stories, told from both the institutional viewpoint and that of individual faculty within teacher education programs, about their experiences in trying to meet the competing and sometimes conflicting demands of the multiple masters that are always part of the accreditation process. To identify potential contributors, we contacted institutions listed on the NCATE or TEAC websites, inviting them to participate in the book by providing a narrative account documenting their accreditation experience. As institutions responded to the invitation, we identified promising chapter outlines, and we attended to whether they would collectively represent a blend of programs. We wanted to include a balanced set of accounts representing public and private institutions, both large and small, from various areas of the United States, that have participated in NCATE or TEAC accreditation. The chapters provide this kind of balanced overview of teacher education programs that hold themselves accountable through engaging in accreditation. Table 1 provides evidence of the diversity represented in this book. It indicates where institutions are located, who accredited them and when, whether they are public or private, the makeup of their student demographics (including the total number of students in their program and the proportion that are male or female), and, finally, the number of faculty involved, whether tenure-track, adjunct, or clinical. As the title of this book suggests, we invited authors to share their personal, programmatic, and/or institutional narratives and the tensions that emerged in the process of program accreditation.
Table 1. Summary of Contributors' Institutions.

Alfred University (Alfred), New York, http://las.alfred.edu/education/. Contributor: Monroe-Baillargeon. Accreditation: TEAC, Initial: 2007, Audited: 2009 (anticipate initial accreditation to be awarded for 3/10 to 3/15, 2010). Private. Students: Total 20–30; Male 30%; Female 70%. Faculty: Tenure track 4; Adjunct 3; Clinical 2.

Boise State University (Boise), Idaho, http://education.boisestate.edu/. Contributors: Osguthorpe and Snow-Gerono. Accreditation: NCATE, Last: Spring 2009, Next: TBA. Public. Students: Total 1945; Male NA; Female NA. Faculty: Tenure track 83; Adjunct 25; Clinical 0.

Brigham Young University (BYU), Utah, http://education.byu.edu/. Contributors: Erickson et al. Accreditation: NCATE, Initial: 1954, Last: 2005; TEAC, Initial: 2009, Next: 2014. Private religious. Students: Total 1110; Male 23.2%; Female 76.8%. Faculty: Tenure track 85; Adjunct 24; Clinical 21.

Georgia Southern University (GSU), Georgia, http://coe.georgiasouthern.edu/. Contributors: Marina et al. Accreditation: NCATE, Initial: 1954, Last: 2006, Next: 2013. Public. Students: Total 2140; Male NA; Female NA. Faculty: Tenure track 70; Adjunct 16; Clinical 935 (in schools).

Lander University (Lander), South Carolina, www.lander.edu/education/. Contributor: Neufeld. Accreditation: NCATE, Initial: 1999, Last: 2004, Next: 2012. Public. Students: Total 2836; Male 33%; Female 67%. Faculty: Tenure track 13; Adjunct 27; Clinical 0.

Maryville University (Maryville), Missouri, www.maryville.edu/ed. Contributors: Hausfather and Williams. Accreditation: NCATE, First: 1980, Last: 2008, Next: 2015. Private. Students: Total 166; Male 25% (university wide); Female 75% (university wide). Faculty: Tenure track 20; Adjunct 24; Clinical 5.

Rutgers University (Rutgers), New Jersey, www.rutgers.edu/. Contributors: White et al. Accreditation: TEAC, Initial: 2009, Last: 2009, Next: 2014. Public. Students: Total 67; Male 27%; Female 73%. Faculty: Tenure track 7; Adjunct 2; Clinical 1.
St. Cloud State University (St. Cloud), Minnesota, www.stcloudstate.edu/coe/tqe. Contributors: Ackerman and Hoover. Accreditation: NCATE, Initial: 1954, Last: 2008, Next: 2015. Public. Students: Total 1130; Male 19%; Female 81%. Faculty: Tenure track 91; Adjunct 24; Clinical 0.

University of Alaska-Anchorage (UAA), Alaska, www.uaa.alaska.edu/coe/. Contributors: Powell et al. Accreditation: NCATE to TEAC, Initial: 2005, Next: 2010. Public. Students: not reported. Faculty: Tenure track 8; Adjunct 9; Clinical 5.

University of Houston (UH), Texas, www.coe.uh.edu. Contributor: Craig. Accreditation: NCATE, Initial: 1954, Last: 2007, Next: 2014. Public. Students: Total 1731; Male 30%; Female 70%. Faculty: Tenure track 27; Adjunct 13; Clinical 0.

University of Southern Maine (USM), Maine, http://usm.maine.edu/cehd/. Contributors: Jones and Fallona. Accreditation: NCATE; TEAC, Initial: 2009, Next: 2014. Public. Students: Total ~180; Male 24%; Female 76%. Faculty: Tenure track 12; Adjunct 28; Clinical 0.

University of Wyoming (UW), Wyoming, http://ed.uwyo.edu/majors.asp. Contributors: Hutchison et al. Accreditation: NCATE, Initial: 1954, Last: 2008, Next: 2016. Public. Students: Total 720; Male 4.6% (Elem), 42.2% (Sec); Female 95.4% (Elem), 57.8% (Sec). Faculty: Tenure track 65; Adjunct 7; Clinical 4.5.

Utah Valley University (UVU), Utah, http://www.uvu.edu/education/. Contributors: Pierce and Simmerman. Accreditation: TEAC, Initial: 2007, Next: 2013. Public. Students: Total 600; Male 46% (university wide); Female 54% (university wide). Faculty: Tenure track 19; Adjunct 23; Clinical 0.

Western Governors University (WGU), Online (headquarters in Utah), http://www.wgu.edu/degreesandprograms. Contributors: Zane et al. Accreditation: NCATE, Initial: 2005, Last: 2006, Next: 2012. Online. Students: Total ~9000; Male 22%; Female 78%. Faculty: Course mentors 125; Student advisory 225; Assessment evaluators 300; Student teacher supervisors 750.

Miami University (Miami), Ohio, http://www.muohio.edu/ted. Contributors: Shiveley et al. Accreditation: NCATE, Initial: 1954, Last: 2008, Next: 2016. Public. Students: Total 1095; Male 13%; Female 87%. Faculty: Tenure track 28; Adjunct 8; Clinical 0.
Although these accounts are not meant to be representative of all teacher preparation programs, they do illuminate the experience of participating in accreditation and allow readers to learn from the authors' experiences. Each chapter presents an example of how different individuals and programs negotiated the call for accountability. We invite readers to enjoy these chapters and also, as they read, to develop a deeper understanding of teacher education and accreditation processes. We ask readers to think about what they learn across the chapters, considering how and in what ways teacher educators' knowledge about teacher candidates shifted, the differences between former accreditation experiences and current ones, how institutions demonstrated what they valued in teaching and learning, and whether the positive and negative tensions inherent in the accreditation process improve the quality of teacher preparation. In the final chapter, we provide a summary of the understandings that surfaced as we examined and analyzed these sometimes straightforward, other times poignant, but always informative accounts. In our analysis we focus on the shared ordeal and common tensions that emerge as teacher education programs experience the changes necessary to conform to accountability, assessment, and accreditation requirements.
REFERENCES

Brittingham, B. (2009). Accreditation's benefits for individuals and institutions. In: P. M. O'Brien (Ed.), Accreditation: Assuring and enhancing quality (pp. 7–27). San Francisco: Jossey-Bass.
Brophy, J. E., & Good, T. L. (1986). Teacher behavior and student achievement. In: M. C. Wittrock (Ed.), Handbook of research on teaching (3rd ed., pp. 328–375). New York: Macmillan.
Daigle, S. L., & Cuocco, P. (2002). Public accountability and higher education: Soul mates or strange bedfellows? Boulder, CO: Educause Center for Applied Research.
Dewey, J. (1929). The sources of a science of education. New York: The Kappa Delta Pi Lecture Series.
Ewell, P. T. (2008). US accreditation and the future of quality assurance: A tenth anniversary report from the Council for Higher Education Accreditation. Washington, DC: CHEA Institute for Research and Study of Accreditation and Quality Assurance.
Gage, N. L. (1978). The scientific basis of the art of teaching. New York: Teachers College Press.
Kuhn, T. S. (1970). The structure of scientific revolutions (2nd ed.). Chicago: The University of Chicago Press.
No Child Left Behind Act. (2001). Available at http://www.ed.gov/policy/elsec/leg/esea02/index.html
CHAPTER 2
EDUCATING THE EDUCATORS: ACCREDITATION AS A TEACHING AND LEARNING TOOL*
Ann Monroe-Baillargeon

ABSTRACT
This chapter chronicles the process of one division of education's journey in achieving initial teacher accreditation from the perspective of the chair and author of the accreditation report. It was acknowledged early in the process that a collaborative self-study, which had never been done before, would be critical to a successful outcome. A committed faculty willingly participated in a study of themselves, their work, and their collective work as a division. A deeper understanding of the complex role of teacher educators and of authentic assessment in teacher education led to the development of a new assessment system, resulting in valid and reliable data to support current claims and make planning decisions. Their shared belief in the power of education and understanding drove the faculty through various challenging, frustrating, invigorating, and exhausting experiences, resulting in positive change and a clearer vision for the future.

*This chapter is based on the experiences of the author and does not necessarily represent the views or opinions of the institution, individuals, or agencies identified in this work.

Tensions in Teacher Preparation: Accountability, Assessment, and Accreditation
Advances in Research on Teaching, Volume 12, 11–33
Copyright © 2010 by Emerald Group Publishing Limited
All rights of reproduction in any form reserved
ISSN: 1479-3687/doi:10.1108/S1479-3687(2010)0000012005
UNDERSTANDING OURSELVES

Welcome to Alfred! In April 2007 I was warmly greeted with these words as I accepted my contract as associate professor and chair of Alfred University's Division of Education, to begin July 1. Soon after accepting the contract, I was asked whether I might accept a small stipend for my time in meeting with the retiring chair of the division to ensure a smooth transition. Although I had understood from my interviews that the division was seeking initial accreditation through TEAC and had participated in its first site review in March 2007, with an anticipated decision in November 2007, I certainly did not grasp the complexity of this component of my new job. The next two and a half years were ones I will always remember as educative. This process was educative to me as a new chair and educative to the faculty in our division on issues of teacher education, collaborative decision making, and learning outcomes. It was educative to our students – future teachers – as to a new method of assessing their learning, and educative to the university as to the culture of teacher education accreditation. This chapter chronicles our division's most recent experiences through this accreditation process and what I as chair see as the most valuable outcome: our new understandings. As we began in the fall of 2007, under the leadership of a new chair and with three of six (50%) full-time faculty members new to the division, we decided to begin by trying to better understand who we were, so that we might then discuss our hopes, dreams, and plans for the future.
Alfred University

Alfred University, founded in 1836 by liberal, independent thinkers, has always valued education for all citizens. About 2,000 full-time undergraduate and 300 graduate students are enrolled; graduate and undergraduate programs coexist, and a traditional liberal arts education thrives alongside more professionally oriented programs. Each year, students pursue studies through 54 bachelor's programs, 12 master's programs, 2 post-master's certificate programs, and 4 doctoral programs. These varied programs are united in establishing teaching excellence and concern for the individual student as primary goals. The Teacher Education program, situated in the Education Division, is part of the College of Liberal Arts and Sciences (CLAS) at Alfred University. The CLAS, offering some 24 majors and over 30 different minors, is the largest of Alfred's four schools, with some 80 faculty and over 900 students.
The chairs of divisions and directors of programs report to the Dean of the CLAS. The Dean of the College reports to the Provost and Vice President for Academic Affairs.
The Division of Education

Alfred University has had New York State approved undergraduate programs in Education for over 40 years and an approved graduate program in Literacy for 25 years. Before the New York State re-registration of teacher certification programs in 2004, Alfred offered undergraduate teacher certification programs in elementary education, secondary education, and special education, and graduate certification programs in initial teacher certification and reading. At that time, the Division of Education included Counseling. In January 2005, a reorganization of divisions resulted in the Division of Education becoming a single division in the CLAS, and the Counseling program was relocated to join School Psychology in the graduate school. The teacher education program now includes program options leading to recommendation for the following New York State Teacher Certificates: Early Childhood/Childhood, Middle/Adolescent (various content areas), Visual Arts, Business and Marketing, and Literacy, as seen in Table 1. In 2007, Alfred University expanded to offer graduate teacher certification classes in New York City through a collaboration with the Center for Integrated Teacher Education (CITE). The AU Downstate program offered through CITE allows graduate students to take classes part-time to earn their master's degree in Education with certification in literacy, birth through 6th grade. Classes are taught in Brooklyn and on Long Island, with full-time AU faculty and adjunct instructors implementing the same curriculum as on the main campus. The Chair of the Division of Education is program director for the downstate graduate literacy program, whereas a director of downstate programs oversees all Alfred University programs offered through CITE. The same quality control system oversees all teacher certification programs, including the graduate literacy program offered downstate.

As the incoming chair of the division, my primary goal was for us as faculty to have a shared understanding of our history, our present structure as a division, our interdependence with the CLAS, and our relationship to the University as a whole. During this first semester together (fall 2007), we built this first layer of what I see as our foundation at the same time as receiving the challenging news that we were awarded "provisional" accreditation by TEAC, giving us a timeline of 2 years to the next review, rather than 5. This relatively "new" division of faculty, under the leadership of their "new" chair, now had a significant challenge, a very short timeline, and not a moment to waste.

Table 1. State Certifications.
Certification type: Early childhood/childhood education (birth to grade 6). Program level: Undergraduate.
Certification type: Middle/adolescent education (5–9 and 7–12) in the following areas: Biology, Chemistry, Earth science, English, French, Mathematics, Physics, Social studies, Spanish. Program level: Undergraduate.
Certification type: Special subjects (K-12) in the following areas: Business and marketing education, Visual arts education; Literacy education (birth to grade 6). Program level: Graduate.
AND SO WE BEGAN

In many ways, we were unprepared as a division to embark on such a challenging task, but then again, who is? We were less than confident that we understood what was being asked of us by TEAC, and we had only confirmed that each of us understood our relationship to the university, the CLAS, and the content of our own programs. We now looked to understand not only what we did but also what we believed. This second part of our foundation was critical in creating the basis upon which we would build an assessment system, a core element in any accreditation. We agreed as the Division of Education faculty and staff that we embrace a strong commitment to preparing teachers to meet the diverse and ever-changing needs of students within inclusive educational communities. However, from my perspective as chair, it seemed the faculty had very disparate views of the role of teacher educators in achieving this commitment. Therefore, I recommended we collectively read and examine Developing a Pedagogy of Teacher Education: Understanding Teaching and
Learning about Teaching by John Loughran (2006) to explore the complex nature of teaching and of learning about teaching. Loughran claims that the pedagogy of teacher education must go beyond the delivery of information about teaching. One compelling attribute of the text is its emphasis on the importance of teacher educators' professional knowledge and on how that knowledge must influence teacher education practices. Our shared reading of this text for the purpose of our own professional development allowed us to bring our diverse backgrounds in teacher education together and explore our collective pedagogical practices. In so doing, we examined our individual and collective curriculum and pedagogy, just as we ask of our students. We then shifted our conversation from teacher education in general to how our teacher education program specifically nurtures the development of teachers. We examined how content knowledge is integrated with pedagogical subject matter knowledge in our teacher education curriculum. We also shared the specific struggles each of us faced in the development of teachers as they negotiated the confusing shifts in roles that complicate early teaching experiences, as well as our struggles to navigate the tensions of teaching about teaching. As Loughran (2006, p. 4) states, "not only must both teachers and students of teaching pay careful attention to the subject matter being taught, they must also simultaneously pay attention to the manner in which that knowledge is being taught, and both must overtly be embraced in a pedagogy of teacher education." Reading followed by discussions helped us as faculty to articulate our shared beliefs, commitments, and struggles; to use these conversations as a springboard to agreement on what is being taught and a sharing of best practices in our teaching; and to move directly toward a significant question in the quest for accreditation: how do we know our students are learning and our programs in teacher education are making a difference? As a professional community, we will continue to examine our practice of teacher education using professional literature for critical analysis, but in the meantime, we needed to move on to developing our assessment system.
OUR SYSTEM OF ASSESSMENT

My previous experiences at two other Institutions of Higher Education (IHEs) alerted me to the fact that one of the greatest tensions in achieving accreditation is to avoid creating a system of assessment and data analysis that ultimately controls the program, and instead to create a system that remains true to your beliefs about teacher development while validating the success
of your practices. We, like others, struggled with remaining true to our collective beliefs, which our discussion of Loughran's (2006) text had allowed us to explore, while building a system of assessment. What I perceive as one of the benefits of the TEAC accreditation system is the opportunity to state your claims and then develop a valid and reliable assessment system that supports those claims. This sounds simple enough; however, in practice this opportunity for autonomy, so valued by individual faculty in higher education, is in fact very challenging to achieve collectively when faculty are required to create an assessment system that substantiates not what you claim individually about your teaching but what you as a division claim collectively about your teacher education programs. We began by agreeing upon our program claims and by acknowledging that TEAC also requires us to demonstrate that three themes (learning to learn, multiculturalism, and the use of technology) are present across our teacher education programs. In addition to these claims and themes, we agreed that the professional teaching standards of the Interstate New Teacher Assessment and Support Consortium (INTASC) provide our undergraduate teacher certification programs a strong framework, whereas the International Reading Association (IRA) standards provide a framework for our graduate program in literacy. The Alfred University Division of Education makes the following claims:
1. Graduates of our programs learn and understand the subject matter they are certified to teach.
2. Graduates of our programs learn how to convert their knowledge of a subject matter into compelling lessons that meet the needs of all learners.
3. Graduates of our programs act on their knowledge in a caring and professional manner that leads to achievement for all learners.
We understood that if we could demonstrate to ourselves, our students, and the Teacher Education Accreditation Council (TEAC) that we have programs leading to teacher certification built upon a framework of professional standards, integrating the crosscutting themes of learning, multiculturalism, and technology, and resulting in student learning data that substantiate our claims, we would have achieved our goal. Most importantly, we could then state with steadfast confidence to future administrators, teacher colleagues, and, above all, children and parents that teachers graduating from our programs are caring and professional individuals who have the subject matter knowledge and pedagogical practices to be successful in their teaching careers.
The Division of Education sought to create an authentic assessment system that we believed would provide evidence of student attainment of program claims and crosscutting themes, demonstrated in the work students do for their courses and building toward a culminating assessment product produced during the student teaching semester. We agreed that assessments were needed both for the evaluation of student progress and for the evaluation of specific programs. The assessment system needed to withstand changes in faculty teaching assignments from semester to semester while allowing for the benefits that each individual faculty member brings to their teaching. This was a critical element in the development of our assessment system: we needed to negotiate the tension of creating a valid and reliable assessment system without having it automate our program in a way that would leave no room for faculty expertise, teaching innovation, and individually designed instruction to meet the needs of our students. After a rich discussion of the theory and practice of learning assessments, we designed a system that incorporates both course key assessments and program key assessments. Course key assessments are measures that provide evidence of student learning in one specific course. Each is approved by the division faculty and is a required assessment measure regardless of the faculty member teaching the course. Course key assessments are graded with rubrics designed by individual faculty and approved by the division as a whole (see Appendix A). Any faculty member assigned to teach the course must use the standard rubric developed for the course key assessment. Students must submit their key assessments through LiveText, an online data management system, where faculty then assess the assignments using the pre-approved rubrics. Data are then gathered through LiveText, allowing for statistical analysis of student learning across courses, programs, and faculty. All undergraduate and graduate students in programs leading to teacher certification are now required to subscribe to LiveText upon entrance into the program. Program key assessments are measures that provide evidence of student learning across the program (see Appendix B). Many of the course key assessments may be included in an individual student's program key assessment. Like the course key assessments, these measures are approved by the division faculty and will not change without review of data, division discussion, and approval. Program key assessments come in various forms. They include cooperating teacher evaluations of the teacher education program, evaluations of student teachers completed by faculty, evaluations completed by cooperating teachers who host student teachers, students' program portfolios, and New York State Teacher Certification Exams
(NYSTCE). In the past, the assessment of student teachers varied from program to program and was revised by faculty based on their individual belief systems. With a new understanding of the importance of an "assessment system" that allows for course and program evaluation across faculty, standardization was agreed upon for these assessment tools as well. Like the course key assessments, a student's initial teaching portfolio, completed during the student teaching semester, is graded by the university supervisor using a rubric designed by the division faculty and used across all teacher certification programs. This culminating program portfolio contains many of the course key assessment projects, revised based on previous feedback. These assessments, too, are submitted through the LiveText data management system to allow for data collection and analysis. The process of creating our assessment system provided the Division of Education rich content for conversations on teacher education, curriculum development, and program outcomes. Faculty currently teaching a specific course began the discussion by recommending key assessments, which evolved into a discussion of how each course key assessment contributed to the culminating program assessments. Last, the criteria for assessment and the acceptable performance level of each course key assessment were determined, with the division faculty agreeing that students may meet expectations in a course key assessment at a beginning level, whereas in the later program key assessment (the portfolio) meeting expectations is considered a more advanced level of mastery. Division faculty agreed that assessment rubrics will be refined based on data analysis and that the reliability of proficiency ratings will be checked through inter-rater reliability testing. In the end, it was clear to us that this process of mapping the curriculum and developing course key assessments, program key assessments, and proficiency criteria, validated through division consensus, provides evidence that these measures assess what is covered across the program. We are confident that the assessment system we developed provides evidence that graduates of the Alfred University teacher certification programs are competent, caring, and qualified educators.
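To make the data-analysis step described above concrete, the short Python sketch below shows how rubric scores exported from an online data management system might be aggregated across courses and checked for rater agreement. The file name, column names, 4-point scale, and proficiency cutoff are illustrative assumptions only; they do not describe LiveText's actual export format or the division's actual procedures.

import csv
from collections import defaultdict
from statistics import mean

# Hypothetical export of rubric scores: one row per student submission,
# per course key assessment, per rubric criterion. Assumed columns:
# course, assessment, criterion, student_id, rater, score (1-4).

def summarize(rows, proficiency_cutoff=3):
    """Mean score and percent proficient for each (course, criterion) pair."""
    by_key = defaultdict(list)
    for r in rows:
        by_key[(r["course"], r["criterion"])].append(int(r["score"]))
    summary = {}
    for key, scores in by_key.items():
        summary[key] = {
            "n": len(scores),
            "mean": round(mean(scores), 2),
            "pct_proficient": round(
                100 * sum(s >= proficiency_cutoff for s in scores) / len(scores), 1
            ),
        }
    return summary

def exact_agreement(rows):
    """Percent of double-scored submissions on which raters gave identical
    scores (a crude stand-in for a fuller inter-rater reliability analysis)."""
    by_submission = defaultdict(list)
    for r in rows:
        key = (r["student_id"], r["assessment"], r["criterion"])
        by_submission[key].append(int(r["score"]))
    multi_rated = [s for s in by_submission.values() if len(s) >= 2]
    if not multi_rated:
        return None
    agree = sum(len(set(s)) == 1 for s in multi_rated)
    return round(100 * agree / len(multi_rated), 1)

if __name__ == "__main__":
    with open("rubric_scores.csv", newline="") as f:  # hypothetical export file
        rows = list(csv.DictReader(f))
    for (course, criterion), stats in sorted(summarize(rows).items()):
        print(course, criterion, stats)
    print("Exact rater agreement (%):", exact_agreement(rows))

A summary of this kind (mean score and percent proficient per course and criterion, plus a simple agreement check) is the sort of evidence a division could review when deciding whether a rubric or a course assignment needs revision.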
WHAT HAVE WE LEARNED?

The two-year process of completing this inquiry brief has been one of development and growth for the Alfred University Division of Education. Because of several changes in faculty and leadership within the division, we as a faculty could not begin our work together until we had built a culture of teamwork and collaboration. It was upon this foundation that we then
articulated our shared beliefs and established our claims. In hindsight, that was the easy part! Creating a comprehensive assessment system that would produce the evidence to substantiate these claims we wholeheartedly believe in, and establishing a division practice of data-driven reflection and decision making, have been a grand adventure. Developing a Pedagogy of Teacher Education: Understanding Teaching and Learning about Teaching by John Loughran (2006) has been one of many guides through this adventure. Chapter 1 opens with a quote that summarizes the uncommon treasure we hold in our professional collaboration:

[B]eing a teacher educator is often difficult … in most places there is no culture in which it is common for teacher education staff to collaboratively work on the question of how to improve the pedagogy of teacher education. (Korthagen, 2001, p. 8)
Our two-year process of inquiry would not have been possible without this collaboration and the unending commitment on the part of the teacher education faculty to be lifelong learners of our own teaching so that we might, individually and collectively, teach others about teaching and achieve what we believe in. This process is ongoing. We understand that the endeavor of teaching and learning, and the "understanding about the teaching of teaching" (Loughran, 2006), is endless and dynamic. What we have come to know through this inquiry is that our future plans must focus on progress, partnerships, and programming.

Progress

As chair, I believe that through this process of self-study required in seeking program accreditation, we as a division have made significant progress in understanding our institution, our programs, and the impact that our programs have on our students. In so doing, we came to understand that although traditional forms of evidence, including entering and exiting GPAs, student course grades, and certification exam results, were all available, they had not been systematically gathered for analysis. Analysis of this general completer data revealed some inconsistency in program admissions practices; however, overall it was clear that when the admission policies of a 3.0 GPA across our foundations courses and an overall 2.75 GPA before student teaching were adhered to, students were successful across the program and on state education examinations. When these policies were not adhered to, students struggled throughout. However, we quickly understood that this statistical analysis of student learning and program outcomes was very limiting, fueling our motivation to create a more authentic assessment system that would allow us a closer look at actual student learning.
As a faculty, we came to understand that assessing a student's formative and summative learning about teaching through the professional education curriculum is a challenging task. It requires us as teacher educators to live the claims we purport our program leads our students toward achieving. As teacher educators, we must learn and understand our subject matter, namely teacher education, and the ways in which each individual element of the curriculum within a program builds upon and builds toward the development of a teacher. With the limited time available while juggling a 12-credit teaching load, the faculty came together inside and outside division meetings to create a culture of professional collaboration. Exploring our curriculum in detail, and how we collect evidence of students' learning, has allowed us to share ownership of the entire curriculum and to see the areas in which we need to interconnect one course with another, providing a more developmental process of learning about teaching.
Partnerships

Partnerships have developed through this self-study in three areas, and they will continue to expand and enrich the Division of Education in the future. Our first partnership has developed within the division. Programs and curricula are no longer seen as the specific responsibility of individual faculty members. The Division of Education faculty now shares the development and outcomes of the entire teacher education program. Teaching of courses, advising of students, evaluation of program data, and decision making are now accomplished through a partnership of teacher education colleagues within the division. Specific responsibilities are designated based on faculty expertise and equity of load; however, these responsibilities are not bound by rigid limitations. Second, we have built and will continue to build partnerships across the University. Appendix A shows the numerous relationships the Division of Education holds with colleges and programs across the university. Although relationships have been established through the integration of programs to achieve teacher certification requirements, partnerships may not have been nurtured. Several steps have been put in place to nurture these relationships into true partnerships, with the continued commitment necessary to sustain these efforts. Education faculty now participate in co-advising meetings with the School of Art and Design, and specific advising sheets have been created for the major and minor degree programs in Education to share with advisors from all CLAS major programs, the School of Art and Design, and graduate admissions. These forms have facilitated clearer communication
with students and faculty for accurate program planning. Most recently, each division in CLAS has been asked to designate an individual who wishes to serve as a liaison to the Division of Education. This liaison will bridge communication between their division and the Division of Education and contribute to future program and curriculum planning in teacher certification. One area we seek to develop further is building on these partnerships with divisions to facilitate the inclusion of content area faculty as guest lecturers in pedagogical courses, for the purpose of linking content and pedagogy in a more explicit way. Continuing to develop beyond relationships to partnerships across university programs will enhance our institutional learning of teacher education.

Programming

As we closely examined our claims, curriculum, students, schools, and faculty, we came to several conclusions about our goals for future programming. If we claim to prepare teachers who meet the needs of all learners and impact the achievement of all learners, we believe we need teacher certification programs that lead to certification in teaching students with disabilities. In addition, Alfred University, located in a very rural area of Allegany County, NY, holds a strong commitment to this community and surrounding areas. In this time of unprecedented career and job change, we have seen an increased interest in teacher certification from individuals who have previously completed an undergraduate degree. At present, our only means of meeting these needs is having students present their transcripts to the state education department for individual transcript review of requirements toward certification. We must create a graduate initial teacher certification program, especially in the adolescent (grades 7–12) certification area. Individual transcript review has provided a necessary alternative; however, a stronger pathway to teacher certification would be through a nationally accredited and New York State Education Department approved university program that monitors the quality of its programs toward preparing competent, caring, and qualified professional educators.
CONCLUSION

The process of self-study, which resulted in an inquiry brief that was then reviewed through an accreditation team audit, has been invaluable. This was accomplished through the commitment of the outstanding faculty who
willingly contributed to a study of themselves, their work, and our collective work as a division. It has been challenging, frustrating, invigorating, and exhausting, but, most of all, it has been educating. As educators, this is what we believe in most: education can make change happen.
APPENDIX A. KEY COURSE ASSESSMENT: TEACHING NARRATIVE

Summary of personal school history (1, 10%) NY-AU-CCT.2
Exemplary (4 pts): Author succinctly summarizes his/her own school history and uses specific examples to illustrate the ways in which it has impacted his/her beliefs about teaching and learning.
Proficient (3 pts): Author summarizes his/her own school history and describes its impact on his/her beliefs about teaching and learning.
Partially Proficient (2 pts): Author summarizes his/her personal school history. The connections between the history and the author's current beliefs about teaching and learning are superficial or unclear.
Incomplete (1 pt): Author fails to include an adequate summary of his/her personal school history.

Classroom culture (2, 20%) NY-AU-CLAIMS.3
Exemplary (4 pts): Author clearly describes the nature of the culture he/she hopes to create in his/her own future classroom. Author uses detailed specific examples drawn from field notes, protocols, and/or position papers to describe how the culture he/she hopes to create will be similar to or different from those observed in the field. Reasoning is in depth and insightful.
Proficient (3 pts): Author describes the culture he/she hopes to create in his/her future classroom. Author supports his/her statements with specific examples derived from field notes, observation protocols, and/or position papers.
Partially Proficient (2 pts): Author's description of desired classroom culture is incomplete or superficial. The author fails to include examples from field observations to support his/her reasoning.
Incomplete (1 pt): Author fails to adequately analyze the topic of classroom culture.

Classroom management (2, 20%) NY-AU-INTASC.5
Exemplary (4 pts): Author describes the classroom management system he/she hopes to put into place in his/her future classroom. The system includes routines and procedures, a system of rules, rewards, and consequences, and room organization. Author uses detailed specific examples from field notes, protocols, and/or position papers to compare and/or contrast the management system he/she hopes to put into place with those observed in the field.
Proficient (3 pts): Author describes the management system he/she hopes to employ in his/her future classroom. The description includes all required components and is supported by examples from field notes, protocols, or position papers.
Partially Proficient (2 pts): Author's description of a management system lacks detail or is missing some of the required components. Supporting examples from the field are missing or not well explained.
Incomplete (1 pt): Author fails to adequately analyze classroom management systems.

Instruction (2, 20%) NY-AU-INTASC.3
Exemplary (4 pts): Author describes a minimum of three effective and three ineffective instructional techniques that he/she has observed in the field. Author supports the use or the elimination of the techniques with specific examples of their efficacy or failure derived from field notes, protocols, or position papers.
Proficient (3 pts): Author describes two effective and two ineffective instructional techniques he/she has observed in the field. Author presents a convincing argument for why he/she would include or eliminate the techniques in his/her own future teaching.
Partially Proficient (2 pts): Author describes a few effective or ineffective instructional techniques he/she has observed in the field. The analysis of why the techniques should be adopted or eliminated is missing, unconvincing, and/or not supported by examples from the field.
Incomplete (1 pt): Author fails to adequately analyze instructional techniques.

Understanding learners (2, 20%) NY-AU-INTASC.2
Exemplary (4 pts): Author recognizes and articulates the range of student needs, abilities, and behaviors that affect student learning. Using observation data, the author supports his/her statements with specific examples of the differences in learner characteristics, student–teacher interactions, and the nature of student participation in the learning environment.
Proficient (3 pts): Author recognizes and articulates the range of student needs, abilities, and behaviors. These statements about needs, abilities, and behaviors are supported with observational data.
Partially Proficient (2 pts): Author's observations about the differences in students' needs, abilities, and behaviors are superficial and/or unconvincing and lack supporting observational data.
Incomplete (1 pt): Author fails to adequately recognize and articulate the differences in student needs, abilities, and behaviors.

Style, voice, conventions (1, 10%) NY-AU-INTASC.9
Exemplary (4 pts): Author's voice can be heard distinctly within the narrative. The style is engaging and the content is clear, well-organized, and free from errors in the conventions of standard written English.
Proficient (3 pts): Narrative is written in the first person and clearly presents the author's point of view. The piece is well-organized and contains few errors in spelling, grammar, and sentence structure.
Partially Proficient (2 pts): The narrative lacks a sense of the author's voice, is boring, and/or contains errors in spelling, grammar, and sentence structure that occlude meaning.
Incomplete (1 pt): The narrative lacks all personal voice and/or style. It is poorly organized and edited.
APPENDIX B. KEY PROGRAM ASSESSMENT: INITIAL TEACHING PORTFOLIO

Organization, aesthetics, conventions (1, 4%)
Target (3 pts): All required portfolio elements are logically organized and presented in an aesthetically pleasing manner that enhances their effectiveness. All elements have been scrupulously checked for errors in spelling, mechanics, and the conventions of standard English.
Satisfactory (2 pts): Portfolio is complete, organized, attractive, and the content has been edited so that errors in the conventions of standard written English do not detract from its effectiveness.
Falls Below Expectations (1 pt): Portfolio is missing elements, disorganized, repetitious, lacks visual appeal, and/or contains errors in standard written English that detract from its effectiveness.

Standard 1: Content knowledge (2, 9%) NY-AU-CLAIMS.1 NY-AU-INTASC.1
Target (3 pts): A minimum of two distinct artifacts demonstrate pre-service teachers' depth and breadth of knowledge in both their major discipline and/or area of concentration and in education pedagogy. Artifacts are introduced by well-written rationale statements that clearly explain the connection to several performance indicators in Standard 1.
Satisfactory (2 pts): Two artifacts demonstrate pre-service teachers' knowledge in both their major discipline and/or area of concentration and in education pedagogy. Rationale statements link the artifacts to Standard 1 performance indicators.
Falls Below Expectations (1 pt): Artifacts either inadequately demonstrate pre-service teachers' content and pedagogical knowledge, are missing, or the rationale for their use is implausible.

Standard 2: Knowledge of human development and learning (2, 9%) NY-AU-INTASC.2
Target (3 pts): A minimum of one high-quality artifact demonstrates pre-service teachers' abilities to provide developmentally appropriate learning opportunities that support students' intellectual, social, and personal growth. The accompanying rationale statement is cogently written and clearly connects the artifact to several performance indicators in Standard 2.
Satisfactory (2 pts): One quality artifact demonstrates pre-service teachers' abilities to provide developmentally appropriate learning opportunities that support students' intellectual, social, or personal growth. The accompanying rationale statement links the artifact to Standard 2 performance indicators.
Falls Below Expectations (1 pt): Artifact either inadequately demonstrates pre-service teachers' knowledge of developmentally appropriate practices, is missing, or the rationale is implausible.

Standard 3: Adapting instruction for individual needs (2, 9%) NY-AU-CCT.2 NY-AU-INTASC.3
Target (3 pts): A minimum of one high-quality artifact demonstrates pre-service teachers' understanding of individual approaches to learning and the ability to create instructional opportunities accessible to several types of learners. The accompanying rationale statement clearly connects the artifact to several performance indicators in Standard 3.
Satisfactory (2 pts): One quality artifact demonstrates pre-service teachers' awareness of different learning styles and modes and their ability to create instructional opportunities that address more than one in a single lesson. The accompanying rationale statement links the artifact to Standard 3 performance indicators.
Falls Below Expectations (1 pt): Artifact either inadequately demonstrates pre-service teachers' knowledge of diverse learning styles and/or their ability to incorporate instructional opportunities that address them, the artifact is missing, or the rationale is implausible.

Standard 4: Multiple instructional strategies (2, 9%) NY-AU-INTASC.4
Target (3 pts): A minimum of one high-quality artifact demonstrates pre-service teachers' ability to provide various instructional strategies that encourage critical thinking, problem solving, and performance skills. The accompanying rationale statement is cogently written and clearly connects the artifact to several performance indicators in Standard 4.
Satisfactory (2 pts): One quality artifact demonstrates pre-service teachers' abilities to provide various instructional strategies that encourage critical thinking, problem solving, and performance skills. The accompanying rationale statement links the artifact to Standard 4 performance indicators.
Falls Below Expectations (1 pt): Artifact either inadequately demonstrates pre-service teachers' use of various instructional strategies, is missing, or the rationale is implausible.

Standard 5: Classroom motivation and management skills (2, 9%) NY-AU-CCT.1 NY-AU-CLAIMS.3 NY-AU-INTASC.5
Target (3 pts): A minimum of one high-quality artifact demonstrates pre-service teachers' ability to create a learning environment that encourages positive social interaction, active engagement in learning, and self-motivation. The accompanying rationale statement clearly connects the artifact to several performance indicators in Standard 5.
Satisfactory (2 pts): One quality artifact demonstrates pre-service teachers' ability to create a learning environment that encourages positive social interaction, active engagement in learning, and self-motivation. The accompanying rationale statement links the artifact to Standard 5 performance indicators.
Falls Below Expectations (1 pt): Artifact either inadequately demonstrates pre-service teachers' knowledge of classroom motivation and management skills, is missing, or the rationale is implausible.

Standard 6: Communication skills (2, 9%) NY-AU-INTASC.6
Target (3 pts): A minimum of one high-quality artifact demonstrates pre-service teachers' ability to communicate effectively through verbal, nonverbal, and media techniques so as to foster active inquiry, collaboration, and supportive interaction in the classroom. The accompanying rationale statement clearly connects the artifact with several performance indicators in Standard 6.
Satisfactory (2 pts): One quality artifact demonstrates pre-service teachers' ability to communicate through verbal, nonverbal, and media techniques so as to foster active inquiry, collaboration, and supportive interaction in the classroom. The accompanying rationale statement links the artifact to Standard 6 performance indicators.
Falls Below Expectations (1 pt): Artifact either inadequately demonstrates pre-service teachers' communication skills, is missing, or the rationale is implausible.

Standard 7: Instructional planning skills (2, 9%) NY-AU-INTASC.7
Target (3 pts): A minimum of one high-quality artifact demonstrates pre-service teachers' ability to plan. The accompanying rationale statement clearly connects the artifact with several performance indicators in Standard 7.
Satisfactory (2 pts): One quality artifact demonstrates pre-service teachers' ability to plan. The accompanying rationale statement links the artifact to Standard 7 performance indicators.
Falls Below Expectations (1 pt): Artifact either inadequately demonstrates pre-service teachers' knowledge of instructional planning skills, is missing, or the rationale is implausible.

Standard 8: Assessment of student learning (2, 9%) NY-AU-INTASC.8
Target (3 pts): A minimum of one high-quality artifact demonstrates pre-service teachers' ability to assess student learning. The accompanying rationale statement clearly connects the artifact with several performance indicators in Standard 8.
Satisfactory (2 pts): One quality artifact demonstrates pre-service teachers' ability to assess student learning. The accompanying rationale statement links the artifact to Standard 8 performance indicators.
Falls Below Expectations (1 pt): Artifact either inadequately demonstrates pre-service teachers' assessment of student learning, is missing, or the rationale is implausible.

Standard 9: Professional commitment and responsibility (2, 9%) NY-AU-INTASC.9
Target (3 pts): A minimum of one high-quality artifact demonstrates pre-service teachers' ability to show professionalism. The accompanying rationale statement clearly connects the artifact with several performance indicators in Standard 9.
Satisfactory (2 pts): One quality artifact demonstrates pre-service teachers' ability to show professionalism. The accompanying rationale statement links the artifact to Standard 9 performance indicators.
Falls Below Expectations (1 pt): Artifact either inadequately demonstrates pre-service teachers' knowledge of professional commitment and responsibility, is missing, or the rationale is implausible.

Standard 10: Partnerships (2, 9%) NY-AU-INTASC.10
Target (3 pts): A minimum of one high-quality artifact demonstrates pre-service teachers' ability to be a partner with schools. The accompanying rationale statement clearly connects the artifact with several performance indicators in Standard 10.
Satisfactory (2 pts): One quality artifact demonstrates pre-service teachers' ability to be a partner with schools. The accompanying rationale statement links the artifact to Standard 10 performance indicators.
Falls Below Expectations (1 pt): Artifact either inadequately demonstrates pre-service teachers' knowledge of partnerships, is missing, or the rationale is implausible.
STANDARDS

NY-AU-CCT.1 Learning how to learn
NY-AU-CCT.2 Multicultural perspectives and understanding
NY-AU-CCT.3 Technology
NY-AU-CLAIMS.1 Graduates of our programs will learn and understand the subject matter they are certified to teach
NY-AU-CLAIMS.2 Graduates of our programs will learn how to convert their knowledge of a subject matter into compelling lessons that meet the needs of all learners
NY-AU-CLAIMS.3 Graduates of our programs act on their knowledge in a caring and professional manner that leads to achievement for all learners
NY-AU-INTASC.1 STANDARD: The teacher understands the central concepts, tools of inquiry, and structures of the discipline(s) he or she teaches and can create learning experiences that make these aspects of subject matter meaningful for students
NY-AU-INTASC.2 STANDARD: The teacher understands how children learn and develop, and can provide learning opportunities that support their intellectual, social, and personal development
NY-AU-INTASC.3 STANDARD: The teacher understands how students differ in their approaches to learning and creates instructional opportunities that are adapted to diverse learners
NY-AU-INTASC.4 STANDARD: The teacher understands and uses various instructional strategies to encourage students' development of critical thinking, problem solving, and performance skills
NY-AU-INTASC.5 STANDARD: The teacher uses an understanding of individual and group motivation and behavior to create a learning environment that encourages positive social interaction, active engagement in learning, and self-motivation
NY-AU-INTASC.6 STANDARD: The teacher uses knowledge of effective verbal, nonverbal, and media communication techniques to foster active inquiry, collaboration, and supportive interaction in the classroom
NY-AU-INTASC.7 STANDARD: The teacher plans instruction based upon knowledge of subject matter, students, the community, and curriculum goals
NY-AU-INTASC.8 STANDARD: The teacher understands and uses formal and informal assessment strategies to evaluate and ensure the continuous intellectual, social, and physical development of the learner
NY-AU-INTASC.9 STANDARD: The teacher is a reflective practitioner who continually evaluates the effects of his/her choices and actions on others (students, parents, and other professionals in the learning community) and who actively seeks out opportunities to grow professionally
NY-AU-INTASC.10 STANDARD: The teacher fosters relationships with school colleagues, parents, and agencies in the larger community to support students' learning and well-being
CHAPTER 3
DECORATING FOR NCATE*
Richard D. Osguthorpe and Jennifer L. Snow-Gerono

ABSTRACT

The report from our recent accreditation visit indicated that the unit has an emerging framework for an assessment system that collects data at necessary transition points. However, the report also suggested that the unit does not analyze those data in an effective way to conduct meaningful program change. The events that led to this discovery (and the actions that have been taken since) have provided important lessons learned for our institution that relate to continuous program improvement and the accreditation process itself. This chapter details those events and lessons learned.

*This chapter is based on the experiences of the authors and does not necessarily represent the views or opinions of the institution, individuals, or agencies identified in this work.
Like any institution facing an accreditation visit, Boise State University wanted to put its best foot forward for the NCATE Board of Examiners (BOE). To that end, cursory preparation for the visit included constructing a new College of Education display in the foyer of our building; placing new furniture, pictures, and displays on selected floors of the building; updating syllabi to reflect NCATE requirements; updating curricula vitae to highlight
our strong attention to research and scholarship; purchasing bookmarks for students to parade around campus; and, of course, hanging countless 16'' × 20'' framed editions of the unit's Conceptual Framework throughout the building. The general perception of faculty in the unit was that each of our teacher education programs was on solid ground; most faculty felt it would only be necessary to ''trim the tree'' a bit, put on a good show during the visit, and then lay it to rest for another seven years. Thus, these ''decorations'' were an attempt to highlight our strengths and present our unit's programs in the best light. Not surprisingly, the review from our visiting NCATE BOE confirmed many of the strong points that we paraded in front of them. According to NCATE standards, the BOE report acknowledged that the unit provides adequate opportunities for its candidates to learn the knowledge, skills, and dispositions appropriate for classroom practice (Standard 1); collaborates in committed, robust partnerships with local schools to create meaningful, quality field experiences for candidates that focus on the improvement of student learning – highlighting the unit's numerous faculty who participate as school-university liaisons (Standard 3); exemplifies a commitment to diversity in assisting candidates to develop the capacity to help all students learn – pointing out the meaningful changes in the curriculum related to diversity, as well as the ever-increasing percentage of new faculty hires from different racial and ethnic backgrounds (Standard 4); and, finally, employs a faculty that embodies excellence in teaching, scholarship, and service – accentuating the scholarly productivity of faculty as a recognized strength of the unit (Standard 5). However, despite all the make-up we tried to apply, we were unable to conceal what turned out to be a glaring weakness – our assessment system (Standard 2). The BOE report acknowledged that the unit has an emerging framework for an assessment system in place that collects data at necessary transition points, in most cases. However, the report suggested that the unit does not analyze the data in an effective way to conduct meaningful program change. The events that led to this discovery (and the actions that have been taken since) have provided important lessons learned for our institution that relate to continuous program improvement and the accreditation process itself. In what follows we describe (a) background elements related to our accreditation experience, (b) events surrounding the build-up to the initial accreditation visit, (c) surprising aspects of the accreditation visit itself, (d) the aftermath of the visit, and (e) the vision for the future – each with concomitant lessons learned.
THE BACK STORY

In the five years that had passed since our institution's previous visit from an NCATE accreditation team, we (the unit) had sought to update and improve our assessment system, and we spent a considerable amount of time deliberating the merits of various systems. Following these deliberations, and after receiving sales pitches from several companies, the unit decided to purchase one of the commercial systems on the market. Those involved with the decision were excited about the opportunity to have systematic data available for program analysis, and expectations were high that the new technology would help the unit overcome the organizational/monitoring issues associated with an ever-growing program (due to large increases in student enrollment and interest in teacher education programs). However, in the midst of transitioning to this system, the unit discovered that it would be unable to use the system due to internal constraints at the institutional (university) level that were primarily related to the use of student data. Forced to abandon the commercial system, the unit decided to create its own internal system with the help of an outside consulting firm. This last-second, surprising decision to build our own assessment system was the unit's only recourse, and it proved to be a fateful one. Enlisting the help of a technology-consulting firm was very helpful from the standpoint of technological expertise, but it was also very time-consuming and complex because of the consultant's lack of understanding related to NCATE standards and teacher education as a whole. With our upcoming accreditation visit just two years away, time was a primary issue, and it soon became apparent that it would be very difficult to create an adequate assessment system and populate that system with appropriate data in time for our impending visit from the accreditation team. Moreover, we recognized that we would need to couple our working assessment system with the end product of a new system to show the comprehensiveness of our approach (as well as our use of data to make program changes), and such a coupling made it impossible to track our students in a systematic way (i.e., one would have to navigate multiple assessment databases to locate data on a given student). Then, in what at first appeared to be a stroke of good luck, the design and implementation of the system was seemingly aided by NCATE electing to postpone our visit for one year. Our state accreditation team made the request so that it could review multiple state programs in the same year, and the request was granted, giving us an extra year to create our assessment system and populate it with data. However, given the previous timeline,
many of the fateful structures of the system were already in place (e.g., the necessity to access multiple databases to access information), and the rush was on to ensure that the data were collected and used to improve programs. Thus, the extra year gave us time to catch our breath and take a step back from the sense of impending doom we were feeling, but it did not allow us to make systemic changes – because it was determined that scrapping the structural changes made to the assessment system would only serve to put us back in the same position that we had found ourselves in a year earlier. On the contrary, had we been working on the extended timeline from the beginning, we likely would have made different decisions regarding the design and implementation of our assessment system. Instead we were left to work with the system that was somewhat hastily created due to the aforementioned series of what we now deem unfortunate events. Additionally, this decision to push back our accreditation visit created a new problem for us when our institutional representative and NCATE coordinator went into a planned retirement the year of the postponed visit. This retirement led to a series of shifts in college/department leadership, creating a void in experience related specifically to NCATE accreditation. In a very short time period, a new associate dean (with responsibility for NCATE and teacher education programs) was appointed, along with new department chairs in four of seven departments, including the department with primary responsibility for teacher education. The new leadership had a wealth of experience and expertise but no time to do anything more than attempt to execute the game plan already in place.
Lessons Learned

Our back story provides keen insights into the processes and actions necessary for successful accreditation review and continuous quality improvement. The first lesson learned from this phase is that institutional support is necessary at the highest levels, and that support is only gained by communicating needs long before those needs must be met. We thought we were preparing ourselves early enough to have an effective, cogent assessment system in place. However, without the necessary and important institutional support in place from the very beginning of systemic change efforts, any last-minute efforts resulted in mere window-dressing. Being proactive in communication and systemic change efforts with central administration is imperative for sustainable and meaningful change (Lieberman, 2005; Senge et al., 2000). Having the understanding of not just key stakeholders but key supporters of any institutional
change from the outset of any programmatic efforts connected to accreditation not only eases implementation but provides important information about college and unit efforts for alignment to a conceptual framework and its implementation and evaluation. The second lesson learned from our back story is that technological expertise must be coupled with teacher education expertise to be effective. This caution was first unveiled in our failed attempts at an assessment system. However, as we implemented the new assessment system, we learned that we counted on in-house technological expertise more and more in streamlining assessment initiatives. There are experts in technology for a reason, and teacher education units must include such personnel in all efforts to avoid moving backwards when ''big ideas'' become implementation failures due to a lack of technological expertise. Third, we learned that it is helpful to have a BOE-trained institutional representative directing the accreditation effort. Certainly, our extension for the review was appreciated due to the extra time allotted to preparation for the actual visit. However, the retirement of our BOE-trained leader in the year just prior to the accreditation visit certainly made the transition period for new leadership more complex. Thus, forward thinking would include potentially having multiple BOE-trained institutional representatives on faculty or staff, while still honoring flexibility in retirement and job responsibilities with accommodating workload policies and appreciation for long-term employees. Finally, from this back story phase, we learned that it is never too late to throw the baby out with the bathwater – better to start anew than to work with a flawed system. We are proud of our efforts to band together for a unified celebration of our programs and their décor. It is important that we were willing to begin completely anew when we realized our system for assessment was not working. We knew the decorations might not be enough, but we cared enough about our programs and their quality to move forward with what we determined would be best for the future. We knew the road ahead might be extended because of our choice to start from scratch, but we also knew the work would be worth it if we built a foundation for our frontline efforts in teacher education.
THE BUILD UP

Given those circumstances and the lack of a systematic assessment system, it became necessary to fill this void of experience and to engage unit leadership
and faculty in a more extensive data-gathering role. A new position was created in the College of Education with a second Associate Dean being assigned the primary role of Coordinator of Teacher Education and Accreditation. This newly selected NCATE coordinator/associate dean admirably stepped into the fray and was immediately successful in identifying the most pressing needs, but it was quickly apparent that some components of the assessment system would require decorative filler because those components were not substantively populated with meaningful data for use in program improvement. In other words, we knew where the holes in our assessment system were, but we put our best foot forward by demonstrating as much data as possible in our emergent assessment system. Regardless of our attempts to accentuate the emergent nature of our assessment system, areas of concern related to the assessment system were identified and unit leadership was tasked with resolving the concerns. Most of this work was disseminated through the unit’s governance structure, consisting primarily of the Teacher Education Coordinating Council (TECC) and the Teacher Education Team (TET). As a longstanding committee on campus, TECC consists of teacher education program coordinators across the unit, who have varied understandings of the NCATE accreditation process, whereas TET is made up of the teacher education department chairs in the College of Education with responsibility for teacher education. The governance structure of the unit itself is certainly adequate and conducive to appropriate unit governance, but it is noteworthy that four of the seven department chairs were new and two others had served less than three years. Thus, there was very little leadership experience related to NCATE accreditation, and the time to engage the steep learning curve had passed. Thus, those pressed into leadership positions just months before the accreditation visit found themselves leading an effort that was, at times, the green leading the green and greener. Quite simply, there was an adequate governance structure in place with highly competent leaders directing the efforts, but there was little NCATE accreditation experience available at crunch time. During this time, most faculty were only involved in a superficial role, providing updates of required documents, such as syllabi and vitae, and providing various data points for the assessment system, such as exemplary work samples from courses and field experiences. The collection of these documents was subject to many of the problems associated with asking faculty to do anything in a uniform and expedited manner – most of the data were contributed in piecemeal fashion, as ‘‘data bins’’ (milk crates) were stuffed with hard copies of documents and computer file folders were
filled with electronic versions. Faculty provided documents, but few were aware of how their documents were being used to populate the assessment system, and even fewer understood the full purpose of the data collection as it related to NCATE. In short, many faculty (through no fault of their own) simply complied with requests from department chairs and program coordinators to submit documents, not knowing fully where those documents were going or how those data might be used to improve programs. Further complicating the collection of data (and due to the postponement of the visit), it was determined that all the data collected the first time around needed to be updated, so faculty were tasked with resubmitting data, resulting in varying degrees of grumbling and questioning related to the value of the accreditation process writ large. The degree of complaining was often directly related to a faculty member's role in the collection and recollection of data, and a common refrain from concerned faculty was, ''we don't want to do this just for NCATE.''
Lessons Learned

We learned important lessons during the build-up phase to our site visit. The first lesson we learned was that it is difficult to do something on a short timeline with buy-in from multiple constituencies – even when leadership agrees that it needs to be done. Obviously, having so many new department chairs as leaders may have influenced some of the faculty concern over and grumbling about the collection of new data points and artifacts. However, we came to believe that any collaborative system would need more time than we initially planned to provide appropriate documentation and to develop conceptual understanding of the accreditation process (Fullan & Hargreaves, 1996). Although faculty members and leadership did unite in purpose (out of respect and admiration as much as decoration), faculty might have questioned what was happening at other levels of the unit, as there was insufficient time to make this phase of preparation transparent. The second lesson we learned was that unity and vision in the leadership team are paramount to successful preparation for the visit, but full faculty involvement is just as important. This lesson is directly connected to our first lesson. The leadership teams were completely unified around a shared vision. However, this team was also representing multiple constituencies of faculty and programs. The individual program coordinators and department chairs
were raising questions connected to the quality of their own programs as well as those from faculty. Although unity and vision might have been achieved in the moment, it was difficult to communicate in an effective and timely manner with all levels of faculty and stakeholders because of the time constraints related to the impending visit. Third, we learned that the governance structure in a unit is key, but also inherently complex due to the variety of constituencies who have interest in teacher education on any given university campus. Our TECC is an effective and collaborative governance body that makes decisions based on consensus and brings people from across campus together to share insights and policymaking responsibility for teacher education. Positioning shared decision-making and consensus-building as key aspects of our governance structure necessitates transparent communication and time-consuming sharing of information (Johnston et al., 1997). Given that there are four Colleges on Boise State's campus that have teacher education programs, this process of communication and collaboration involves various levels of leadership as well as program structures across all education programs. The complexity cannot be overstated. It must also be expected and embraced in order for it to work well on a sustainable basis. Finally, during this build-up phase of our NCATE experience, we learned that consistency in accreditation team membership/leadership is important to carry out the preparatory activities related to the accreditation visit. In particular, faculty and leadership with NCATE responsibilities must work in concert, and such harmony is difficult to achieve with everyone hitting the ground running (oftentimes in different directions). Clearly, we did not have the ideal in consistent leadership with a brand new Coordinator of Teacher Education and several newly appointed department chairs in teacher education. That said, the leadership team worked remarkably well together to provide a shared vision for faculty – reminding us that regularly sharing such a vision with all faculty and participants in a teacher education enterprise provides sustainability no matter how leadership positions change throughout the never-ending cycle of the accreditation process.
THE VISIT

The visit itself is best described as a whirlwind tour of our teacher education program, and, in some ways, merely an extension of the party for which we had been decorating for months. The visit kicked off with a welcome gala,
aimed at giving the BOE a broad overview of our unit programs. After the rush to create an assessment system, gather data, etc. – in short, to decorate – the welcome gala can best be described as nothing less than a huge success. The event epitomized our approach to the accreditation visit. Held in a newly constructed sky suite that overlooked the city, BOE participants had the opportunity to mix and mingle with faculty, candidates, school partners, graduates, constituents, etc. – discussing information presented on professional posters (some created for this gala and some reused after research presentations or other conference venues), greeting interested parties, and eating catered food. Every aspect of the event and program came off without a hitch, and, as faculty, it was rewarding to look out at the crowd and marvel at the coordinated effort on the part of those attending and their commitment to the College of Education. Following the successful welcome gala, the accreditation visit had nowhere to go but down, and the remainder of the visit was often marked by frenzy and confusion. It is worth noting that some of this frenzy was due to the simultaneous visit from our state accreditation team. Although it saved time and effort, it was difficult to meet what sometimes seemed to be competing demands, including requests to provide immediate documentation, explain assessments and requirements, respond to questions related to field experiences, navigate the assessment system, etc. More importantly, what was seemingly sufficient for one group would raise concerns and issues for another. As a participating faculty member, it was difficult to keep track of conversations, explanations, and responses and to build on prior discussion and consensus. That said, some of the tumult was of our own doing. In rote fashion, the BOE would ask us questions related to various standards, and we would happily respond by highlighting various program elements of which we were very proud. However, these discussions were always followed by a recurring set of questions from the BOE, requesting that we provide evidence/data to corroborate our conclusions about these program elements and then show how we had used this data to effect meaningful changes in our program. As it turned out, we had more conclusions than evidence – at least in the form that was needed – and we were unable to describe more than a few substantive programmatic changes based on such evidence. Thus, it became clear very early on that the BOE might take issue with our assessment system and the ways in which we were managing, monitoring, and improving programs. It was not enough to suggest that recent program changes aligned with current research and feedback from our constituents; we needed to provide documentation of that feedback and
show how it was used in deliberations that effected program change. In short, we had some, but it became apparent that we did not have enough. Moreover, our own visions of effective teacher education, for example, were not adequate – we needed to show how our decisions were data-driven (the same request, ironically, that we make of candidates in our programs). Finally, during the visit, when hard questions were asked, it also became obvious that various faculty hold very different perceptions of and visions for the unit's programs. The Professional Educator conceptual framework guides the entire assessment system for initial and advanced programs at Boise State:

Boise State University strives to develop knowledgeable educators who integrate complex roles and dispositions in the service of diverse communities of learners. Believing that all children, adolescents, and adults can learn, educators dedicate themselves to supporting that learning. Using effective approaches that promote high levels of student achievement, educators create environments that prepare learners to be citizens who contribute to a complex world. Educators serve learners as reflective practitioners, scholars and artists, problem solvers, and partners.
Thus, all programs are guided by this conceptual framework and, more specifically, by state and national standards with assessments aligned to those standards to monitor candidate performance and subsequently manage and improve unit operations and programs. But pointed questions from the BOE and discussions that followed suggested that program consensus and vision were only loosely coupled. For example, our Advanced Program coordinators constantly expressed a desire to honor their innovation and uniqueness, resisting any unit-wide system of assessment. The commitment to program vision was abundant, but it was difficult, at times, to discern that commitment across programs and throughout the unit – providing ample evidence to the BOE that we needed to improve unit-level program monitoring.
Lessons Learned

Perhaps the visit itself provided the most meaningful lessons learned in this process of accreditation. The first lesson learned was that a focus on accreditation offers a wonderful opportunity to work with colleagues across campus and produces a unity of purpose that stems from developing stronger ties with and understandings of programs across the unit. Those directly involved in the accreditation visit agreed that it was interesting and enjoyable to learn more about the various college programs in depth. Certainly, the gala
showcased the many programs and innovations happening in the unit, and individual faculty and even department-level administrators did not necessarily have these insights prior to the visit. Coming together to showcase our strengths helped program faculty collaborate during the visit as well as move forward after the visit to develop a more refined assessment system, for example. Second, we learned that the decorative Welcome Gala came off much better than we thought. It was impressive how our faculty came together. Not only for unit leadership but also for individual faculty members and community representatives, it was an invigorating experience to gather together simply to celebrate our successes and showcase what we were doing for a national board. It was particularly nice to have students/candidates involved. Although this may have been a source of stress for some (considering the fact that reviewers would have open access to anyone related to the unit and that it would be difficult for the unit to control the message under such circumstances), there was an overall sense of pride and accomplishment throughout the evening. For us, this result demonstrates that bringing everyone together in celebration is worthwhile. Even if some stakeholders are nervous about the blemishes that might come to light, the act of every constituent pulling together to put our best foot forward was meaningful not just for the visit but also for creating a sense of unity afterward. The third lesson we learned was that accreditation from multiple boards (state and national in our case) can lend itself to confusion. Logistically and practically it was easier for the state to coordinate its accreditation visit with the NCATE BOE than to conduct a separate visit. And, while our unit prepared for these simultaneous visits, we may have emphasized the NCATE visit over the state visit, resulting in some frenzy due to the individual program details the state required. Obviously, this confusion could have potentially been alleviated through more thorough preparation, and perhaps it would have been fruitful to have a state visit leader and a national visit leader. However, the unified vision suggested earlier indicates the promise of having one coordinator for accreditation visits. Regardless, a deeper understanding of the differences in the requirements or levels of interest for each review is appropriate before the visit.
Faculty members do not necessarily need to know about the full details of every program in the unit. Some are content to work within their own factions of a particular program. However, when an accreditation visit comes around, a larger awareness is required, and we certainly attempted to decorate everyone’s understanding with the conceptual framework, but the details of individual programs, both initial and advanced, might have gone largely misunderstood. Our fifth lesson learned is keenly significant in a program that emphasizes school partnerships and clinical experiences (Darling-Hammond, 2005; Teitel, 2003). Reflecting on the visit itself, we determined that relationships with local school partners benefited most dramatically, as school partners rose to the occasion and highlighted our various program strengths. It was a distinct pleasure to see how involved and interested school and community partners were in the success of our accreditation visit. For the most part, they were excited to share all the positive aspects of clinical experiences and advisory experiences. For unit faculty and administrators, the accreditation visit was a valuable reminder of the importance of partnerships to the effectiveness of our programs as well as a way to honor partner voices in review and evaluation toward continuous quality improvement. Finally, we learned that unit leadership (TECC) can seemingly be working together notwithstanding different visions of the governance structure and program vision. One of our largest lessons came during the NCATE interview with TECC. Although members were authentically collaborative and interested in what each representative was saying, it became clear there was some disagreement (Johnston et al., 1997) about the role of TECC. For example, who could overrule TECC, if anyone? Or, what would happen if consensus did not work? Different representatives shared insights from their personal experiences, but no one seemed to be able to clarify definitive responses to the BOE’s questions. Fortunately, this governance issue was immediately clarified and re-examined during the aftermath of the visit.
THE AFTERMATH

Once we received the initial report from NCATE that detailed areas for improvement, we immediately formed a team to write the Rejoinder. For some of us, this writing process was the first opportunity to take a big picture view of the program and assess its strengths and weaknesses.
Although we wanted to take issue with each of the designated areas for improvement, it became clear that the BOE had uncovered some aspects of our programs that needed to be changed, and it was helpful to see those aspects in the broader context of the unit and then identify structures for engaging in those changes. It was also evident that meaningful change could not take place unless the changes were placed in the context of unit improvement. Examining the unit as a whole while writing the rejoinder produced new understandings of governance structures, program purposes, impediments to change, faculty commitment, NCATE standards writ large, and data-driven decision making. For example, the rejoinder task force quickly realized that some areas for improvement would require structural changes in unit governance, and those structures were discussed by faculty and approved by leadership for immediate implementation.
Lessons Learned

Lessons from the aftermath continue to emerge. One of the most important lessons we learned was that it is important to take a big picture view; it is helpful to step back – learning how to potentially benefit/improve individual programs by seeing others in a ''new'' light. Writing the rejoinder and participating on task forces allowed participants to see each program in the unit in its broader context – like that of an accreditation examiner. Even faculty who were not directly participating in these activities could see how program data and continuous quality improvement were revitalized missions of our unit. At each COE or unit meeting (including the Advisory Council), data were put forth to constituents and feedback was requested. For some, this may have been a new opportunity to see individual programs in a different light. For many, this was an opportunity to understand how the various unit programs work together and how each program may learn from every other. Second, we learned that re-emphasizing the need to connect to data-driven decision making – joining forces around data and quality – is a foundational step in program improvement. Again, experiences at subsequent meetings, formal and informal, demonstrated a specific dedication to making program decisions based on data. Task forces for individual programs, assessment for the entire unit, as well as governance for different aspects of the unit were initiated or re-energized. Efforts of these newly initiated task forces in the college are data-driven, and any curriculum change that has gone into effect
since the accreditation visit is based on some form of data derived for the visit or from the emerging assessment system. Finally, we learned that it is difficult to fully understand the complex suite of courses/program elements outside one's area of program involvement, and it is onerous to represent the unit's perspective instead of just one's own. This lesson is one that keeps on giving. Even generating the big picture view, it is difficult for anyone to have a handle on the needs of every program in the unit. Thus, it remains important to have clear governance structures where representatives from all factions are involved. Faculty in elementary education, for example, may not have a clear understanding of secondary programs; those who work solely in advanced programs may not understand (or necessarily care about) initial programs; and faculty who work in various disciplines may have enough to focus on in their own areas such that it is difficult to understand disciplines outside their expertise. Put another way, there must be one coordinator or a centralized place for the unit to come together, while shared leadership and collaborative governance remain key to cohesive unit structure and effective program implementation.
THE VISION

Following the submission of the rejoinder, we determined that additional and immediate work was needed to address the weaknesses that we found in our assessment system. The Teacher Education Assessment Task Force was formed to begin some intensive efforts that have addressed this comprehensive monitoring since the initial accreditation visit. Primarily, changes have been made in compiling and using professional dispositions assessment data, monitoring progress on the Professional Year Assessment, creating an assessment handbook for the unit, mapping and aligning the elementary education curriculum, and restructuring the assessment system for advanced programs.

Professional Dispositions Assessment

Initial programs have a unit-wide assessment system based on Professional Dispositions assessments as well as the Professional Year Assessment (conducted in connection with the final two semesters of student teaching). In preparation for our NCATE review, unit members realized teacher candidate assessments were not completed with 100% compliance. The unit's technology data management system made it difficult to track data
collected. Likewise, the unit did not have evidence from every instructor, mentor teacher, or candidate. Therefore, the Teacher Education Assessment Work Group, in collaboration with the TET, TECC, and Professional Standards Committee, redesigned the Professional Dispositions assessment. In an effort to gather useful data and experiment with various approaches to dispositions assessment, all instructors in the required courses now have students self-assess according to a slightly revised ''technical dispositions'' form related to communication skills, professional responsibility, and professional interactions. Assessors can then agree or disagree with the candidate self-assessments and submit a signed form to the unit. If any areas are marked ''Unsatisfactory,'' then instructors meet with the candidate to address areas of concern. Additionally, an Area of Concern form is completed and remains on file in the Office of Teacher Education. Once a candidate has two concern forms on file, his or her progress in the program is immediately frozen until the concerns are addressed. These concerns are addressed on appeal to the Professional Standards Committee through a newly created process that requires a letter outlining the appeal with accompanying letters of support from program-specific faculty. This new process is now in its second phase of implementation, and the unit is pleased to have 100% compliance. Most importantly, the unit has a structure in place for assessing dispositions on which it can build as it seeks continuous quality improvement.
Professional Year Assessment

Professional year assessments are also being conducted in a more uniform manner. Supervisors receive training on the process in monthly supervisor meetings and then work with mentor teachers who are conducting the assessment. Every candidate has a professional year assessment completed by his or her mentor teacher and supervisor during the professional year, and compliance will continue to be closely monitored. Additional training occurred on the professional year assessment as it pertains to the assessment of dispositions. Those items on the professional year assessment that attend to dispositional issues were highlighted for supervisors in an effort to more closely monitor and develop dispositions during the professional year. Candidates remain aware of the integration of knowledge, skills, and dispositions assessment and progress as they self-assess throughout the program.
Assessment Handbook

Drawing on recommendations from other universities, a new assessment handbook was created, detailing the entire assessment system. The handbook describes each assessment and its place within the broader context of the unit. It also discusses the purposes for collecting the data from each assessment and how the data are used to further monitor and evaluate the program to bring about meaningful change. The handbook is available online for any interested partners, candidates, faculty, or professional staff.
Curriculum Mapping and Alignment

In addition to creating an assessment handbook and redesigning assessment systems for validity and compliance, the unit has made improvements that demonstrate the use of data for managing unit operations and programs. In particular, the department responsible for elementary education seized the opportunity to build on its already strong program by bringing faculty together to discuss program improvement. An elementary education task force was created to engage in curriculum mapping and program alignment (Hayes Jacobs, 1997), using a Backward Design model (Wiggins & McTighe, 2005). The unit coursework has been aligned, and faculty are working collaboratively to design instructional activities that lead to a more meaningful culminating activity, the professional portfolio. Data from partner schools and the COE advisory board indicated that our candidates did not have enough practical experience with some state-wide education initiatives [e.g., response to intervention (RtI)]. Therefore, faculty in the initial programs invited experts in the areas identified as weak (e.g., RtI) and participated in seminars on how to strengthen those connections for candidates. Likewise, elementary education faculty are restructuring coursework so that two pedagogy courses are required to coincide with the professional year internship experience. This will allow for more coherence and collaboration with school partners.
Restructuring Assessment for Advanced Programs

Advanced programs have also undergone a deep restructuring of their assessment process. The Teacher Education Assessment Work Group took on the task of responding to missing links in our advanced program assessment system. This group worked at a fast pace to implement several
new systems. For example, all advanced programs conducted surveys of graduates and their employers. Each program also conducted mid-point assessments of candidates in order to better monitor candidate performance. Culminating assessments have been redesigned, on the basis of candidate data, to better meet candidates' needs. Three new statements were added to each course evaluation form (advanced and initial programs), and the responses were aggregated across and within programs in order to better monitor performance on key program standards connected to the unit's conceptual framework, The Professional Educator. The three areas for improvement resulting from the BOE's review have led to more integration and coherence in the unit assessment system, so that each area for improvement serves as a guide for comprehensively monitoring candidate performance and managing unit operations and programs. The management teams now have a clear purpose and vision aligned with policy decisions and specified areas of responsibility.
Lessons Learned

Indeed, creating a vision for the future has also taught us important lessons. First, we learned that we have a great program but that not all stakeholders share that perception, and we need to work together to bring that vision to everyone. For example, as faculty, we occasionally assume that we know more about other programs than is necessarily the case. There are several excellent programs across the unit, but some of us may be somewhat "stuck" in our own ideas and slightly resistant to seeing how well other programs function – and how an understanding of those well-functioning programs could subsequently improve our own efforts. The task forces and writing teams that grew out of the aftermath of our NCATE visit resulted in more purposeful communication across programs. Faculty also learned that they do not necessarily have the same understandings of the same data. Thus, meetings focused on interpretation and on planning next steps are imperative to continuous quality improvement. At our institution, it is likely that some task forces will evolve into standing committees even after the rejoinder and action reports are complete. The second lesson we learned was that accreditation can bring about program change in relatively quick fashion. Change requires a great deal of work and effort on the part of leadership and faculty, but the "threat" of losing accreditation expedites the process and helps the slow wheels of academe to move faster. It is true that higher education institutions are often
slow-moving creatures with multiple processes and approvers in place for any change. However, when one of our own is "threatened" by a potential loss of accreditation or even critiqued harshly, a system will support its own. In order to prove and improve ourselves, we could expedite change processes in the name of our external reviews while maintaining program integrity. Third, in this vision phase, we learned that program faculty are not as resistant to change when the preponderance of evidence shows that change is needed. It is difficult to argue with facts, data, and evidence that state the opposite of what we may feel or believe. In our case, the evidence seemed to demonstrate what many of us already knew – we did not have a unified assessment system (particularly at the advanced program level). Even though we wanted to argue different points, we opted to look for the best remedies and to make compromises in identifying key concepts to assess across programs. We also think that the program coordinators and involved faculty would agree that the evidence shows the changes we are making to align our assessment system are key improvements. Processes and procedures for the professional year assessments, for example, have been streamlined, making our work more efficient and effective in documenting key outcomes for candidates. Finally, we learned that accreditation can serve as a unifying force in departments, programs, colleges, and throughout the unit. Academics do not always appreciate criticism, but it can serve as a catalyst for rallying faculty around common goals and purposes. Again, when the evidence suggested gaps in our assessment system, faculty joined forces to gather appropriate and useful data. The united effort of college leadership, department chairs, program coordinators, and involved faculty was not only instrumental in making important changes but also served to bring the unit together in purpose to effect future quality improvement based on assessment data. Currently, each department and program is using data for quality assurance and program improvement, if for no other reason than to be able to defend our programs in the future, and this vision has given faculty a common foundation on which to build.
CONCLUSION

During the build-up to the accreditation visit and during the visit itself, the unit's efforts to "decorate" for NCATE were perceived by faculty as, at best, an attempt simply to placate the BOE team or, at worst, an inauthentic endeavor to protect the unit. However, in light of the lessons learned from
our accreditation experience in all its phases – including the aftermath of the visit and the vision for the future – we have concluded that "decorating" is not such a pejorative exercise after all. In fact, we view the very activities of decorating as an essential part of effecting meaningful program change and improvement. In other words, decorating for NCATE was not a useless exercise. Instead, it allowed us to see what we were trying to cover up and helped us identify areas that we likely knew were in disrepair (in our case, an assessment system that was not functioning at a capacity that allowed us to use the data in the most useful ways). We would argue that such garnishing is often a meaningful activity in that it helps institutions take the necessary steps toward meaningful change. Without our emphasis on decorating, we would not have cleaned up our program in a way that permitted a close examination of everything we were doing. Thus, decorating for NCATE allowed us to dress up our areas of strength and simultaneously provided the opportunity to examine our weaknesses in light of those strengths – helping us to recognize that these were issues we could no longer sweep under the rug. In this way, decorating served not only as a tool to spruce up the appearance of our unit's programs but also as a mechanism for identifying programmatic elements associated with assessment that no amount of tidying was going to make presentable. Early on, we perhaps assumed that we could simply decorate our assessment system, but we quickly determined that we needed to make substantive changes that were not going to be accomplished with simple adornments. This decision, carried to fruition in each stage of the accreditation process, has provided a solid foundation on which the unit can build quality improvements in every program.
REFERENCES

Darling-Hammond, L. (Ed.) (2005). Professional development schools: Schools for developing a profession. New York: Teachers College Press.
Fullan, M., & Hargreaves, A. (1996). What's worth fighting for in your school? New York: Teachers College Press.
Hayes Jacobs, H. (1997). Mapping the big picture: Integrating curriculum and assessment K-12. Alexandria, VA: ASCD.
Johnston, M., with the Educators for Collaborative Change. (1997). Contradictions in collaboration: New thinking on school/university partnerships. New York: Teachers College Press.
Lieberman, A. (Ed.) (2005). The roots of educational change: International handbook of educational change. Netherlands: Springer.
Senge, P., Cambron-McCabe, N., Lucas, T., Smith, B., Dutton, J., & Kleiner, A. (2000). Schools that learn: A fifth discipline fieldbook for educators, parents, and everyone who cares about education. New York: Doubleday.
Teitel, L. (2003). The professional development schools handbook: Starting, sustaining and assessing partnerships that improve student learning. Thousand Oaks, CA: Corwin Press.
Wiggins, G., & McTighe, J. (2005). Understanding by design (2nd ed.). Alexandria, VA: Merrill Education, ASCD College Textbook Series.
CHAPTER 4

TENSIONS, COLLABORATION, AND PIZZA CREATE PARADIGM SHIFTS IN A TEACHER EDUCATION PROGRAM

Lynnette B. Erickson, Nancy Wentworth and Sharon Black

ABSTRACT

Brigham Young University has been consistently accredited by NCATE since 1954. Our accreditation reports of past years focused on input information – general goals, complicated organization diagrams, and clinical performance assessments. When NCATE moved from inputs to outcomes with evidence grounded in measurable data, we worked collaboratively among teacher education faculty, faculty from the arts and sciences colleges, and public school partners to overhaul our assessment system and design new instruments. Our current accreditation reports include course and clinical assessments aligned with specific program outcomes, statistical charts detailing the levels at which these outcomes are being met, and documentation of programmatic decisions based on the findings of our assessments. Moving from input descriptions to output evidence was a painful process. However, we have come to appreciate the usefulness and value of our experiences, the tools that emerged, and the new decision-making processes we now engage in. This chapter is a recounting of our frustrations and the lessons we learned as we moved toward a culture of data-based decision-making.

Tensions in Teacher Preparation: Accountability, Assessment, and Accreditation
Advances in Research on Teaching, Volume 12, 55–68
Copyright © 2010 by Emerald Group Publishing Limited
All rights of reproduction in any form reserved
ISSN: 1479-3687/doi:10.1108/S1479-3687(2010)0000012007
Brigham Young University (BYU) was initially accredited by NCATE in 1954 and has been consistently accredited since that time. Accreditation renewal was always stressful to some degree because of the detailed level of work required to meet the specifications entailed, but never had we felt much fear of rejection. Then came the NCATE review of 2002. NCATE had moved from a focus on program inputs to teacher candidate outcomes with an emphasis on evidence grounded in measurable data. Input evidence (course syllabi and faculty qualifications) had given way to evidence of student outcomes (observational scores and final projects). Our evidence of course outcomes and our means of assessment – apparently adequate in past reviews – failed to meet the still nebulous new criteria developed for the culture of accountability that was beginning to dominate the education profession. We found ourselves on probation with two years to make significant systemic changes in our teacher preparation endeavor. Change needed to be immediate and pervasive. Although we knew from experience that teacher candidates from this institution had high market value, there was little consistent evidence that the teacher preparation programs were actually advancing the knowledge, skills, and dispositions that were making them good teachers. BYU is a large school, and teacher education is a major cross-campus undertaking. We recognized that goals, performance evaluations, assessment language, and designated outcomes were inconsistent at best across early childhood, elementary, and special education licensure programs offered by the school of education and across secondary education content-area programs offered by colleges and departments across the university. Requirements for evidence and consistency prompted introspection and movement toward change. We needed to review the organizational structure of our program and make sweeping adaptations in the assessment instruments used to measure candidate performance and program efficacy, our data collection processes, and the ways in which we were analyzing data to improve our program. Additionally, our board of trustees and university president made it clear that we would pass the next accreditation. We gathered the bravado to face each of these tasks. All three will be described, followed by a discussion of the lessons we have learned from the accreditation
process. We hope that those who recognize their own challenges in ours will find our experience helpful and encouraging.
ORGANIZATIONAL STRUCTURE

Our first major consideration was the breadth and size of our teacher preparation program. The number of teaching majors at BYU was larger than at many universities: 1,200–1,400 licensed teachers each year from early childhood, elementary, and special education programs, along with secondary education certifications. Each of these programs had varying expectations and standards for coursework and clinical experiences that needed to be aligned and focused if they were to be assessed and accredited. To create common goals, outcome expectations, and assessment measures among these units would require collaboration of 22 secondary education content area departments housed in 7 colleges on campus. The task was daunting from the outset. We gathered our courage, launched massive numbers of emails, and established relations with a nearby deli that delivers. First, we realized that the definition of the unit was inefficient and in some ways misleading. The School of Education alone did not and could not prepare over a thousand teachers a year; we had neither the resources nor the energy. To emphasize that teacher preparation is a campus-wide endeavor, not a school of education program with cross-campus outreach, the accreditation unit was changed from college-based to university-based. A university-wide Educator Preparation Program (EPP) was established consisting of 8 colleges and 19 departments, only 3 of which were under the umbrella of the School of Education (undergraduate and graduate licensure programs in early childhood education, elementary education, and special education). The departments housed in the seven colleges outside the School of Education collaborated as part of the EPP to prepare undergraduate secondary teaching majors and minors. Many of our fears about their willingness to collaborate proved to be unfounded; we made new friends, established new alliances, and expanded our orders to the deli. The formation of the EPP led to additional organizational structures to support the new unit. Several committees were set up to unite programs behind a shared vision, to promote trust, and to improve the licensure programs across the university. For more than 25 years, BYU has maintained a partnership with five nearby school districts, and during recent years, the partnership has been extended to include arts and sciences departments and colleges
throughout the university. With the NCATE imperative for alignment, preexisting partnership structures and relationships were called forth, and a number of tripartite councils and committees were regrouped and re-formed. Some of the names suggest functions; the acronyms suggest our need for mnemonic devices:
The University Council on Teacher Education (UCOTE)
Secondary Education Design Team (SEDT)
Secondary Education Partnership Advisory Council (SEPAC)
Special Education Partnership Advisory Council (SPEDPAC)
Early Childhood and Elementary Partnership Advisory Council (EEPAC)
University Council on Teacher Education

With the EPP bringing responsibility for teacher preparation to the university level, a university-level committee was established to govern and sustain it. The UCOTE is chaired by a university associate academic vice president, and no one ignores the suggestions of an associate academic vice president – even when they are the same suggestions made by the education dean. The committee also consists of representatives, usually associate deans, from the seven colleges on campus that have teacher licensure programs. This body reviews program data and recommends program-wide changes and implementations. UCOTE does not dictate all unit outcomes, transition points, assessment instruments, data management systems, and reporting formats, but it helps facilitate the process of collaborative development. The individual licensure programs discuss issues, and then final decisions are ratified at the UCOTE level. Any unit changes must be approved by UCOTE as well.
Secondary Education Design Team

The SEDT consists of a faculty representative from each content area department that offers a secondary education licensure program. Monthly meetings are held to coordinate common course goals and objectives and to review common assessments. Where once there was discomfort, team members now comment that they feel more connected to this group of teacher educators than to faculty in their own departments. They discuss issues related to pedagogy, teacher growth, classroom issues, and student
development – topics with which colleagues in their content areas are not concerned. Recognizing that, in their roles preparing teachers, they have more commonalities than differences, they regularly share ideas concerning their candidates' field experiences and otherwise come together in productive and collaborative ways. For example, they recently collaborated on a research project integrating content area literacy into their methods courses and presented together at the National Reading Conference.
Partnership Advisory Councils

Three tripartite councils were formed with representatives from teacher education faculty, arts and sciences faculty, and public school educators: SEPAC, SPEDPAC, and EEPAC. The councils meet every other month, with an executive meeting in the alternate months, to review program goals, structure, courses, and assessments. These councils serve several important functions, such as reviewing licensure program data and making recommendations for program improvement. In addition, a Student Advisory Council (SAC) was established to meet twice each semester to represent candidate concerns, contribute to program development, suggest solutions, and provide meaningful feedback about various aspects of the licensure programs, and otherwise to remind us of their needs and our weaknesses.
INSTRUMENT DEVELOPMENT

With structures, councils, and committees more efficiently aligned, the next challenge was to find or develop the instruments that would assess what we valued. But first we needed to decide what we valued – or should value. Because tools used to assess the performance of teacher candidates must be rooted in generally accepted standards for teaching, faculty members from all licensure programs reviewed the widely accepted Interstate New Teacher Assessment and Support Consortium (INTASC) Standards; after all, everyone else seemed to be using them, and our state office was mandating them energetically. With participant consensus, we selected the INTASC Standards to be the foundation of our assessment tools. We agreed that our assessment instruments must not be limited to academic performance in university settings, but must include demonstration of individual candidate performance in authentic settings. We had seen
our share of gifted classroom teachers with less than stellar grade point averages. And we realized that the real strengths of our program were not necessarily A, B, C matters. We agreed with Darling-Hammond (2006) that performance assessments not only provide evidence for individual performances, but help to represent program goals, create a common language, focus understanding, and provide information about program strengths and weaknesses. However, finding or creating reliable and valid instruments to provide data appropriate and meaningful for individual programs can be challenging – time and labor intensive, at best. Several measures were developed or selected from outside sources to assess candidates unit-wide:

Candidate disposition scales are constructed to provide information about candidates' commitment to teaching, their views regarding diverse students, their locus of control, and their values.

Clinical practice assessment system (CPAS) ratings, which mirror the INTASC Standards, can be used to provide formative and summative data about the content knowledge of teacher education candidates in all field and clinical settings.

Teacher Work Sample (TWS) (Renaissance Group, 2001), the capstone assignment for each licensure program, demonstrates the candidates' instructional decision making, analysis of student learning, and reflection, with assessment by faculty teams using rubrics adapted from the Renaissance Partnership for Improving Teacher Quality (2001).

The professional and interpersonal behavior scale (PIBS), completed by both candidates and instructors, addresses behaviors related to personal integrity, flexibility, initiative, etc.

Each instrument was accompanied by rubrics or scoring guides and a schedule for administration. Program designers agreed on transition points, consistent across programs, at which the assessment instruments would be administered to provide critical evidence of candidate performance. Consensus did require time and tact; we added pizza delivery to the deli number on our speed dials. As the instruments were being developed, faculty were being trained on their meaning and uses. The interactivity of development, training, and further development served three very important purposes (in addition to helping us develop our tolerance for frustration and talent for tact). First, training sessions provided settings to collect valuable feedback on the development and refinement of the instruments. Second, the focus of training centered the participants firmly in the content of the
instrument – the INTASC Standards – and gave them common ground upon which to base their ratings of candidate performance, thus providing a foundation for establishing reliability in using the instruments. Third, training discussions led to the development of an electronic version of the clinical assessment forms that could be used during classroom observations; these forms provide a summary of each classroom visit that contributes to candidates' final ratings and, when downloaded to our electronic assessment system database, facilitate analysis and summary of the data. (Technology is certainly a benefit for those who can handle it!)
DATA COLLECTION AND ANALYSIS

A unit assessment system (UAS) was created to identify the structures for collecting, analyzing, and reporting data and for using the results for program improvement. The systematic collection of data from multiple sources has provided essential information for unit planning and decision making. Through data analysis, courses, practica, and programs are evaluated and modifications implemented to improve candidate performance and unit quality. Wide dissemination of the assessment data keeps faculty informed and involved, generating ownership of program changes and unit improvements. Technology has been crucial in collecting and analyzing data for accuracy and consistency. The data collection system provides the functional capability to compare candidates' performance on assessments with other indicators of their competence. In addition, it has the capability of reporting inter-rater reliability across any combination of faculty members who are using a particular rubric. Regular examination of these data has enhanced our ability to provide fair, accurate, and consistent results to our candidates as well as to program administrators, faculty, and accreditation committees.
LESSONS LEARNED FROM THE ACCREDITATION PROCESS

The overhauling of the BYU assessment system and the design of new instruments were done under the pressure of accreditation requirements, but we admit that the process moved us toward a culture of data-based decision-making. Formerly, our assessment presentations had involved complicated diagrams of teachers and students with lists of general goals
and observational tools. These have been replaced with detailed statistical charts that specifically address the extent to which measurable goals are being met. We now use the common language of accountability based on national teacher development standards to describe our methods of systematic data collection and analysis. The process was often painful, but as we developed the tools and learned to use them, we have now come to recognize their value.
Paradigm Shift to University-Level Unit

The first paradigm that had to shift was in the relationship of our many licensure programs: from independent to interdependent. Independence does not produce the common goals, outcomes, assessment instruments, or language of accountability required by accreditation bodies. Interdependency comes from a base of interdependent units, so we started by creating some. When the governance and accreditation unit for teacher education had been the School of Education, teacher preparation had been viewed as the work of the School of Education. Depending on the importance they attached to preparing teachers in their area – influenced by numbers of majors and department traditions – other units had designed and maintained programs with varying standards and practices. The dean of education could suggest common assessment processes to deans of colleges with licensure programs – the significant words being could and suggest. But when the EPP was developed and labeled as a university-wide unit, the paradigm shifted and the individual programs had to answer to more than "the dean of another college." The larger institutional unit changed the political landscape of accountability across our campus. Gaining university-level approval requires data and evidence in ways that implementing college-level decisions did not. And the decisions made by a committee chaired by an associate academic vice president carry the necessary weight to be implemented across the EPP. With this paradigm shift, the dean of the School of Education was relieved of the responsibility (and frustration) of trying to achieve some degree of unity amid unit independence. By taking on the requirement to collect and maintain supporting data, we have gained important opportunities for using those data constructively. Although a majority of the School of Education and cross-campus faculty and administrators have embraced or at least tolerated the shift to a campus-wide perspective on teacher education, some have felt that the shift to a larger university unit has taken away their right to define what they value specifically in their programs. And they were vocal about this.
A secondary art educator stated that to her the changes felt "top down." She acknowledged value in the processes of instrument development, but resented the change to a common assessment as an "accreditation demand" (which it was) and an administrative decision rather than an individual choice (also true). A participant from the English Department explained that the unit assessment instruments did not provide data on which decisions about the specific English Education program could be based. (INTASC includes "communication," but not necessarily thesis statements and rhetorical structures.) He was willing to use the unit assessment instruments to meet national accreditation requirements; however, he continues to use other means to evaluate candidates alongside the common instruments used unit-wide. Similarly, a number of participants understood the importance of national accreditation but felt that they had sacrificed some unique program elements. However, relationships of respect and trust among participants were developing, enabling open discussions during the forging of the common EPP assessments.
Paradigm Shift to Professional Standards and Data-Based Reporting

Before these changes, the licensure programs all used the same form for final student teaching evaluations – primarily a written narrative of candidate performance. Each licensure program used its own additional instruments to assess candidates' content knowledge, but teaching skills and dispositions were not regularly assessed in any program. Shifting was necessary – not only in the paradigm of independence but also in the paradigm of acceptable evidence. The EPP created, implemented, and conducted reliability and validity testing on three common instruments with the (more or less enthusiastic) support of all licensure programs. Data were produced that were to become the basis of decision making in all units. These instruments have been used to assess candidates' knowledge, skills, and dispositions. Data reports are currently provided to all departments at the end of each semester by the data assessment team. The paradigm shifted further to use professional standards as the foundation of the databases. The INTASC Standards were successful for NCATE and for the state office of education, but when it was necessary to use them as the basis for the CPAS instrument, many programs did not experience such a natural "fit." Content knowledge was still at the forefront of their concerns, and it was not necessarily easy to shift the scope
of assessment to focus on candidates' attention to student development, assessment, multicultural needs, and the like. Reactions varied, of course. A faculty member in a secondary foreign language program was grateful for the shift, commenting that he felt he knew much more specific information about his candidates as teachers after he had used the new instruments. However, other content area faculty members expressed their concern that the language in the CPAS instrument and the TWS requirement did not reflect vocabulary and methodology specific to their content area. (Aesthetic applies well in fine arts areas, but INTASC does not care to use it.) Faculty and administrators with these concerns requested that they be allowed to modify indicators and prompts to better reflect what they wanted their candidates to know and be able to do. For example, the early childhood education faculty included the language of their accrediting agency, NAEYC, in documents they provided to their candidates to underscore connections between these familiar requirements and the INTASC Standards. The EPP has responded to the concerns of the individual licensure programs by encouraging them to include content area language in their instruments. However, the EPP reviews the modified documents to ensure that the additions have not compromised the integrity of the instruments and that the data collected still provide the information required of all programs within the unit. Although alignment of expectations and required data for all programs has not been an easy process, a large number of faculty have now gone beyond mere compliance with accreditation requirements to make meaningful applications that improve the function of their programs and of the EPP overall.
Paradigm Shift to Consistent Expectations and Conceptualizations

Along with shifts involved in valuing, gathering, and using data, many involved in our programs have had to shift traditional expectations of their candidates and ways of using the instruments to become consistent within the EPP unit. Introduction of the clinical assessment instrument caused a paradigm shift among university field supervisors (and later mentor teachers) concerning observations and ratings of practicum students. Prior instruments had been centered on the candidates' developmental level of skill, making it possible for a candidate to receive the highest mark on an observation scale for a level of experience and performance that was only in the developing stages.
In contrast, the CPAS instrument was designed to allow for observation and evaluation of growth over time, rating candidates' progress on a scale from novice to veteran teacher level. But our classroom partners, who model and mentor for candidates during their practica, pointed out that this created an unrealistic expectation for upward movement. Thus, although the "novice to veteran" designation pattern is still the basis of the scale, the top end has been dropped, making it much more possible for candidates in their capstone field experience to achieve the top mark on the scale. It is still difficult for our supervisors to limit themselves to the lower end of the scale for candidates in early practica and for candidates who have not progressed as well as others. We are still working to ensure that all users of the instrument apply the elements of the rubric accurately in giving feedback to candidates. (Tissues are less costly than inconsistency.) With greater consistency in data gathering has come greater usability of the data we have gathered. When expectations are common across programs, and language is common enough to assess consistency, there is a basis for evaluation; thus a history of teaching performance can be monitored and data can guide programmatic decisions and adjustments. For example, when we found consistently low scores across programs in our candidates' understanding of diversity, we not only strengthened courses concerning diversity but also provided seminars and nationally recognized guest speakers to instruct university faculty as well as candidates regarding issues of diversity in the classroom.
Paradigm Shift to Reliability and Validity

Our earlier student teaching evaluation form had been accepted somewhat uncritically. It included items that were not based on national professional standards, and data were accepted and filed without being analyzed to determine the instrument's validity or reliability. Those who used the instrument were seldom trained concerning the meaning of the items on the instrument – another source of inconsistency. But reliability and validity of all instruments have been important to the EPP. Data that are to drive decision making must be accurate, and a paradigm of accuracy and consistency must drive instrument development and use. Teacher education and content area faculty, clinical faculty, public school teachers, and district leaders – constituting a wide range of those using the new assessments – reviewed the indicators and prompts used in the instruments. Revisions of the language for specific programs were compared
against the general indicators by EPP members to ensure common meaning. Reliability has been assessed using Cronbach's alpha, calculated for each subscale of the CPAS (subscales range from 2 to 6 items). The data indicate that raters in elementary education show acceptable reliability on the INTASC Standards, although the first two Standards (content and learning) show the lowest levels. The data, however, can also be interpreted to indicate that the teacher preparation programs, not the indicators, are less secure in these areas than in others and thus need more attention in coursework and candidate preparation. Additional data from the Rasch analysis add information about the difficulty levels of the indicators and the use of the full extent of the scale.
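(As standard statistical background rather than a detail reported in this chapter: Cronbach's alpha for a subscale of $k$ items is conventionally computed as $\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k}\sigma_i^2}{\sigma_X^2}\right)$, where $\sigma_i^2$ is the variance of item $i$ and $\sigma_X^2$ is the variance of the summed subscale score. Values of roughly .70 or higher are conventionally treated as acceptable, and, all else being equal, very short subscales such as two-item ones tend to yield lower alphas than longer ones.)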
Paradigm Shift to Electronic Data Collection and Management

Traditionally, our teacher education programs have collected data in hard copy format; some data have been maintained in hard copy and some transferred to various electronic programs, depending on use and users. Some faculty and administrators are more at home with technological systems than others; thus the imperative to efficiently archive and analyze data collected by semester and program, as required by NCATE, forced many into paradigm shifts that took them far beyond their comfort zones. We needed to find or develop a single electronic data system, one capable of handling the number and size of our programs. Leaders explored enterprise-level solutions that would meet the needs, but these were beyond the time and financial resources we had to work with. After investigating commercial data management programs, we adopted LiveText as our primary data gathering and management system. Then came the hardest part – we had to learn to use it. Both faculty and candidates were used to paper, and the shifts involved in trusting the new system and attempting to use it have taken some time (as well as more pizza). A special corner was set up in our technology lab, with large screens, plenty of whiteboard space, and the most comfortable seating we had available. We hired a group of geeks from the "tech generation" to teach their parents' peers as well as their own how to turn on the computers and use the system. After several semesters and some rather tipsy paradigm discomfort, it is now running efficiently and effectively (except when something goes wrong with it). Now that the electronic data paradigm is more secure, even the more insecure faculty members are acknowledging its effectiveness and relinquishing
some of their paper consumption. Because instrument scores are now gathered in LiveText, data can be moved electronically from the classroom to the university, and candidates receive more timely (and in many cases more legible) feedback. Next, an electronic version of the assessment form was created in Excel to be used on a laptop during the actual observation; today the system is fully electronic. As with most electronic systems, issues and modifications are daily matters. Fortunately, our computer support staff members are knowledgeable, friendly, tactful, and VERY patient.
CONCLUSION

It is difficult to label something as a conclusion when the process described is not concluded – and shouldn't be. We continue to look beyond the ritual of accreditation toward how we can best meet the needs of our students. Fortunately, NCATE is no longer the only game in town. We have just completed a TEAC accreditation review using the instruments developed for NCATE. We found that the TEAC process asked us to audit our program for quality control as well as to report student outcome data. We are also aware that NCATE is making changes to its accreditation process, allowing for two different paths. What will these mean to us if we decide to remain accredited by both NCATE and TEAC? Paradigm shifts have as many aftershocks as earthquakes, and in some cases they seem to require about as much tearing down as rebuilding. Passing the NCATE and TEAC accreditation reviews has been a significant victory for us (having required late nights, soup runs, and a good deal more pizza). However, the rebuilding and recrafting of our programs have only begun and perhaps will never end. The teacher education programs across our campus now share many aspects of a common vision, and we have some workable structures in place. But we realize that a lot remains to be filled in, and a lot will still need to change. One of the most significant movements in the paradigm quake involved the underlying conception of teacher education as a group of independent programs, each doing what its faculty and administrators do best. This has shifted to a view of teacher preparation as a campus-wide responsibility, with cross-campus efforts all contributing to a unified, consistent whole. Required re-construction has included creation of a campus-wide "unit" governed by a committee with cross-campus representatives, chaired by an associate academic vice president. Loss of
traditional structures and territory has been painful, but the new unifying structure has been effective. We still struggle with language, personal sensitivities, and some differences in goals and perspectives, but participants are positive and tactful – even when they do not agree. Another significant paradigm shift was the move from experience/observation-based assessment that was passed on as feedback and largely forgotten, to data-driven assessment with decisions based on data analysis. Despite this, there are some faculty with many years of teaching experience who feel they have compromised what they have valued and who continue to express concerns about the direction our program has taken to satisfy accreditation agencies. However, the majority of the faculty have adapted well to the unification of the unit and expressed satisfaction with the new accountability system. It is clear to us that this process is never really over. Now we are re-examining instruments and systems – asking if our assessment criteria really assess what we think they assess, and if what they assess is what we really value. We're still making adaptations and adjustments based on data (and the apparent lack thereof). Data are dynamic, not static, and we are coming to realize this. Given the fluid nature of the education profession, we will probably always be assessing, adapting, and adjusting. The only things that change faster than paradigms and programs are the situations and needs of the people who must use them. What is ahead? More shifting, more evaluation, more data, and more re-consideration of instruments, storage, analysis, and applications. That means more meetings and therefore more soup, sandwiches, and pizza.
REFERENCES

Darling-Hammond, L. (2006). Assessing teacher education: The usefulness of multiple measures for assessing program outcomes. Journal of Teacher Education, 57(2), 120–135.
Renaissance Partnership for Improving Teacher Quality. (2001). Project overview. Available at http://www.uni.edu/itq/. Retrieved on September 4, 2009.
CHAPTER 5

INTERNATIONAL PERSPECTIVES ON ACCOUNTABILITY AND ACCREDITATION: ARE WE ASKING THE RIGHT QUESTIONS?

Brenda L. H. Marina, Cindi Chance and Judi Repman

ABSTRACT

This chapter focuses on accountability and accreditation policies and practices in teacher education in the United States, England, Wales, and China. Despite the differences between countries, issues and problems of teacher education from country to country are remarkably similar. As a profession, we must examine where we are and where we need to be to meet the needs of our global society. We can begin by defining quality teaching and the essential skills for 21st-century teachers and students. As part of a global profession, teachers and educators must not work in isolation. It will be up to the leaders in the profession to educate political and accreditation bodies by sharing models that will meet the needs of our changing world. Can we give up the nostalgic notions of education and provide assistance to education preparation professionals to move toward new rapid-change models?
Tensions in Teacher Preparation: Accountability, Assessment, and Accreditation
Advances in Research on Teaching, Volume 12, 69–86
Copyright © 2010 by Emerald Group Publishing Limited
All rights of reproduction in any form reserved
ISSN: 1479-3687/doi:10.1108/S1479-3687(2010)0000012008
This chapter focuses on accountability and accreditation policies and practices in teacher education in the United States, England, Wales, and China. Common issues and challenges, positive and negative impacts of the processes on educator preparation faculty, and the direct and indirect impact of accountability and accreditation policies and practices on the profession are discussed. To accurately examine these issues, we must first revisit the global purposes of education.
GLOBAL PURPOSES OF EDUCATION

Internationally, and without exception, the overall purposes of education fall into two broad categories – social and economic. Each nation relies on educators at all levels to socialize youth into productive citizens of that country. In democratic nations, the goals often include phrases such as "shape youth into democratic citizens, and caring and responsive adults." Likewise, other nations have similar outcomes with language appropriate for that country's political and social structures. The outcomes associated with the economic purposes are more consistent across nations, including phrases such as "develop productive citizens with the knowledge and skills necessary to sustain self and the nation." The internationally accepted purposes of education suggest a need to examine existing teacher education policies and practices and to further question their appropriateness for meeting the challenge of preparing educators with the knowledge, skills, and dispositions necessary to meet the social and economic challenges of the flat 21st-century world. Although social and economic change is constant and normal, what schools and the world are experiencing now is not change as any previous generation experienced it. Rapid changes and demands resulting from the "flattening" world are placing unprecedented demands on schools and educators (Zhao, 2009). School administrators and teachers, teacher preparation programs, and educational policy makers are perceived by many external to the education community as "just jogging along" as though these were normal change times, instead of times of rapid change that demand major reform in the operation of teacher preparation units. There are some who question the policies and processes of accreditation of teacher preparation programs. Is accreditation of teacher preparation programs supporting the development of high-quality teachers who can teach effectively for the rapid pace at which the world is changing? Is it possible that accreditation actually hinders the progress of teacher preparation programs and their candidates,
thus restricting their ability to respond to the rapid and radical educational changes needed to address the learning needs of 21st-century students and societies? With this in mind, we question whether countries such as China, with no formal accreditation processes, are able to assess and meet the educational needs of their students more rapidly than countries such as the United States and the United Kingdom, where accreditation and approval are more structured. This chapter grew out of a collaborative effort among faculty who share an interest in the culture of education policy and who were actively involved in international educational research. It does not provide all the answers; rather, it identifies and describes key policy issues and practices that merit the attention of teachers, educational leaders, and policy makers. Social, economic, and educational factors, especially historical antecedents, shape educational systems, and examining them makes us more aware of the factors impacting our own system (Earley, 2009). While many authors have developed extensive lists of these factors, virtually all agree that technology and globalization are issues of critical importance for educators and students in the 21st century (Berry, 2008; Friedman, 2005; Kissock, 2002; Zhao, 2009). None of the issues in this chapter can be analyzed in isolation; each must be examined in the larger context of the forces shaping it. Despite the differences between countries, issues and problems of teacher education from country to country are remarkably similar. According to Howard B. Leavett (1992, p. xi):

An examination of other countries' issues brings an international perspective to decision and policy making in one's own country. This is likely to produce a broader, more detached viewpoint toward one's own problems. An international awareness can reveal that the problems faced in one country are shared by others, even though their resolution may differ.
Vignettes from interviews conducted in 2009 by authors Marina and Chance provide an international perspective and a snapshot of the issues and challenges related to accreditation and accountability. These conversations revealed the varied meanings of "21st-century learning," while classroom observations augmented these descriptions. As educators and researchers, we believe that teaching and teacher education is a global profession (Kissock, 2002; Zhao, 2009), and we have come to understand the value of learning from colleagues in other settings. We recognize the commonality of issues and proposed solutions, and we share this information because we value learning from each other to inform practice.
INTERNATIONAL PERSPECTIVES ON ACCREDITATION AND STANDARDS

Many schools have not prepared students to be college- and citizenship-ready because standardized tests are shaped by outdated notions of academic rigor and by political and financial considerations. Rigor has long been defined as having the right answers to academic content, but the rigor that matters most for the 21st century and for the world of work, learning, and citizenship is the ability to ask the right questions and to solve problems (Cochran-Smith, 2009; Wagner, 2008; Zhao, 2009). To answer the question "Are we meeting the needs of 21st-century learners with 20th-century policies and procedures?", authors Marina and Chance spent several weeks engaging in dialogue and gathering information in the United States, England, Wales, and China. Learning walks (classroom observations) were used to determine what "rigor" may look like in a classroom. Interviews and focus group discussions told a broader story of the impact and importance of educational policy on teaching and learning for the new millennium. Schools selected for this project were considered high-performing schools and had recently gone through major transformations. Observations in the schools in the United States, England, and Wales demonstrated an even distribution between teacher-centered and student-centered instruction. In contrast, the schools in China were strikingly teacher-centered. With teacher-centered instruction, the teacher is the center of the learning activity, whereas with student-centered instruction, the students' activities are the center of learning. While teaching styles varied, there was a clear indication of high expectations for all students. Behavior rules and expectations permeated the walls of these buildings and exuded the ethos of what may be considered 21st-century learning. Some buildings were newer, whereas others leaked when it rained. Some schools had new "state-of-the-art" technology, whereas others were able to obtain only a few upgrades in equipment. There is evidence that the provision of new buildings or the renovation of old ones can have positive effects on motivation and engagement (Berry, 2002). A number of studies (Cash, 1993; Earthman, Cash, & Van Berkum, 1995; Edwards, 1992) found evidence of a general link between building condition and achievement. Earthman (2004) found that while inadequate school buildings contribute to poor student performance, there was not convincing evidence that schools need be anything more than adequate. Stricherz (2000) notes that student achievement lags in inadequate
school buildings, but suggests there is no hard evidence to prove that student performance rises when facilities improve well beyond the norm. The importance of teacher quality in relation to school buildings is highlighted in a PricewaterhouseCoopers (2000) study, which suggests that good teaching takes place in schools with a good physical environment. The evidence suggests that poorly designed buildings can impact on the morale and motivation of both staff and pupils. However, the relationships between teaching and learning, student performance, and buildings are complex, and there is not enough evidence to give firm guidance to policy makers on priorities for funding. According to Woolner, Hall, Higgins, McCaughey, and Wall (2007, p. 48):

The relationship between people and their environment is complex and therefore any outcomes from a change in setting are likely to be produced through an involved chain of events. It is the defining and understanding of these mediating chains that is key and must take account of issues relating to ownership, relevance, purpose and permanence.
The interviews and focus group sessions were the beginning point for reconciling such issues related to teaching and learning. Interviews in the United States revealed the impact of the No Child Left Behind Act (NCLB) on state- and local-level decisions (Department of Education (DoE), 2002). One teacher indicated that "as a result of NCLB, we have less freedom and creativity has been lost; there is simply too much testing." In general, the US principals suggested that NCLB was instrumental in school reform and has been "a good attempt." However, state-level graduation and assessment policies were an alarming concern, as were local policies that affected funding for enrichment activities for teachers' professional development. Zhao (2009) asserts that much of the rationale behind NCLB is faulty and that our policy focus on defining achievement using a narrow range of subject-specific tests works against our ability to prepare students for the technology-focused global workplace of the 21st century. As will be seen below, the accountability model at the heart of NCLB is having a significant impact on teacher accreditation as well (Cochran-Smith, 2009). James Cibulka (2009), the current president of NCATE, recently identified four comprehensive changes that must be implemented in the accreditation process. Three of those changes include phrases such as "increase student learning," "impact on ... P-12 student learning," and "improve P-12 student learning" (p. 45). Given that P-12 student learning is currently defined by the measures specified in NCLB, educator preparation units will be required and expected to establish narrowly focused measures of accountability.
These new measures mark a significant change from placing reform of educator preparation at the state or local level toward a model linking the effectiveness of teacher education directly to classroom student performance (Cochran-Smith, 2009; Hallinan & Khmelkov, 2001; Zhao, 2009). Interviews in England and Wales revealed that the "Every Child Matters" national policy was viewed as a relatively positive initiative that changed classroom practices. One head teacher (principal) indicated that "the students have benefited because the national policy pushed them to redefine rigor." Several teachers noted that the National Curriculum brought about personalized learning and agreed that the Assessment for Learning policy had a positive impact because the teachers and the students see where the students are and know what they need to do to move to the next level. Interviews in China brought about an unanticipated discovery. The focus was on the family's expectations and not on accreditation or program approval policies. The country's one-child policy has had a significant impact on both teachers and head teachers (principals). One teacher indicated, "I feel great pressure and responsibility because each family depends on their one child to get a good education so they can enter a good college; I am responsible for helping a whole family, not just a child." One head teacher (principal) also noted the significance of the one-child policy: "Each child is the hope, so the principal has the responsibility to make productive citizens." Several teachers suggested that they are not aware of all the education policies: "We don't know all the policies; we just do what we are told." The college entrance examination is the major criterion for university admissions, and a poor performance on this examination almost always means giving up on that goal. Overall, teachers and teacher leaders, school heads (principals), and school governors (school board members) felt that, regardless of the mandated policy, classroom practice based on the policy is only partially implemented. The educators explained that it has become increasingly difficult to keep up with the rapid rate of changes in policies; this explains the partial implementation of new policies. Finding the balance between 20th-century policies and 21st-century teaching and learning is a moving target; yet it is clear that "[t]he challenge our schools must face is to begin teaching the skills and knowledge needed for the virtual economy" (Zhao, 2009, p. 131). In all the countries identified, the policy makers believed that they were listening to the appropriate audiences to make decisions about education. In all cases, educators (teachers and principals) seemed to think otherwise.
Collectively, the observations and interviews established approaches that could be used to stimulate creative thinking and promote academic rigor in the midst of budget constraints, older equipment, and even leaky roofs. So, what mindset might be helpful to meet the challenges of our rapidly changing world? The teachers suggested that their leadership (principals, head teachers) must create a positive culture for teaching and learning. A teacher from England stated, ‘‘our head teacher allows us the autonomy to be creative with lessons, so we can meet the needs of our students and maintain compliance with the National policies.’’ During a visit to a school in Wales, one teacher passionately explained, ‘‘We have a Tri-level reform, the classroom, the school and the local authorities. The tri-level works because the leadership is networking with others – there is more intervention and support for empowerment and accountability. It’s systems thinking!’’ A teacher from China indicated, ‘‘The new style of testing for college entrance is designed by the government and teachers must design new teaching ways to adapt.’’ A teacher in the United States ardently exclaimed, ‘‘There is pressure to do more with less, so the administration needs to get creative!’’ The principals (head teachers) suggested that teacher training, both university-based and ‘‘in-house,’’ is critical to the ethos of the organization. A head teacher in Wales emphatically suggested, ‘‘if we grow our own head teachers, we know what our leadership will think and do regardless of the ever-changing accountability policies.’’ According to the British author, mathematician, and philosopher Bertrand Russell (2007), we should always entertain our opinions with a measure of doubt. The authors of this chapter contend that the use of narrow measures for accountability has both pedagogical consequences and practical implications for the profession. In the United States, this policy work includes little input from stakeholders such as teachers or college of education faculty members. As Cochran-Smith (2009) points out, ‘‘relying almost entirely on student test scores to evaluate teacher preparation is highly problematic’’ (p. 14). Many creative educators are working in schools of all levels to effect needed changes in a vacuum, without the support of ‘‘revised state policies for recruiting, preparing, developing, and supporting teachers and school leaders to reflect the human capital practices of top-performing nations and states around the world’’ (National Governors Association, Council of Chief State School Officers, & Achieve, 2008, p. 27). With these thoughts in mind, we further discuss the challenges of our pedestrian policies through the lens of teacher preparation.
INTERNATIONAL PERSPECTIVES ON ACCREDITATION

Before we can begin to address the rapid-change global education needs of the 21st century, we must first drill down into the concept of teaching as a global profession (Kissock, 2002; Zhao, 2009), the curricular issues associated with these demands, the impact of these new demands on educator preparation faculty, the training and development resources needed to address these demands, and flawed attitudes toward the changes. Teaching as a profession has been debated for decades in the United States without resolution, perhaps because we have limited the debate to the United States. It is remarkable that although we have national accreditation models, we lack a commonly accepted definition for the core of our work – quality teaching, quality schools, and essential skills for 21st-century learners. Each state has its own definition of a ‘‘highly qualified teacher.’’ Because the profession lacks the strength of common definitions, most state definitions were politically inspired responses to the NCLB legislation (Earley, 2009). Likewise, as a profession, we have failed to respond to the reality of challenges such as an internationally mobile teaching force that has resulted from teacher shortages in some countries, the need to socialize and train youth to be members of an interconnected global economy, the rapid development and use of new technologies, and the realization that we are preparing youth for jobs not yet invented (Zhao, 2009). Economists predict that today’s learners will have more than 10 jobs by age 38 (International Networking for Educational Transformation [iNet], 2009). In 2008, Bills et al. completed an international survey of education policy related to quality in initial teacher education. Although documentation of policies in much of the world is difficult to locate, the authors were able to identify three areas of agreement across countries: partnerships between teacher preparation units and local schools are critical; quality assessment is important but is not always clearly defined; and the United States is somewhat alone in relying on the concept of the unit’s conceptual framework as a key benchmark of quality. An additional challenge in the United States is the lack of centralization of education policy. As Earley (2009) notes, ‘‘The teacher education system is in the unenviable position of being accountable not just to college and university policies, but also to state and federal K-12 policies even if there are conflicts between them’’ (p. 92). The strength of the teaching profession may be enhanced if we can agree on a global body of knowledge related to teaching and learning to meet the demands of the flattened world, a definition of ‘‘quality teaching,’’
international certification agreements, and a commitment to provide a quality teacher and education for all children worldwide (Kissock, 2002; Zhao, 2009). Absent these, we will fall short of meeting one commonly held purpose of education – socializing youth into productive (global) citizens. More importantly, when we leave our professional decisions, policies, and procedures to those less qualified, we relinquish our rights and responsibilities as a profession. The idea of a global profession is not new. Many countries have moved toward the professionalization of teachers by requiring common standards for entry, such as a Bachelor of Education or similar requirement, as defined by university faculties of education (Young, Hall, & Clarke, 2007). In 2002, the American Association of Colleges for Teacher Education (AACTE) Committee on Global and International Teacher Education developed a white paper on the topic – An International Perspective: Professionalization through Globalization. According to the Committee (2002, p. 3):

By expanding the emphasis on education beyond local and regional borders, credence is given to the concept that educators are members of a global profession. This allows the pedagogical body of knowledge to be professionalized – that is, emphasized beyond the model of local knowledge and application currently enforced by regulating agencies.
The AACTE Committee gave multiple examples of existing globally shared influences – standardized testing, Montessori kindergartens, site-based management, multiple instruction approaches, and so on – and called for specific national standards revisions to address globalization of the profession. We can no longer afford to think parochially, as though each institution or country works in isolation and as a part of a local trade. Instead, we must begin to think globally as a profession with common beliefs and education rights for all children (Berry, 2008; Zhao, 2009).
ISSUES AND CHALLENGES

An international profession, or at the very least a global response to rapid-change demands on education, requires a commonly held set of knowledge, skills, and dispositions related to teaching and learning. Before the authors’ extensive involvement in global educator preparation and work in schools over the past five years, they would have declared this an insurmountable obstacle. However, after multiple research trips and extended stays in China, England, and Wales, as well as extensive involvement in an International Learning Community and the International Networking for
Educational Transformation (iNet), the authors now believe that there are more commonalities than differences in programs, beliefs, and expectations. In focus groups, both U.S. and U.K. colleagues spoke with some envy about the Chinese students’ high scores on standardized tests, whereas the Chinese groups spoke with some envy about the U.S. and U.K. students’ creative and communication abilities. A model that reflects action toward addressing the need for international educator training and school models is the Xiwai International School in Shanghai. The school was founded to fully integrate western and Chinese cultures – socially and educationally – with the goal of combining the high academic standards expected of Chinese students with the creativity and communication skills that result from western teaching strategies. Imagine our surprise when we walked into the Shanghai school and heard ‘‘southern voices,’’ saw bulletin boards decorated for Halloween, and heard children singing western songs. The three teachers from Georgia on staff there, along with other American and European teachers, bring western teaching strategies and cultural, language, and educational experiences to the organization. Zhao (2009) describes a similar program – an International Kindergarten in Beijing. The school embraces bilingualism, biculturalism, and duo-pedagogy, with students dividing their time between a child-centered Western pedagogical approach and a teacher/knowledge-centered Eastern approach. The increasing exchange of teachers and teacher educators among countries is further evidence of institution-based actions to address the need for shared expertise and commitments (Berry, 2008; Kissock, 2002; Zhao, 2009). While author Chance lived and worked in China for two months on a Fulbright Specialist Exchange, 36 classroom teachers and administrators from U.K. and U.S. schools and universities were present at Central China Normal University, attending education conferences or participating in other professional in-school and teacher education experiences. These were in addition to the two education professionals and multiple students who were there for long-term work and study. Experts external to education are entering this rapid-change global debate. At the iNet (2009) annual international conference in Birmingham, England, three international figures representing the profession’s external and internal communities – Thomas Friedman, author; Yong Zhao, professor, Michigan State University; and Robert Compton, businessman and entrepreneur – agreed on the concept of the ‘‘death of distance’’ and its impact on education’s capacity to meet rapid-change global economic and social demands (iNet, 2009). Although they did not agree on the models of education needed to meet these demands, several student outcomes emerged
from their multiple presentations and debates over the course of the conference. These common global student outcomes fell into the following categories (iNet, 2009):
- Content competence (but not at the expense of the critical soft skills)
- High aspirations/goals (teachers and students)
- Sense of community
- Leadership skills
- Enterprising minds
- Creativity
- Passion for learning
- Ability to grow with change
- Confidence
- Global perspective
- Risk takers/reach for seemingly unattainable goals
If the profession accepts the global student outcomes suggested by these experts – consistent with those of the broader community – accreditation bodies, policy makers, and educator preparation faculty must respond. Although the focus on content would remain and increase, the ‘‘soft skills’’ that are difficult to assess would have a more prominent position at all levels – preschool through graduate programs (National Governors Association, Council of Chief State School Officers, & Achieve, 2008; Zhao, 2009). Accreditation bodies, policy makers, and school and university faculties would be called upon to change expectations and practices related to teaching, learning, and assessment strategies (Berry, 2008; Cibulka, 2009; Kissock, 2002; Wise & Leibbrand, 2000). To meet the challenges of the global profession and rapid-change demands on education, two preconditions are needed – a collaborative voice that includes both school and university faculty, and professional development models to support schools and universities during the transition (Hallinan & Khmelkov, 2001). The authors’ research in China, the United States, England, and Wales suggests that educators often feel they are not heard. Each group (teachers, principals, teacher preparation faculty) felt that one of the other groups ‘‘had the ear of policy makers,’’ but their group was not being heard. There are successful models that deserve our attention. Groundbreaking work in educational policy was carried out in Scotland during the 1980s, which to some extent gave the lie to the notion that somehow education policy was formed in a more democratic way in Scotland than elsewhere (Humes, 1986; McPherson & Raab, 1998). This policy development model included all the players – representatives from accreditation and
approval bodies, unions, national policy makers, and, most importantly, a very strong voice from school and college of education leaders (Furlong, Cochran-Smith, & Brennan, 2009). With this strong voice and leadership, initial teacher preparation became firmly embedded within higher education in Scotland. The alternate routes experienced by the United States, England, and other western countries did not occur (Furlong et al., 2009). This collaborative voice and leadership fostered the needed change without politically initiated ‘‘diversification of routes of entry’’ based on the belief that teacher education faculty are reluctant to change. The need for rapid change in K-12 schools has resulted in alternative models in the United States, England, and Wales. These new models allow schools to disregard many policies and procedures mandated for traditional schools. These ‘‘chartered’’ or ‘‘autonomous’’ schools can respond with flexibility to ‘‘disruptive innovations’’ such as changing demographics and new technology (Christensen, Horn, & Johnson, 2008). These new architectures for learning give schools the freedom to step outside the boundaries of mandates, accreditation guidelines, and state minimum standards, which allows them to quickly match school types to students’ needs and circumstances. Unlike in the United States and the United Kingdom, Chinese schools and educator preparation programs operate as site-based institutions; thus, they have had no need to create alternative or ‘‘chartered’’ models. Chinese teachers and teacher education faculty described almost complete autonomy: ‘‘We don’t know what the policies are. We develop our curriculum based on the needs of our schools and the expertise of our faculty. Faculty and campus approval is all we need to change our programs.’’ Although school and university changes must be submitted to China’s education governance body, the Ministry of Education, the faculty felt this was just a matter of routine approval. In the United States and the United Kingdom, much of the policy development happens under the auspices of standards developed and implemented by NCATE in the United States or by the Office for Standards in Education, Children’s Services and Skills (Ofsted) in the United Kingdom. The concept of an educator preparation unit setting its own course, based on a locally identified conceptual framework or school need, appears to be a powerful force supporting diversity and innovation in both models. However, Bills et al. (2008) found this belief to be in error: NCATE expectations instead lead to a ‘‘compliance culture,’’ ‘‘as much about coherence of curriculum, pedagogy, assessment and evaluation … as it is about giving providers the freedom to set their own priorities’’ (p. 18).
Global policy research findings suggest that educators’ perceived reluctance to change might result from a lack of voice rather than resistance to change. The U.K. and U.S. teacher preparation faculty expressed concerns about constraints resulting from policy, which is often tied to accreditation procedures and practices. Educator preparation faculty in China did not express these concerns. They expressed an openness to change, and because approval is limited to program approval only, they ‘‘owned the faculty-led changes’’ and were able to make the changes quickly. However, all three groups – educator preparation faculty, teachers, and school leaders – expressed concern that more vertical communication systems must be developed to ensure that the rapidly changing needs of elementary and secondary school students are being met. In all three groups, preschool through university faculty expressed concern that neither the policy nor the resources needed to meet the professional development demands of a fast-changing curriculum are being provided. All expressed a need for additional resources to develop the curriculum and teaching strategies required for these significant changes. Expressed needs included workshops, resident experts in schools, funds to develop collaborative international partnerships, and, most important, a model of continuing rather than ‘‘one-off’’ professional development opportunities. Another critical concern that requires the voice of educators at all levels is the definition of quality teachers, quality schools, and essential skills for 21st-century learners (Berry, 2008; Cibulka, 2009; Friedman, 2005; Hallinan & Khmelkov, 2001; Zhao, 2009). There is a growing concern that the alternative routes into the profession and some accreditation and approval criteria are leading us to develop ‘‘teachers as technicians’’ rather than teachers as reflective practitioners. The rationales for these ‘‘chartered’’ routes seemed legitimate and reasonable at one time: a need for smaller class sizes to address national testing schemes, teacher shortages as we try to educate more children worldwide, and a growing number of individuals choosing to teach as a second career (Berry, 2008). These ‘‘good enough teachers,’’ as they were described by one American political leader, are trained to follow directions, to prepare students for upcoming tests, and to serve as low-cost technicians in the classroom (Furlong et al., 2009). According to a principal in China, ‘‘The government is trying to help teacher trainees, but it is still very score based – there’s no test for social skills. Principals have little power to influence university ‘teacher’ curriculum, so they need to be retrained when they come to this school. To change teachers, start from the heart. They spend too much time on science, chemistry; they don’t have time to think for themselves.’’ Educators at all levels included in the focus
groups expressed varying levels of concern about these alternative ‘‘chartered’’ route models. As a profession, we need the best teachers in this time of rapid change, when human capital is to be our most valued resource for the future of our nations and the world (Zhao, 2009). These rapid-change challenges call for a movement toward learner-centered reform and away from focusing efforts on preparing teachers to use a scripted curriculum and what noneducators see as ‘‘scientifically based’’ (but in reality ideologically driven) teaching methods (Allington, 2002). Educators at all levels support the call to examine new models, including learner-centered models, with a focus on 21st-century essential skills, and for highly trained professionals in every classroom (Berry, 2008; Zhao, 2009). One summary of the knowledge, skills, and dispositions required of a highly qualified teaching professional describes

[one] who critically observes, assesses, and acts through inclusive pedagogies and practices with an understanding of student and child development, learning methodologies, subject knowledge, and knowledge of pedagogies. (Collins & Tierney, 2006)
CONCLUSION

The issues and challenges of a rapidly changing, flattening world have placed major new demands on education at all levels and have affected teacher education programs and educators. While educators look to others for guidance in making curricular decisions, U.S. and U.K. programs ultimately make choices within the context of the policies, rules, or standards required by licensing and accreditation bodies. Education policy and practice must reflect and ensure that educators are prepared to teach the world’s 21st-century children (Kissock, 2002). Zhao’s lifetime of experience with education in the United States and China led him to caution western educators and policy makers that ‘‘what is needed is a diversity of talents rather than individuals with the same competencies’’ (2009, p. 158). Given that the process of policy revision requires a significant amount of time and effort, it is not surprising that policy change lags behind the realities of educational experiences and our changing society. As part of a global profession, faculty at all levels must not work in isolation. Educators must have both the opportunity and the responsibility to learn from each other and benefit from the interactions of different societies as they address the ‘‘disruptive innovations’’ facing the profession (Christensen et al., 2008; Kissock, 2002; Zhao, 2009). After reflecting on the observations and the
interviews from our project and revisiting countless articles and books on 21st-century learning, we assert that the ‘‘culture of schooling’’ is the gap between the 20th and the 21st centuries that must be addressed. To prepare teachers and leaders to deal with the ‘‘disruptive innovations’’ impacting our schools, we suggest that the following critical areas must be addressed through flexible, easily changed accreditation policy: global awareness, digital literacy, high content standards, self-confidence, and creativity. Globally, productive 21st-century citizens will require both the soft skills that are not easily measured and the skills presently assessed. Without flexible policies governing educator preparation programs, we predict that ‘‘chartered’’ programs will experience more rapid growth and that ‘‘good-enough’’ technicians in the classrooms with our children may become the norm. As we have examined questions related to globalizing teacher preparation and organizational change to meet the needs of our rapidly changing and flattening world, key questions include the following: How do we begin to look past our local and state political expectations and beliefs, and accreditation and program approval expectations, to address the disruptive innovations and global needs facing all levels of the profession? Who will, or can, take the lead in defining quality teachers, essential skills for the 21st century, and global accountability in educator preparation? As politicians, the media, and others criticize schools and educator preparation programs around the world, we must accept responsibility for the symptoms to which they refer. However, as proud and dedicated professionals, we should respectfully ask for assistance in addressing the root cause of the symptoms. Most schools and colleges of education are filled with dedicated professionals who are doing exactly what has been mandated. In fact, they are too busy trying to survive today’s mandates to give much thought to the future. One teacher in Georgia indicated, ‘‘Not getting paid for professional development has made it more difficult as the years go on and low teacher morale is not good for our kids’’; a principal from another school district indicated, ‘‘To keep up, we still must continue professional development in spite of budget cuts. In-house training has been effective training for now.’’ As a profession, we must examine where we are and where we need to be to meet the needs of this rapid-change world. Then, we must take back the decisions related to schools and educator preparation. We can begin by defining quality teaching, quality schools, and the essential skills for 21st-century teachers and students. Additionally, we must take the lead in creating new models that meet those needs, drawing on internal and external experts like those referenced throughout this chapter. The school and educator preparation models will vary but will likely be consumer-led;
acknowledge and accept competition; enable social mobility, renewal, and multicultural heterogeneity; and provide a global perspective and full democratic participatory access. A vision for education in the 21st century (and beyond) requires a reconsideration of the purpose of education and the values by which it will be driven, the ways it is funded, and the demands of the system for social capital and human capital. Our present models often confuse the ‘‘custodial,’’ ‘‘sorting,’’ and ‘‘developmental’’ roles expected of education at all levels. Although many politicians and others want education ‘‘like I remember,’’ those models no longer meet the needs of our changing world (Berry, 2008; Friedman, 2005; Hallinan & Khmelkov, 2001; Zhao, 2009). As a society and profession, we are experts in the past, have variable expertise in the present, and are usually inexpert about the future. Leaders in the profession must educate political and accreditation bodies about this fact and, most importantly, provide models that will meet 21st-century needs. Without such models, we will continue to be asked to respond to ‘‘older’’ models that worked in the past. Can we, and you, give up nostalgic notions of education and provide assistance to school and educator preparation professionals to move toward new rapid-change models? The next generations will experience global interactions and other ‘‘disruptive innovations’’ that most of us cannot imagine. They will need to understand and accept diversity and change at deep conceptual levels. Economic success for our country will demand it. This is our role, and that of accreditation agencies internationally, for the future of our children and the profession.
REFERENCES

Allington, R. (2002). Big brother and the national reading curriculum: How ideology trumped evidence. Portsmouth, NH: Heinemann.
Berry, B. (2008). The teachers of 2030: Creating a student-centered profession for the 21st century. A TeacherSolutions 2030 product, CTQ (The Center for Teaching Quality) and TLN (Teacher Leaders Network). Available at http://206.130.109.205/node/4307
Berry, M. (2002). Healthy school environment and enhanced educational performance: The case of Charles Young elementary school. Washington, DC: Carpet and Rug Institute.
Bills, L., Briggs, M., Browne, A., Gillespie, J., Gordon, J., Husbands, C., Phillips, E., Still, C., & Swatton, P. (2008). International perspectives on quality initial teacher education: An exploratory review of selected international documentation on statutory requirements and quality assurance. In: Research Evidence in Education Library. London: EPPI-Centre, Social Science Research Unit, Institute of Education, University of London. Available at http://eppi.ioe.ac.uk/cms/Default.aspx?tabid=2377
Cash, C. (1993). A study of the relationship between school building condition and student achievement and behavior. Doctoral thesis, Virginia Polytechnic Institute and State University, Blacksburg, VA.
Christensen, C., Horn, M., & Johnson, C. (2008). Disrupting class: How disruptive innovation will change the way the world learns. New York: McGraw Hill.
Cibulka, J. (2009). Improving relevance, evidence and performance in teacher preparation. The Education Digest, 75(2), 44–49.
Cochran-Smith, M. (2009). The new teacher education in the United States: Directions forward. In: J. Furlong, M. Cochran-Smith & M. Brennan (Eds), Policy and politics in teacher education (pp. 9–20). London and New York: Routledge.
Collins, J., & Tierney, T. (2006). Good to great and the social sector. New York: Bridgestar.
Department of Education (DoE). (2002). No Child Left Behind Act. Available at http://www.ed.gov/nclb/landing.jhtml. Retrieved on August 21, 2009.
Earley, P. (2009). Instrumentalism and teacher education in the United States: An analysis of two national reports. In: J. Furlong, M. Cochran-Smith & M. Brennan (Eds), Policy and politics in teacher education (pp. 83–94). London and New York: Routledge.
Earthman, G., Cash, C., & Van Berkum, D. (1995). A state-wide study of student achievement and behaviour and school building condition. In: G. Earthman (Ed.), The impact of school building conditions, student achievement, and behaviour in OECD. Paper presented at the Council of Educational Facility Planners International Conference, New York, NY.
Earthman, G. I. (2004). Prioritization of 31 criteria for school building adequacy. Baltimore, MD: American Civil Liberties Union Foundation of Maryland.
Edwards, M. (1992). Building conditions, parental involvement and student achievement in the D.C. public school system. Washington, DC: Georgetown University.
Friedman, T. (2005). The world is flat: Moving from the information age to the conceptual age. New York: Farrar, Straus and Giroux.
Furlong, J., Cochran-Smith, M., & Brennan, M. (2009). Policy and politics in teacher education: International perspectives. London and New York: Routledge.
Hallinan, M. T., & Khmelkov, V. T. (2001). Recent developments in teacher education in the United States of America. Journal of Education for Teaching, 27(2), 175–185.
Howard, B., & Leavitt, H. B. (1992). Issues and problems in teacher education: An international handbook. New York: Greenwood Press.
Humes, W. (1986). The leadership class in Scottish education. Edinburgh: John Donald.
International Networking for Educational Transformation (iNet). (2009). Annual conference. Available at http://www.ssat-inet.net/online_conferences.aspx
Kissock, C. (2002). An international perspective: Professionalization through globalization. White paper for the AACTE (American Association of Colleges for Teacher Education) Committee on Global and International Teacher Education. Available at http://ducis.jhfc.duke.edu/archives/globalchallenges/pdf/kissock.pdf
McPherson, R., & Raab, C. (1998). Governing education: A sociology of policy. Edinburgh: Edinburgh University Press.
National Governors Association, Council of Chief State School Officers, & Achieve. (2008). Benchmarking for success: Ensuring U.S. students receive a world-class education. Washington, DC: National Governors Association. Available at http://www.achieve.org/files/BenchmarkingforSuccess.pdf
PricewaterhouseCoopers. (2000). Building performance: An empirical assessment of the relationship between schools capital and pupil performance (DfES Research Report 242). London: Department for Education and Skills.
Russell, B. (2007). Bertrand Russell quotes. Available at http://www.quotationspage.com/quotes/Bertrand_Russell. Retrieved on December 14, 2009.
Stricherz, M. (2000). Bricks and mortarboards. Education Week, 20(14), 30–32.
Wagner, T. (2008). The global achievement gap. New York: Basic Books.
Wise, A., & Leibbrand, J. (2000). Standards and teacher quality: Entering the new millennium. Phi Delta Kappan, 81, 612–621.
Woolner, P., Hall, E., Higgins, S., McCaughey, C., & Wall, K. (2007). A sound foundation? What we know about the impact of environments on learning and the implications for building schools for the future. Oxford Review of Education, 33(1), 47–70.
Young, J., Hall, C., & Clarke, A. (2007). Challenges to university autonomy in initial teacher education programmes: The cases of England, Manitoba and British Columbia. Teaching and Teacher Education, 23(1), 81–93.
Zhao, Y. (2009). Catching up or leading the way: American education in the age of globalization. Alexandria, VA: ASCD.
CHAPTER 6

LIVING WITH ACCREDITATION: REALIZATIONS AND FRUSTRATIONS OF ONE SMALL UNIVERSITY*

Judith A. Neufeld

*This chapter is based on the experiences of the author and does not necessarily represent the views or opinions of the institution, individuals, or agencies identified in this work.

Tensions in Teacher Preparation: Accountability, Assessment, and Accreditation
Advances in Research on Teaching, Volume 12, 87–103
Copyright © 2010 by Emerald Group Publishing Limited
All rights of reproduction in any form reserved
ISSN: 1479-3687/doi:10.1108/S1479-3687(2010)0000012009

ABSTRACT

Meeting accreditation requirements provides challenges for any size institution. One small, state-supported university found three solutions to problems associated with gaining accreditation: the creation of an accreditation ‘‘data pantry’’; the use of common technological formats and technologically savvy faculty members to ‘‘work smarter, not harder’’ with accreditation tasks and data; and the participation of faculty members in new ways to revise curriculum, forge stronger relationships among faculty from different departments, and generate strong learning experiences for students. Two frustrations regarding the accreditation process remained: university responsibility for the performance of former education students long after they leave campus, and competency testing that forces schools into reactionary leadership that may place gaining accreditation ahead of meeting students’ needs. Finally, the milieu of
‘‘teststeria’’ has a deleterious effect on faculty innovation, pulls faculty focus away from the students whom they desire to serve, and decreases the value of the teaching profession.
As much as we want to believe that accreditation is an authentic assessment of an academic unit, the reality of the situation is probably somewhat different. Participating in accreditation processes can be trying. Institutions are placed in the awkward position of showing that they meet all the requirements of various accreditation agencies while appearing to do so through their own preferred modus operandi. Meeting accreditation requirements is a challenge for any size institution. The university must establish the way in which records will be collected and kept related to various items including student information, student performance data, faculty qualifications, decision-making processes, university resources – the list goes on and on. As the faculty and staff members of Lander University have undergone the processes to earn accreditation from various organizations, three realizations and two frustrations related to the accreditation process have become clear. Lander University is a small state-supported university in Greenwood, South Carolina. The university serves roughly 3,000 students, approximately 550 of whom are education majors. Education is the second largest major in the university. Many of Lander’s students are first-generation college students (41%). Most education majors were born and raised in the Piedmont region of South Carolina, where Lander University is located, and often choose to live there after graduation. Lander University is also located within Greenwood County, which has relatively high unemployment and adult illiteracy rates. In 2009, Greenwood County’s unemployment rate was 14%, compared to an overall unemployment rate of 9.4% in the United States (United States Census Bureau, 2010). Unemployment has increased markedly in this region of South Carolina in the last year. According to the United States Census Bureau (2010), in 2000, 14.2% of households in Greenwood County with school-aged children were categorized as at poverty level; in 2009, 22% of households with school-aged children in two of the three school districts in Greenwood County were categorized as having ‘‘poverty level’’ income, and 15% of households in the most affluent of the three districts reported poverty-level incomes. Greenwood County has an adult illiteracy rate of 16%, which is much higher than the national adult illiteracy rate of 1% (United States
Department of Education, 2010). According to United States 2000 census data (United States Census Bureau, 2010), the majority of Greenwood County residents are white (65%), a large number are Black or African American (32%), and a small number are Hispanic or Latino (3%). However, in the past five years, the Hispanic or Latino percentage of the population has grown dramatically, causing an influx of English as a Second Language learners in Greenwood County schools (United States Department of Education, 2010). The education level of adults who are 25 years and older in Greenwood County ranges from those with less than a ninth-grade education (10%) to those with graduate or professional degrees (6%). The presence of a university in the county significantly increases the number of adults holding graduate or professional degrees. Slightly more than half of the public school faculty members in the surrounding county are Lander University graduates. Lander University prepares students for initial teacher certification and master’s-level work in 12 areas. Three of these areas (early childhood, elementary education, and special education) are housed in the Department of Teacher Education (DTE) within the College of Education (COE). A fourth area, physical education, is housed in another department within the COE. The others (art education, English education, math education, and music education) are housed in two of Lander’s other three colleges. For purposes of National Council for Accreditation of Teacher Education (NCATE) accreditation, the dean of the COE is seen as the unit head for all education degrees because the majority of education majors graduate from programs within the DTE (early childhood, elementary education, and special education) and because the administration of teacher education degrees (e.g., recommendation for certification, gathering data on program completers, filing governmental reports) works more smoothly with this approach. Grappling with expectations of the Southern Association of Colleges and Schools (SACS), two state-level governing bodies [the South Carolina Commission on Higher Education (CHE) and the South Carolina Department of Education (SCDOE)], local public schools, faculty members across three colleges, and NCATE – whose approval must be attained by all institutions offering teacher certification programs in South Carolina – has been a challenge for the eleven full-time faculty members in the DTE. Added to this, the COE has had four deans with quite diverse backgrounds (science education, exercise science, graphic art, and elementary education) over the course of the past six years. Faculty members have been given ample
opportunities to rethink accreditation and program assessments during the current six-year accreditation cycle.
THREE REALIZATIONS

As the DTE began to prepare for the current accreditation cycle, including an anticipated NCATE site visit in fall of 2012, the current dean of the COE realized that she needed a new paradigm of functionality. There had to be a better way to comfortably support the ongoing activities of the DTE while participating in a continuous assessment process. Although the DTE had many good things in place that allowed it to function well (e.g., collegial, well-qualified faculty members; supportive administrators), there was still some lack of clarity in the way faculty members viewed their roles in the accreditation process, the way the university viewed its role in the accreditation process, and the way in which the DTE could best position itself to efficiently complete accreditation processes. While pondering a better overall approach for more gracefully accomplishing accreditation, the current dean came to three realizations.
Realization 1 – Adopt the Pantry Metaphor

For years, various departments and offices at Lander University gathered information related to education majors. The Financial Aid office kept statistics on scholarship awards earned by education majors. The Student Affairs office had information on extra-curricular activities in which education majors participated. The Registrar’s Office had information about grades education majors earned in various key courses. However, information collected by various departments and offices across campus was not readily available to all who needed it. For example, similar information was needed to prepare a variety of accreditation reports [i.e., SACS self-studies, Specialty Program Areas (SPAs) reports, Professional Education Data System (PEDS) reports for the American Association of Colleges for Teacher Education (AACTE)]. Although data were gathered on each student moving through education-related programs of study at the university, there was no central ‘‘warehouse’’ for the data. Faculty and staff members who were creating reports or contemplating decisions about changes in programs had to search for data in a wide variety of sources, request reports or lists of data from other sources, or, in many cases, hand-enter information
provided on paper reports into electronic spreadsheets because the various digital storage systems could not interface effectively. Additionally, few faculty and staff members on campus knew who else on campus might have or might need similar information. For example, the Dean’s Administrative Assistant might be asked for similar data, such as PRAXIS II™ subscores, by three different faculty members who were creating reports for various reasons. Staff and faculty members wasted valuable time recording and locating accurate and usable data. In addition, the data requirements and report formats of various agencies constantly changed. In some years agencies might request data related to student gender. In other years agencies might request data related to candidate ACT™ or SAT™ scores. Faculty and staff members were constantly scrambling to find appropriate data. To further compound this issue, students entered Lander University with a social security number, which was translated into an ‘‘L’’ number for student identification purposes. Lander University used this ‘‘L’’ number in various ways, such as tracking students as they moved through their programs of study. Although professors had access to student ‘‘L’’ numbers, faculty members could not gain access to the students’ social security numbers. Unfortunately, many reports required DTE faculty members to report information on individual students that could only be confirmed by matching social security numbers to student names. Another problem faced by faculty and staff members completing reports was the accuracy of information regarding the race of the candidates. Lack of available official information about race led to interesting discussions among faculty members such as, ‘‘Is she Latina? I thought she was from Indonesia!’’ Racial data reported by the DTE needed to match that self-reported by the students and needed to be available for use by report writers. Clearly, the lack of easy access to data and the poor dissemination of accurate data were significant problems. How might the DTE address these issues? As the dean pondered this question, fall turned to winter. When she began stocking her home with food in preparation to entertain numerous relatives over the holidays, she found one answer to the problem. Thinking of her own well-stocked kitchen, she concluded, ‘‘We need a data pantry!’’ In preparation for relatives to visit for several days over the holidays, the dean stocked her pantry with as many different types of food and supplies as she thought she might need, because stores were not reliably open during the holidays and she did not know which of her relatives had become a vegan, who might be diabetic, or who was avoiding carbohydrates. With a well-stocked pantry, all her guests could be easily accommodated. Good household pantries are
identified by several characteristics. Good pantries contain ample supplies and include staples (and a few special items). Good pantries are also readily accessible to all who need access, well-organized, and well-maintained. The DTE needed a good data pantry. The DTE needed a pantry with ample information about education majors. If enough accurate information was available, then questions could be more accurately and easily answered when the CHE wanted to know how many women were in the program during a certain academic year, or when SACS wanted to know the ways in which special education coursework supported Lander University’s goals, or when the SCDOE wanted to know how many first-generation college students were currently education majors at Lander. This meant that all kinds of information would be needed, both information currently requested and information that might prove helpful in the future. Thus, the DTE pantry would need to contain staples (information most agencies wanted to know, such as an education major’s certification area) and special items (information which was not required for current accreditation processes, but which might be important later, such as the number of education majors who completed all coursework at Lander versus those who transferred credit hours from other institutions). The DTE pantry would have to be readily accessible to all those who might need access. Efficient use of information by various faculty and staff members across the university would require easy access to the pantry, so that they could easily add or access data sets. If the pantry was difficult to reach or if access required several levels of authorization beyond some type of initial vetting, then the pantry would not function well. The data pantry would need to be well-organized. Just as a good, well-organized household pantry allows users to easily shelve or retrieve supplies, a good data pantry should allow users to easily post and retrieve data in a way that is both useful and usable for all. This means that someone must be responsible for maintaining the pantry, because some users are more inclined than others to ‘‘shelve’’ items correctly. Just as the supplies in a household pantry have to be rotated to be used in a timely fashion, old data also must be rotated or retrieved and stored in a non-cumbersome way. A similarly useful metaphor for data maintenance and storage might be that of a well-maintained kindergarten classroom where all items are labeled, each item has a space, and all members of the classroom are expected to help with maintenance. A good data pantry would have well-labeled areas of data, a space for each type of data, and a person who helped others with maintenance. The DTE would benefit greatly from a ready source of reliable and accessible information, but it would not be alone. Non-DTE entities on
campus would also benefit, because a wide variety of accurate information about students and programs would be helpful in a number of ways. For example, Student Services could request and obtain appropriate information related to education club membership recruitment or could identify students who might benefit from various tutoring opportunities. Grant writers could easily gain access to necessary information in this model of data storage. Those preparing university-wide reports could save time and energy through access to a common information pantry. Once the need for a data pantry had been established, the question of ‘‘How do we stock and maintain a pantry?’’ needed to be addressed in a way that did not make more work for those already involved in assessment and accreditation processes. The need for a well-stocked, easily accessible ‘‘pantry’’ of information led to a second revelation regarding accreditation and program assessment.

Realization 2 – Work Smarter Not Harder

The DTE, as well as Lander University, was blessed with many hard-working, highly committed faculty members. Faculty members worked hard to provide high-quality learning opportunities for students in university classrooms and in clinical sites. Most were open to and embraced change. Rarely did anyone say ‘‘but we’ve always done it that way!’’ Faculty members were not averse to hard work and were willing to continue to revise and improve programs. However, there was some sense that these efforts were not appreciated or supported university-wide. For example, technology systems supported by the university provided minimal support for report writing. Finally, no single person on campus knew who needed what particular piece(s) of information to complete program reviews, file required governmental reports, or create departmental reports. Fortunately, circumstances fell into place to address these issues and help the DTE to work smarter. In August 2008, there was a change in the Vice President for Academic Affairs (VPAA) position at Lander. A former dean of the COE – who appreciated the necessity of collecting and disseminating all types of aggregated and disaggregated data – was chosen to fill the VPAA position. Also, during the fall of 2008, the DTE hired a new faculty member who previously had been the instructional technology coordinator at another large institution. In addition, the university shifted an existing staff member to a newly created position, Director of Assessment and Institutional Effectiveness. This position was located in the VPAA’s office and provided
Realization 2 – Work Smarter Not Harder The DTE, as well as Lander University, was blessed with many hard-working highly committed faculty members. Faculty members worked hard to provide high-quality learning opportunities for students in university classrooms and in clinical sites. Most were open to and embraced change. Rarely did anyone say ‘‘but we’ve always done it that way!’’ Faculty members were not adverse to hard work and were willing to continue to revise and improve programs. However, there was some sense that these efforts were not appreciated or supported university-wide. For example, technology systems supported by the university provided minimal support for report writing. Finally, no single person on campus knew who needed what particular piece(s) of information to complete program reviews, file required governmental reports, or create departmental reports. Fortunately, circumstances fell into place to address these issues and help the DTE to work smarter. In August 2008, there was a change in the Vice President for Academic Affairs (VPAA) position at Lander. A former dean of the COE – who appreciated the necessity of collecting and disseminating all types of aggregated and disaggregated data – was chosen to fill the VPAA position. Also, during the fall of 2008, the DTE hired a new faculty member who previously had been the instructional technology coordinator at another large institution. In addition, the university shifted an existing staff member to a newly created position, Director of Assessment and Institutional Effectiveness. This position was located in the VPAA’s office and provided
an opportunity for one person to have a good understanding of the assessment needs of all departments, academic and otherwise, across campus. This ‘‘tri-mergence’’ of events helped those participating in university-wide assessment processes to address issues of technology support and provided a knowledgeable ‘‘assessment’’ point person. These three personnel changes provided a foundation for working smarter and quickly brought about positive results across the institution. Although the university had several helpful platforms for collecting data (i.e., BANNER™, LiveText™, and a home-grown Bearcat system), the new faculty member soon realized that these platforms did not communicate well with one another. For example, when faculty members wanted information about students to prepare reports (e.g., social security numbers, Lander identification ‘‘L’’ numbers, gender, place of birth), only hard copy reports were provided by the institution. Often a faculty or staff member had to enter this information into a new database or spreadsheet to make it usable. This waste of time also created opportunity for error, given the increased number of human interactions in the data entry process. The new faculty member began talking with staff members, articulating the DTE’s needs in terms of data gathering, storage, and access. He asked questions about the various technology systems and the ways in which surveys and databases could be designed for information to be uploaded directly from one system to another. Because of the faculty member’s inquiries and efforts, new electronic forms for student evaluations are now posted in a way that allows data to be more easily transferred from one data system (e.g., BANNER™) to another (e.g., Blackboard™). For example, the DTE launched a newly designed Dispositions and Summative Evaluation form as an online survey that can now be completed by students (self-reflection), cooperating classroom teachers, and university supervisors at the end of each of four clinical experiences. Information about candidates (e.g., name, ‘‘L’’ number, race, gender, social security number) is now automatically uploaded. Thus, at the end of the semester, meaningful data about candidates’ dispositions and an evaluation of their clinical performance are easily made available to all program coordinators and SPA report writers. Meanwhile, the VPAA’s office hired a technology support person to actively sustain the technology needs of academic affairs (e.g., gathering and storing data related to faculty members, preparing and storing university reports to SACS). This meant that three members of the VPAA’s office (the VPAA, the Director of Assessment and Institutional Effectiveness, and the technology support person) were knowledgeable and conversant about issues related to ongoing program assessment. Realizing that the faculty
members across the COE, the College of Arts and Humanities, and the College of Science and Mathematics needed to store and access common pools of data, the technology support person began to create an appropriate forum. His solution was relatively simple: Microsoft Office Excel spreadsheets housed and maintained on a Microsoft Office SharePoint site. Among the many benefits of having a faculty member conversant in ‘‘technology’’ partnered with a participating member of the VPAA’s staff was that other technology support personnel began to better understand the technology needs of each of the teacher preparation programs. Questions were asked such as ‘‘Will LiveText™ be able to talk with BANNER™ in this way?’’ or ‘‘Can BANNER™ upload information to Blackboard™ in this area?’’ Ways were sought for data technology systems to effectively interface. When new technology programs were considered, sophisticated questions related to the ways in which the new technology could or could not meaningfully interface with existing technology systems became increasingly commonplace. Another benefit of this coordination was a better university-wide understanding of the benefits of collaboration. Faculty and staff members were more aware of and responsive to opportunities for working well together. This sense of collaboration has become so evident that candidates interviewing for positions have mentioned the level of collegiality at the university. Willingness both to be proactive in dealing with issues of time and data management and to approach experts outside the COE has led to new ways for COE and non-COE faculty and staff members to work smarter.
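The chapter does not describe the mechanics of the spreadsheet consolidation, but a minimal sketch of the general idea – merging exports from systems that do not interface into a single shared ‘‘pantry’’ workbook keyed on the student ‘‘L’’ number – might look like the following. The file names, column names, and the use of Python with pandas are assumptions for illustration only, not a description of Lander University’s actual implementation.

# A hypothetical sketch of a "data pantry": merging exports from systems
# that do not interface (e.g., a registrar export, a financial aid export,
# a testing export) into one shared workbook keyed on the student "L" number.
# All file names and column names below are invented for illustration.
import pandas as pd

registrar = pd.read_csv("registrar_export.csv")           # columns: L_number, major, gpa
financial_aid = pd.read_csv("financial_aid_export.csv")   # columns: L_number, scholarship
testing = pd.read_csv("praxis_export.csv")                # columns: L_number, praxis_ii_subscore

# Join the separate exports on the shared identifier so that report writers
# can pull everything from one place instead of re-keying paper reports.
pantry = (
    registrar
    .merge(financial_aid, on="L_number", how="outer")
    .merge(testing, on="L_number", how="outer")
)

# "Shelve" the combined data as a single workbook that could then be posted
# to a shared location such as a SharePoint document library.
pantry.to_excel("data_pantry.xlsx", index=False)

The design choice illustrated here is the same one the metaphor implies: one well-labeled, commonly accessible store of staples and special items, rather than many partial copies scattered across offices.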
Realization 3 – Respect Colleagues across Disciplines

Lander University is a relatively small institution. There are, however, very few structured opportunities for faculty members from different departments to meet with one another. The accreditation and program assessment process has built camaraderie and respect among faculty members across the university in several ways. The experience of the elementary education faculty is an informative example of this process. In the fall of 2006, DTE faculty members began to discuss changes in the various SPA requirements that were being proposed by the Association for Childhood Education International (ACEI). During this process, the elementary education faculty realized that one of the weaknesses of Lander University’s program was the way in which elementary education majors learned to effectively infuse the fine and performing arts in regular classroom
settings. According to the newly revised ACEI Standard 2.5, elementary education candidates were expected to ‘‘know, understand, and use – as appropriate to their own knowledge and skills – the content, functions, and achievements of dance, music, theater, and the several visual arts as primary media for communication, inquiry, and insight among elementary students’’ (ACEI, 2007). As part of the general education requirements for their degree, Lander University elementary education majors completed six hours of fine or performing arts content in two of four areas: art, dance, music, and theatre. At the time, elementary education majors also took one three-credit art methods course and one three-credit music methods course, both of which were housed in the College of Arts and Humanities. After some discussion, elementary faculty members determined that the array of coursework needed to change if the revised ACEI standard was to be met. This was not the only issue of concern for elementary education faculty members. They were concurrently dealing with a mandate from university administrators to reduce the number of credit hours in the elementary education program of study from 133.5 to 120. When both issues came to light, faculty members sought help from the Dean of the COE to facilitate discussions related to necessary changes in the program of study. In December 2006, the dean of the COE, the chair of the DTE, and the elementary education program coordinator met with the dean of the College of Arts and Humanities to raise concerns about proposed changes in coursework and the impact of those changes on faculty members housed in the College of Arts and Humanities. This meeting was not without its tense moments. The dean of the College of Arts and Humanities took the revised ACEI requirements back to the fine and performing arts faculty members to see what they might recommend. The fine and performing arts faculty’s recommendation was brilliant. The art education coordinator (who later became Interim Dean of the COE) facilitated a discussion among dance, theatre, art, and music faculty members who envisioned a new paradigm for fine and performing arts coursework. In this new structure, students would take four one-credit hour method courses in the fine and performing arts. Each of these courses would be linked to one of the elementary education courses (Table 1). In the proposed changes to the program, the elementary education and fine or performing arts faculty members would teach two linked courses, working together to help elementary majors make connections between the two content areas. For instance, elementary education majors would take math methods during the same semester as dance methods. The professors of
Table 1. Configuration for Elementary Education Majors' Fine and Performing Arts Methods Coursework.

Semester in Program of Study    Fine or Performing Arts Course      Linked Elementary Methods Course
Junior year, semester 1         Theatre methods (1 credit hour)     Language arts methods (3 credit hours)
Junior year, semester 2         Dance methods (1 credit hour)       Math methods (3 credit hours)
Senior year, semester 1         Music methods (1 credit hour)       Science methods (3 credit hours)
Senior year, semester 1         Art methods (1 credit hour)         Social studies methods (3 credit hours)
The professors of these two linked courses would collaborate to help candidates learn methods of teaching math and dance in ways that supported learning in both discipline areas.

The original implementation plan was to require candidates to present lessons featuring the fine or performing arts during their senior-level clinical experiences in area public schools. Surprisingly, junior-level candidates, excited about these learning experiences, spontaneously began to include interdisciplinary lessons when teaching in their public school placement classrooms. For example, one candidate developed and delivered a lesson in which she used the Macarena (dance) to teach her fifth-grade students about mathematical translations. The students then developed dances that included translations at high-, medium-, and low-level body positions. Candidates regularly developed cross-disciplinary learning experiences such as this one, and they discovered that classroom management was much easier when they taught in this way, because all pupils were actively engaged. Candidates have since begun to develop integrated learning experiences routinely, taking advantage of the rich possibilities that cross-disciplinary instruction affords.

As one might expect, the initial changes in coursework, course credit hours, and faculty load credit were neither easily nor smoothly made. Some fine and performing arts faculty members resented the loss of instructional time. Administrators in the College of Arts and Humanities found it difficult to juggle fine and performing arts instructors' teaching loads to accommodate a one-credit course. All faculty members bemoaned the additional,
unrewarded time needed to work with faculty members across campus. The elementary education program coordinator bore an additional administrative load for the linked coursework and took on added responsibilities as instructor of one of the four paired elementary education courses. Faculty had difficulty scheduling common work time to move the initiative forward, and no textbooks existed to support such an endeavor.

Nevertheless, three factors contributed to the success of this innovation in the elementary education program. First, three of the four elementary education faculty members involved in this program change had degrees in the fine or performing arts or had taught fine or performing arts subjects in public schools. This background gave them a vision for what might be accomplished through arts education. Second, the art education coordinator applied for and received a Lander Foundation Faculty Grant (nearly $5,000) that provided funding for professional development among faculty members. Finally, the art education coordinator became the interim dean of the COE in fall 2007. Her relationships with colleagues in the College of Arts and Humanities, her former academic home, forged new connections among colleagues in both colleges.

Concurrent participation in professional development over the first two years of implementation was particularly important to the success of this coursework change. Funding provided by the grant allowed participating professors to attend state-wide Alliance for Arts Education conferences, which led to three significant outcomes. First, attending the conferences helped elementary education professors with less arts teaching experience become more knowledgeable about the fine and performing arts in elementary education. Second, attending together gave fine and performing arts professors additional ideas about ways they could involve elementary education majors in arts education. Third, participating professors became better acquainted with one another and developed working relationships while traveling to and from the conferences and attending sessions together.

As the new coursework has moved into its third year of implementation, several benefits of this program change have become apparent. Most obviously, faculty members have had the opportunity to get to know faculty from other departments and colleges. Having become better acquainted, faculty members have learned more about the knowledge their peers possess and have developed a new level of respect for one another. Faculty members in both colleges have enjoyed the increased level of collegiality. In fact, in a recent meeting of the
eight faculty members involved in this interdisciplinary venture, the music professor stated, "This was the best thing to happen to my teaching career in years." The instructors of the paired courses have appreciated the opportunity to learn from one another, to build acquaintances, and to earn the mutual respect of colleagues across colleges.

Addressing the expectations of multiple accrediting bodies has provided many opportunities for growth at Lander University. Colleagues who would otherwise rarely meet have had opportunities to work closely together. Education majors have benefited from seeing collegiality modeled by professors, from the interdisciplinary approach to instruction, and from exposure to the expertise of multiple instructors. Several issues, however, continue to frustrate faculty members.
FRUSTRATIONS

Despite the many positive developments that have arisen from program evaluations (both self-evaluations and assessments conducted by outside groups), several persistent frustrations remain part of the accreditation process. In a culture that values data-driven decisions and observable, measurable goals determined by professional standards, concern for individual students may easily be lost. People who are five to eighteen years old are the bottom line, and attending to their well-being and education should be the focus of attention. Too often, however, the focus on students and on helping prepare each child for healthy adulthood shifts to the documentation needed to prove that teachers (both K-12 and college) are well prepared and that institutions of learning (K-16) are of high quality. The ease with which the focus of education shifts from people to proof has exacerbated several frustrations related to the accreditation process.
Frustration 1 – Playing Games that Cannot be Won

To meet current accreditation requirements, universities are responsible for the performance of their education majors long after the candidates leave campus. In South Carolina, school district personnel rate novice teachers as they move through a three-year provisional contract process. Certifying institutions must have a 95% or higher pass rate for all novice teachers as rated by district evaluators during their first three years of teaching. University programs with a novice teacher pass rate lower than
95% are labeled "at risk." If a program's pass rate for novice teachers is 90% or less, the program is considered "low performing." Thus, in Lander University's case, if, in one academic year, 4 of 81 employed graduates completing South Carolina's teacher induction program struggle to achieve permanent certification status or perform poorly in the judgment of a district evaluator, the university's education program as a whole is rated "at risk." The tenuous nature of this evaluation is further compromised by a rating system that is based solely on the districts' ratings of these novice teachers, who may or may not be teaching in the subject area in which they were originally certified. Hence, the university program is rated through an evaluation process from which the institution has been excluded.

Additionally, some candidates struggle to pass the PRAXIS II examinations required for certification in South Carolina. Some candidates fail the test because they are unwilling to document their need for and receive accommodations that would grant them appropriate testing conditions. Some fail because they graduated 10 years ago and are only now entering the teaching field and taking the appropriate tests. Some test takers fail the PRAXIS II having never attended Lander University, but erroneously list Lander University as the institution that prepared them for teaching. All failed test iterations count against Lander University's rating as a teacher preparation institution.

It is reasonable, ethical, and responsible for a university to own the failures of new graduates and to address weak components of its teacher preparation programs, and there are a few ways to eliminate some of these "outlier" graduates from the record. However, South Carolina's certification and testing policies and procedures do not recognize non-university factors that have an impact on teacher induction. One frustrating example illustrates this point: the university should not be held accountable for a student who failed the PRAXIS II examination 15 times – literally 15 times – and would not accept help from the university to better prepare for the test. At what point should the student, rather than the university, be liable for this failure? Program evaluators must take into account non-university factors that contribute to the success and failure of novice teachers, such as a willingness to seek help, the conditions of the schools in which they teach, and the qualifications they hold for their current teaching positions. The ethical question of who is responsible for teacher preparation program outcomes, and the larger question of how society as a whole must take ownership of providing quality educational experiences for all K-16 learners, must be addressed.
Frustration 2 – Focusing on Learners in the Midst of "Teststeria"

DTE faculty members have chosen to teach at Lander because they want to help prepare good teachers for service in K-12 public schools. They believe in the necessity of a healthy public education system and have dedicated their lives toward this end. But faculty members did not enter university teaching – often taking a pay cut – simply to help maintain rigorous content standards. Faculty members chose to teach at Lander University because they care about people. In the midst of accreditation requirements for K-12 and higher education institutions, it is easy to lose track of the importance of making personal connections with students; of meeting students where they are and helping them move ahead; of following the learners' interests, not just those of an accrediting body.

This is an era when principals in public school faculty meetings tell their faculty members to "teach the standards – only the standards and nothing else." This teach-only-the-baseline-standards strategy will not raise test scores when it comes time for students to take the mandated state-wide proficiency test (Popham, 2007, 2009). This type of reactionary leadership seeks to maintain federal funding rather than to address students' needs, and it misemploys the content knowledge and proficiency baseline that standards have appropriately set for high-quality learning. Many highly successful master teachers whose students prove proficient on mandated tests are nonetheless driven away from teaching by this "standard-centric at the expense of student-centered" philosophy of education. Responsive, meaningful, forward-looking curriculum and learning experiences that address students, rather than baselines of information and skills, are far more successful in improving test scores (Lalley & Gentile, 2009; Stiggins, 2002; Stiggins & Chappuis, 2005; Stiggins & Duke, 2008).

When a teacher's or a professor's job rests on his or her students' ability to prove their competence relative to a baseline level of knowledge or proficiency by passing a test, and passing the test becomes the goal of education, innovation becomes scary and difficult. In this sort of milieu, teachers evaluate innovative ideas first with the question, "If I teach this new way, will my students pass the proficiency test?" instead of "If I teach this new way, will my students learn better?" When institutions of learning treat teachers and professors as trainable automatons in an effort to guarantee that only the right things will be said and done in classrooms, rather than treating teachers and professors as capable, valuable professionals,
we can hardly expect more respect for teachers and professors from the general public. Let us hope that some healthy degree of academic freedom will always remain in institutions of higher learning, not only for the sake of maintaining and growing the knowledge base related to teaching and learning, but also for the sake of the flexibility to develop new programs and classes in response to student interest, in-service teacher interest, professor interest, and societal need.

The inequality of educational opportunities for K-16 learners in South Carolina and the United States needs to be addressed, but prescribing a one-size-fits-all, standards-based system that limits what can be taught, rather than setting a baseline for what should be taught, is not the answer. Quality learning experiences are developed by teachers who are well prepared, learner focused, and well supported in a healthy learning community (Boldt, Salvio, & Taubman, 2009; Graham, 2008; Tschannen-Moran & McMaster, 2009). Teachers and professors must be given responsibility for teaching and learning in their areas, but they must also be given the support necessary to teach in the ways that best help students learn.
SO WHAT?

At the end of most class sessions, I ask my students to complete a quick-write activity in which they answer, "So what? So what will be the impact of what we did in class on your teaching practice?" It seems appropriate to ask that question of Lander University and accreditation: "So what? So what do the realizations and frustrations of accreditation mean for Lander University and professional practice therein?"

In light of the common wisdom that encourages building on strengths, it would seem prudent to concentrate on the many things that accreditation processes have helped to create or clarify. Lander University has a dedicated professional faculty that willingly works together to accomplish common goals. Administrative and support personnel are becoming increasingly aware of and responsive to the resources necessary to maintain healthy academic programs. Lander University's teacher education program completers are highly successful at passing required proficiency exams and completing state-mandated teacher induction programs.

Unless some type of change occurs in national accreditation processes, the frustrations of accreditation are likely to remain. In fact, increased United States Department of Education requirements related to the ways in which universities track education majors, beginning in the 2009–2010 academic
year, may exacerbate some of these issues. Conversely, shifts in the ways in which SPAs and NCATE view adequate data and data reporting (e.g., requiring only two iterations of a measurement, encouraging program reviewers to maintain a "glass half full" perspective) may ameliorate others. Under the circumstances, it would seem wisest to concentrate on Lander University's mission of "Building the Future … One Student at a Time," carefully consider the results of data collected on students and learning, and build on existing assets.
REFERENCES

Association for Childhood Education International. (2007). 2007 ACEI/NCATE elementary education standards and supporting explanation. Available at http://acei.org/education/ncate
Boldt, G. M., Salvio, P. M., & Taubman, P. M. (Eds). (2009). Classroom life in the age of accountability. Occasional paper series 22. New York: Bank Street College of Education.
Graham, P. (2008). Improving teacher effectiveness through structured collaboration: A case study of a professional learning community. RMLE Online: Research in Middle Level Education, 31(1), 1–17. Available at http://www.nmsa.org/Publications/RMLEONline/Articles/tabid/101/Default.aspx
Lalley, J. P., & Gentile, J. R. (2009). Classroom assessment and grading to assure mastery. Theory into Practice, 48, 28–35. doi:10.1080/00405840802577577
Popham, W. J. (2007). Instructional insensitivity of tests: Accountability's dire drawback. Phi Delta Kappan, 89(2), 146–150.
Popham, W. J. (2009). Assessment literacy for teachers: Faddish or fundamental? Theory into Practice, 48, 4–11. doi:10.1080/00405840802577536
Stiggins, R., & Chappuis, J. (2005). Using student-involved classroom assessment to close achievement gaps. Theory into Practice, 44, 11–18. doi:10.1207/s15430421tip4401_3
Stiggins, R., & Duke, D. (2008). Effective instructional leadership requires assessment leadership. Phi Delta Kappan, 90, 285–291.
Stiggins, R. J. (2002). Assessment crisis: The absence of assessment for learning. Phi Delta Kappan, 83, 759–765.
Tschannen-Moran, M., & McMaster, P. (2009). Sources of self-efficacy: Four professional development formats and their relationship to self-efficacy and implementation of a new teaching strategy. Elementary School Journal, 110, 228–245. doi:10.1086/605771
United States Census Bureau. (2010). Profile of selected economic characteristics: 2008. Available at http://www.census.gov
United States Department of Education. (2010). National assessment of adult literacy: State and county estimates of low literacy. Available at http://nces.ed.gov/NAAL/estimates/StateEstimates.aspx
CHAPTER 7

IS THIS DATA USEFUL? THE IMPACT OF ACCREDITATION ON THE DEVELOPMENT OF ASSESSMENTS

Sam Hausfather and Nancy Williams

ABSTRACT

Accreditation demands from both state and national bodies have influenced the development of major assessments at Maryville University of St. Louis. Three key assessments used in all teacher preparation programs are described: practicum assessments of candidates in field experiences, program portfolios, and the student work sampling project. A review of the impact of accreditation on the development and analysis of these assessments reveals a constant tension between the use of qualitative and quantitative data. Although the assessment system allows for data collection, analysis, and use, it is perceived more as a burden than as illuminating. The qualitative conversations growing from the use and analysis of individual data often lead to more program change. Involvement in the use of assessments with candidates, observations of candidates in the field and in their presentations, and review of candidate comments appear to provide the best data sources.
The secretary of education for the United States, Arne Duncan, has accused "many if not most" university-based education programs of "doing a mediocre job of preparing teachers for the realities of the 21st century classroom" (Duncan, 2009). He characterized most teacher education programs as unconnected to practice and as ignoring the importance of using data to inform teaching. Although there was a rush to support university-based teacher education by those closely tied to its practices (Robinson, 2009; Cibulka, 2009), the criticism presents a telling perspective on what has existed in teacher education across many programs in this country. Those who defend university-based teacher education consistently point to the role of accreditation in strengthening outcomes as measured through various performance assessments. Although the research base supporting the effects of specific aspects of teacher preparation is weak (Zeichner & Conklin, 2005), accreditors have clearly moved forward with performance assessment as the basis for accreditation standards and decisions. This chapter looks at one example of the impact of accreditation on the development of assessments within a teacher education program.

Maryville University is a small private university located in suburban St. Louis, Missouri, USA. Teacher preparation has a long history in the university from its inception as a Catholic academy for young women in 1872. Its modern history includes long and close involvement in area schools, both suburban and urban. Maryville's School of Education has been nationally accredited by the National Council for Accreditation of Teacher Education (NCATE) since 1979 and, in partnership with several school districts and a local historically black college, was accepted in 1994 as the 16th setting of the National Network for Educational Renewal (NNER). This organization of school–university partnerships across the United States was founded by John Goodlad to promote the simultaneous renewal of public schools and teacher education around their essential roles in a democracy (Goodlad, Soder, & McDaniel, 2008). The School of Education currently has undergraduate and post-baccalaureate teacher certification programs in early childhood, elementary, middle school, and art education. Secondary education programs in math, science, English, and social studies require an undergraduate major in the content area followed by a field-based 14-month master's plus certification program. Graduate programs in gifted education, reading education, teacher as leader, and educational leadership are available at the master's level, and educational leadership is available for the doctor of education degree. Although we are a small program, we have four formal professional development school relationships along with ongoing partner relationships with various other area schools. Involvement
of faculty, students, and renewal projects with area public schools remains a high priority for the School of Education.
THE DEVELOPMENT OF MAJOR ASSESSMENTS

Over the years, accreditation demands from both state and national bodies have clearly influenced the development of the major assessments used in all teacher preparation programs at Maryville. Although accreditors rarely required specific assessments (with the exception of a state-required portfolio for a time), Maryville's major assessments drew much from national work related to accreditation movements. Practicum assessments of candidates in field experiences were significantly modified based on both state and national accreditation standards. Portfolio development and use was state mandated for several years, although Maryville's use of portfolios predated the requirement and has survived after the mandate expired. The student work sampling project (SWSP) grew out of the perceived need for an assessment of student learning and was patterned on the Renaissance project efforts (Denner, Norman, Salzman, & Pankratz, 2003). Yet Maryville has made each of these assessments its own, unique to the context of our programs and frameworks. In this work, we have taken the influence of accrediting bodies and incorporated our own perspectives to make assessment useful to our aims.
Practica Assessments

Maryville University's greatest strength, and its most distinctive feature, is the extensive and intensive set of field experiences in schools (practica) it requires of its preservice teacher candidates. Candidates are in schools more than those in just about any other program nationwide, beginning as freshmen and ramping up considerably through the sophomore and junior years into a modified yearlong student teaching/internship experience as seniors. These experiences are intensive, with significant teaching requirements for candidates linked to blocks of education courses and supervised on-site by both full-time and part-time faculty. The depth of these experiences has allowed for the concurrent development of assessment instruments linked to students' practica experiences.

The pre-service programs at Maryville have been highly clinical for as long as any faculty can remember. Indeed, when changes have been made, time was added to clinical obligations, not subtracted. Maryville provides teaching
load for faculty and credit for students in practica, a factor that has allowed the clinical emphasis to survive. After their freshman year, candidates spend between 90 and 150 hours each semester in schools teaching lessons connected to the methods courses they are taking. Faculty supervise candidates in schools, with 8–10 supervisees considered the equivalent of a 3-credit course load. Intrator and Kunzman (2009, p. 518) discuss the need for grounded practice in teacher education programs, in which teacher educators who teach the methods courses extend their practice into the K-12 setting, suggesting that this needs to be "seen as a legitimate, central commitment" if we are to improve both teacher educator and candidate practice. Our full-time faculty supervise in the schools as their methods courses are delivered and, at times, model instruction in front of actual K-12 students. Although we do not have laboratory schools, our professional development school partnerships have kept us deeply engaged in authentic practice and P-12 student learning.

Teacher candidates begin their practicum experiences early in their tenure with us and continue working in classrooms each semester. Candidates design and teach lessons, complete case studies on students, complete observational evaluations of students, tutor, and assist the cooperating teacher with small group instruction. University faculty observe the candidates multiple times each semester and follow observations with feedback/coaching sessions. Each practicum has a clinical assessment instrument that gives feedback and data on multiple skill sets and dispositions. These instruments are aligned with program outcomes as well as state program standards. Many of the individual items on the assessment forms were added specifically to respond to accreditation standards, in particular state and specialized professional association elements. For instance, the need to assess elementary education candidates' proficiency in teaching various content areas resulted in specific items being added to the appropriate forms in each of eight content areas to match both national and state lists of standards required of those seeking elementary school certification. This led to complex and long assessment forms requiring classroom teachers to score candidates on anywhere from 26 to 90 items. The data are sometimes more helpful as individual feedback for a candidate than they are as program data, as the aggregated program data sometimes tell us little. Data have illuminated some areas in our program that need strengthening (e.g., working with ELL students), but these areas were not a "surprise" to us.

At times, the language on the assessments needs refinement. Partner school faculty helped with the refinement of these instruments. Indeed, the
quality indicator language was modified a number of times to attend to the developmental nature of candidates, but also to help us identify those who are struggling and need attention. We had, for instance, a yearlong debate in 2004 on the use of certain terminology in the competency ratings on our clinical assessment forms. By reviewing as a faculty the forms of students who over time were not successful in the program, we found that cooperating teachers and some faculty supervisors were not willing to rate a student as "marginal" (a third level of rating behind exemplary and proficient) but were willing to rate a student as "progressing," a term that could be defined using the same description as "marginal." Teachers and faculty simply hated to use the term "marginal" when describing the performance of a candidate with whom they had developed a professional relationship.

We also found that adding a global rating at the end of the assessments was helpful. In this rating, we asked evaluators to indicate whether the candidate was doing the work in the practicum independently (without supervision or guidance necessary); in a developing manner with minimal supervision or guidance; or in an emergent or limited manner, unable to perform without substantial guidance and close supervision. This rating helped us when evaluators suggested that a candidate was "proficient" or "developing" but only with substantial help. Such ratings lead to "care teaming" of candidates and goals set for the next practicum. The purpose of the "care team" is to help the candidate through a personal or professional concern. Candidates who do not display initiative, commitment, follow-through, or planning that is inclusive, thoughtful, and standards driven go through a care-team process in which the issue is identified with a team of Maryville faculty (advisor, supervisor, and associate dean) and specific steps are outlined for remediation. Again, these data are most helpful immediately and for individual candidate assessment, as decisions are made regarding individual remediation or removal from the program.

The majority of substantive decisions about the potential of our candidates originate from observations of practicum experiences and our assessments of the candidates in the field. It is here that we are able to most accurately assess the knowledge, skills, and dispositions of the candidates as they interact with students, school personnel, parents, and supervisors. It is here that we can coach and determine whether the coaching leads to improvements in actual performance. Indeed, because of our clinical placements, some candidates self-determine that teaching is not a fit for them. It is through practica experiences that we determine whether candidates know their content and can translate their knowledge into lessons that help students understand concepts; it is here we assess a candidate's ability to form relationships with
students, engage students in active learning, diagnose learning difficulties and make adjustments in instruction, manage the classroom environment, perform within professional norms in the school setting, and make high-quality professional judgments. Indeed, as stated by Linda Darling-Hammond (2006, p. 129), "Although it is very helpful to look at candidates' learning in courses and their views of what they have learned, it is critical to examine whether and how they can apply what they have learned to the classroom." The strength of our program is our ability to pair faculty supervision in schools with practica assessment instruments to evaluate candidate performance in authentic settings.

On a day-to-day basis, individual candidate data are most important for providing evidence to support decisions about individual candidates. At the same time, the practica assessment instruments give programmatic data, which we depend on for program evaluation for accreditation. Program data have been illuminative in a few areas, such as classroom management and working with second language learners, where we see consistently average scores across all our candidates. At the same time, program data have not been illuminative in many areas due to the lack of variability in scores across rating items. Some items are often not rated by the classroom teacher because of the limited exposure our students are given to some curricular areas. This has become more pronounced as American schools slash time spent on science, social studies, and the arts in response to the increased testing mandates of NCLB. With the average practica assessment requiring 50–60 items to be rated, effectively teasing out trends in aggregated data has proven unwieldy and time consuming.

The other benefit of the clinical experiences is what our full-time faculty who supervise the practica gain from partnering with the schools in the design and delivery of the practica. Close work with schools exposes us to new innovations in school-based assessments (e.g., use of Tungsten data) or technologies (e.g., new software, updated Smartboards) that keep us current with the real world of teaching. We believe strongly in an authentic clinical model of preparation, a model that relies on strong partnerships with schools. This culture of authentic work with school partners has framed our assessments, in that real work in real schools with real students appears to be an accurate and reliable predictor of a candidate's future effectiveness as a teacher. As research has shown, "Program designs that include more practicum experiences and student teaching, integrated with coursework, appear to make a difference in teachers' practices, confidence and long-term commitment to teaching" (Darling-Hammond, Hammerness, Grossman, Rust, & Shulman,
2005, p. 411). More than any in-course assessments or external standardized measures, our ongoing assessment of candidates in practice, both through on-site supervision and through summative practica assessment instruments, has given us a window into their ability to apply what they learn about teaching within complex, authentic environments. Although accreditation demands have led us to expand the number of items and the content represented in these assessments, their ongoing use and modification are integral to the very nature of our approach to teacher education.
Teacher Education Portfolio

A teacher education portfolio is required of all preservice candidates at Maryville University. The process encourages candidates to look at their personal and professional development while students at Maryville (and, we hope, throughout their teaching careers) and provides the opportunity for candidates to see themselves as lifelong learners and developing teachers throughout their professional preparation. The School of Education began the development and use of portfolios in 1996, before they were required by the Missouri Department of Elementary and Secondary Education and before electronic systems were available for their management. Drawing on the research of Tierney, Carter, and Desai (1991), faculty designed the portfolio to facilitate candidate reflection and self-evaluation by requiring a systematic collection of work over time. Candidates must select artifacts that demonstrate the strength of their performance as a teacher and defend those selections through written reflections. Portfolio artifacts and reflections are structured to connect clearly to the unit's ten conceptual framework outcomes.

Candidates begin developing their portfolios during their first semester of education block course work and maintain them through student teaching and, we hope, throughout their careers. Candidates participate in a required portfolio conference with their faculty advisor in January and May each year, where they receive formative feedback. At this time, a candidate engages in a dialogue with the faculty advisor about his or her growth and development as reflected through portfolio artifacts and reflections. Before student teaching assignments are confirmed in the spring prior to the student teaching year, candidates must have had their portfolios "scored" by their advisor and must reach an acceptable level of performance at that time. At the close of the student teaching semester, the candidate makes a final presentation using
the total portfolio to their faculty advisor. The candidate presents a profile of her/himself as a teacher, reflecting on his/her growth and development over time, and the portfolio is "scored" again. The candidate's certification application is not processed until the candidate demonstrates proficiency on all outcomes in the portfolio.

A number of significant changes in the portfolio process have been made over the past several years. Although we used to score portfolios multiple times, they are now scored only pre-student teaching and post-student teaching. Before this change, the process was burdensome for advisors, who were spending an inordinate amount of time reviewing and formally scoring portfolio entries. We also found that beginning candidates became overly anxious about initial low scores, not understanding the concept of "mastery." Recently, we have cut back on requirements for candidates early in their program, recognizing they are not yet ready to contribute reflections for all outcomes. After extensive discussion, the faculty decided that the outcomes related to working with parents, developing rich assessments, and using inquiry were more appropriate for reflection later in the program, when these skills and experiences are more extensively developed within methods courses or when candidates can experience more depth within their field experiences and student teaching. Candidates now compile portfolio sections on seven of the ten outcomes earlier in the program and on all ten outcomes later in their program.

Research suggests that the benefits of portfolios depend on their construction; portfolios that are just a random collection of artifacts or that involve no opportunities for guided reflection may not be valuable. Using portfolios requires careful consideration of "purpose, criteria and implementation" (Darling-Hammond et al., 2005, p. 426). Portfolios provide our candidates with a clear link between their program components and the program outcomes. Used as a forum for goal setting, the portfolio allows for rich discussion between the candidate and the faculty advisor and provides a vehicle for systematic goal setting by candidates. These goals may emerge from experiences or through conferencing with academic advisors. Candidates refine their portfolios each semester, adding artifacts that reflect their growth and development. The process also allows our faculty to continually reinforce the meanings behind the outcomes, which has led to modification of the outcome language over time to ensure common interpretation. Recently, faculty discussed feedback from teachers and candidates and revised several outcomes to use language more common to
school settings, such as classroom management, and to refine expectations for parent and community involvement based on the experiences in which our candidates are actually able to participate.

Using portfolios as an assessment of student learning and ability has its limits. Despite the clear link between portfolio components and program outcomes, portfolios sometimes measure more about candidates' abilities to write and reflect than their ability to impact P-12 student learning, though artifacts may suggest performance levels in that area. Some candidates with exceptional teaching instincts in the classroom have difficulty discussing their decisions in writing, while some excellent writers perform poorly when faced with the complexity of classroom practice. Since students are required to reach a certain score on their reflections, the data from the portfolios do not sufficiently discern competency levels, although we do learn about candidate dispositions from their ability to follow through with the portfolio requirements. Reflections can sometimes reveal other dispositional strengths or concerns, but these usually surface in courses or practica first. As faculty, we would like to see more depth and integration of reflection to enable candidates to examine their growth over time. The scores from the rubric that we record as programmatic data measure the depth of reflection on the outcomes. Determining reliability among the faculty on what constitutes a substantial reflection remains difficult and a work in progress.

More problematic is our ability to make use of aggregated data for program accreditation. Although portfolios supposedly enable us to report that our students leave the program having fulfilled program outcomes, the data themselves reveal little of use in making programmatic decisions. The pressure to allow students to pass the "gates," either entry to or exit from student teaching, makes the scoring of portfolios a high-stakes decision. Although it allows faculty to push students to perform at proficient levels, it also creates data that tend toward proficiency for all. The conundrum, again, is an instrument that promotes extensive candidate growth but at the same time yields little accreditation data that can be seen as worthwhile. Although the state no longer requires teacher education portfolios as part of program accreditation, the portfolio has become a component of our program that forces student–faculty discussion about student progress and goals and ultimately results in greater student understanding of the place of the program goals in their education as teachers. The data continue to reveal little for programmatic accreditation except that all candidates are proficient.
Student Work Sampling

Maryville created and began using the SWSP in 2000, after sending a faculty member to a Renaissance Partnership workshop to study the Teacher Work Sampling process. Maryville used the Oregon model (McConney, Schalock, & Schalock, 1998), informed by the Renaissance design for teacher work samples (Denner et al., 2003), as the basis for what was at first called the Assessment Project and has since evolved into the SWSP. Candidates are required to complete the SWSP during their student teaching semester, using backward design (Wiggins & McTighe, 2005) to plan, teach, analyze, and present a standards-based unit of instruction. The unit emphasizes differentiation of instruction and data-driven decision making. The project requires candidates to create eight components. Context factors, initial learning outcomes, planning for assessment, and design for instruction are completed before teaching the unit. After teaching, candidates complete sections on instructional decision-making, analysis of student learning, and learning reflection before giving a final presentation to all other candidates in the program, including juniors and seniors, and to the entire faculty. The faculty member teaching the seminar that accompanies student teaching leads candidates through the entire process.

This assignment requires candidate accountability for (and evidence of) student learning. Candidates report that this assignment, more than any other, helps them understand the complete teaching/learning cycle. In their presentations, they talk specifically about student learning and the impact of their instructional decisions. They speak to the difficulty of aligning assessments with targets and using data to drive instruction. This led us to modify our program at various points to better prepare our candidates to see teaching as a continuous cycle of assessing, modifying teaching, and reteaching. Often candidates' experiences in teacher preparation lead them to see teaching as the sum of a series of discrete lessons. As we identified the need to sufficiently prepare our candidates for the SWSP, we began building multiple opportunities in our program for candidates to use formative data to build student understanding over time.

As conditions have changed in schools, the SWSP has adapted to include important new emphases for our candidates. Several years ago, the sub-group analysis in the project began emphasizing the achievement gap. As Depth of Knowledge became widespread in Missouri schools (Webb et al., 2005), it was incorporated into the outcomes language. Technology use has been more explicitly infused into the instructional design section.
More instruction on differentiation has been scaffolded into our program as a whole because of the implications of this assignment and the presentations faculty have viewed. Faculty, too, have come to see the learning experience as more than the delivery of one discrete lesson and to recognize the accompanying importance of supporting our candidates in their expected work through Understanding by Design (Wiggins & McTighe, 2005).

Although there is numeric scoring of the components of this assignment, the presentations and conversations with candidates about their projects have been most informative. This might be because few faculty teach the seminar course or are involved in grading the project. Another issue is that this assessment comes at the end of the program and is thus summative for the candidate, not formative. At this point in candidates' programs, there is little opportunity for reteaching or remediation. Our expectation is that all candidates will reach mastery, yet there is clear variation in performance. The assignment has, however, provided formative data for the program, as we have used our observations to make changes in our programs. It aligns easily with the five unit outcomes measuring curriculum, instruction, and assessment, and it gives us evidence of candidates' abilities to impact student learning as well as their ability to inquire into practice. It does not, however, clearly align with outcomes related to moral purposes, working with families, professionalism, and development. We added a component in 2007 that required candidates to address the achievement gap as they analyzed their data. This requirement added transparency to the discussions about how we serve children of color and children living in poverty in the classroom.

As a response to accrediting agencies' requirement for data on candidate impact on student learning, the SWSP provides a strong case that our candidates have explored carefully the learning of their students. Once again, however, it is not through a numeric analysis of data that we ensure this impact but through the process of candidates doing and sharing their work that our program is able to analyze our collective influence on students in schools. Overall, we find the Student Work Sampling Project provides candidates with an authentic summative evaluation. As stated by Shepard, Hammerness, Darling-Hammond, and Rust (2005, p. 317):

Assessing student work in this way helps student teachers develop an appreciation of how learning unfolds over time, how different students learn, and how these students respond to their instruction. It can also strengthen the teachers' commitment and capacity to see it as their responsibility to develop student understanding and proficiency.
ASSESSMENT DATA: DAY TO DAY AND RETREATS

Not a day passes in the School of Education when faculty are not making decisions about candidates based on data. These decisions range from making small classroom modifications for a candidate, to reinforcing clinical expectations, to considering program expulsion. Because we have a small faculty, we often call quick meetings to compare notes on the performance of a candidate. Often that performance relates to clinical work, but it may also involve data from course work, as candidates prepare units and complete case studies on students. Occasionally this work gives us knowledge of the candidate's ability to plan or to diagnose student understanding. When necessary, we call a "care team" meeting with the candidate, the faculty advisor, faculty supervisor, and associate dean, using the data to outline concerns and set goals. This is one of the strengths of small teacher education programs: we know our candidates and can track them easily. We are constantly asking whether a candidate will make an effective teacher and whether we would want that candidate to teach our own children. We use various data in decision-making related to individuals, including university classroom and practica assessment data, narrative data, and cumulative data. The data help us understand the particular needs a candidate may have and the goals we need to set and monitor to ensure their progress. While an imperfect process, this has led us to remove candidates from the program when we are convinced they clearly lack the academic or professional skills we expect from our graduates.

Program decisions are made, however, following a review of aggregated and disaggregated data from multiple assessment instruments. These reviews are done both by committees and by the faculty as a whole at the biannual assessment retreats. These full one- to two-day retreats involve the entire faculty collectively reviewing assessment data and discussing possible program or course modifications derived from the data. Various assessment-related activities occur during these sessions, many driven by accreditation mandates. Opportunities have been provided for coming to a common understanding of the scoring of candidate practices through video analysis. Components of some assessment instruments have been analyzed and modified. Discussion over the wording of outcomes and indicators has resulted in changes being made. As the amount of data collected for accreditation purposes has grown, we have found it necessary to structure retreats so that small groups can focus on particular assessment data, analyze it, and report back to the whole group with recommendations.
Over time, we have moved from reviewing and modifying assessment instruments to reviewing more of the data from those instruments. Data from graduate and employer surveys, student teaching assessments, portfolios, and end-of-program surveys are regularly reviewed for trends and patterns. SWSP, practica assessment, case study, and other data are reviewed on a yearly basis. Individual candidate data, for those doing well in the program and for those struggling, are pulled to determine what data were most useful and, if problems could have been spotted earlier, through what measure. We also attempt to determine what might be missing in our data collection or what might be redundant.

Repeatedly, the faculty find themselves drawn to the qualitative data over the quantitative data. Although specific scoring or survey numbers allow us to focus quickly on a possible issue, it is in the comments that we understand the concerns and successes of our candidates. Quantitative data, while consistently required by accreditation agencies, simply do not give us the information we need to make the hard decisions about programs and courses. This is particularly true given that a relatively small number of faculty may be involved in scoring any particular instrument. Those who score an assignment may appreciate differences in quantitative scores; those who do not are able to find richness in the comments given or the presentations viewed. Accreditation has clearly moved us to develop and expand our ongoing analysis and discussion of assessment data. At the same time, accreditation has forced upon us a quantitative emphasis that can derail our focus on meaningful data derived from the rich contexts of our practice. Our assessment retreats continue to struggle with this balance as we fulfill accreditors' demands while remaining true to the actual lives of our candidates.
CONCLUSIONS AND RECOMMENDATIONS

Accreditation places significant strain on teacher preparation programs to create, implement, and sustain systematic assessment systems. The burden of maintaining such living systems of data collection appears more difficult than that of maintaining candidate assessments. Clearly, the complexity of such assessment systems grows with the size of the teacher education program. Our advantage has been our small size, which keeps the system itself from overwhelming our ongoing candidate assessment.
Our day-to-day focus is on our candidates. Data about individual candidates help us move them forward, as we use individual data to make decisions about individuals. Accreditation has influenced the development of the individual assessments we use in ways that ensure our candidates fulfill not just our local outcomes but state and national standards. These represent our most immediate and frequent use of data. Program review data, though systematic, are not grounded in daily experience and often appear more abstract than real. As data are aggregated and removed from individual narratives, they come to be seen as merely required for accountability rather than as useful in addressing the daily issues of preparing individuals to be effective teachers. At the same time, the requirement to review and analyze data does promote an environment in which we explore the underlying meanings within our assessments. It provides the impetus for discussions that go back to our conceptual framework and unpack its meaning.

A review of the impact of accreditation on the development of the assessments reviewed herein reveals the constant tension between the use of qualitative and quantitative data. We have an assessment system in place that allows for data collection, analysis, and use, but more often than not it is seen as a burden rather than as illuminating. The qualitative conversations growing from the use and analysis of individual data often lead to more change. Involvement in the use of assessments with candidates, observations of candidates in the field and in their presentations, and review of candidate comments continue to be the best data sources, even for the key assessments discussed earlier.

Despite the impact accreditation has had on our assessments, we are still challenged to get at what is most important: how our students perform as teachers after they leave us. Some, such as Allington (2005, p. 201), say this should be the major program assessment: "Perhaps we would be satisfied to let the marketplace determine our success … do our candidates get hired and are school districts satisfied with their performance." Getting good survey data back from our graduates and their employers continues to be extremely difficult. Yet some of the best data we have is qualitative feedback from principals and administrators who have hired our graduates. These data are, to a large extent, anecdotal: the telling of stories of our graduates and their successes or challenges within the varying contexts of schools. Honest relationships with our school partners provide this essential feedback, as our faculty participate as equals in the lives of schools. Maintaining these all-important partnerships continues to be essential yet difficult work
within the multiple demands of accreditation and the lessening of institutional support.

If the stories of our graduates are indeed important, accreditation will need to more explicitly value and formalize qualitative data alongside quantitative data. This runs against the grain of the move to numbers, tables, and easy measures we see all around us, both in teacher education accreditation and in K-12 accountability. It is clearly the effectiveness of our graduates in classrooms that truly measures our impact as teacher educators. "Although it is very helpful to look at candidates' learning in courses and their views of what they have learned, it is critical to examine whether and how they can apply what they have learned to the classroom" (Darling-Hammond, 2006, p. 129). We need to emphasize less what our candidates believe (often the work inside university classrooms) and emphasize more what our candidates do in actual P-12 classrooms. What this requires is "more – and more deliberate – opportunities for novices to practice the interactive work of instruction" (Ball & Forzani, 2009, p. 503). Reducing this to aggregated numbers devalues the essential outcomes we desire.

Maybe it is time accrediting agencies pushed us to use some of the same conceptual approaches to assessment we use with our candidates. Our review of assessment data would be more powerful if we took an SWSP approach to our own candidates. We need to continually bring the assessment data we collect to life by enriching it with case studies of the candidates behind the data. Data sets could also include narrative examples from a struggling candidate, an average candidate, and an advanced candidate to bring depth to the data. We should look at data from our graduates in the same way, choosing narrative cases that can enhance and deepen understanding in ways missed by looking exclusively at quantitative data. Narrative and qualitative research orients us to data analysis as a complex transaction, with assertions that are both provisional and fallible (Coulter & Smith, 2009). This mirrors the complexity underlying the varying contexts of schools, places where multiple and unpredictable variables intersect. This is the world our candidates enter and for which we must prepare them.

Arne Duncan's worldview, in which teacher education programs are separate and disconnected from classroom practice, rests too much on past practices that have long since given way to more extensive immersion in school settings. At the same time, accrediting agencies continue to focus on a plethora of assessment data that do not necessarily connect to actual practice in classrooms. Accreditation has influenced us to modify and extend how we assess our candidates' work in classrooms, but
without challenging us to look deeper into those data. It is through the careful marriage of qualitative and quantitative data that we can tell our story in ways from which we and others can learn.
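The case-plus-aggregate reporting proposed above could be operationalized in many ways; the following is only a minimal sketch under assumed, illustrative data (the field names, scores, and narratives are hypothetical and are not drawn from the program described in this chapter).

from statistics import mean

# Hypothetical candidate records: a quantitative work-sample score plus a
# short narrative note, so a report can carry both kinds of evidence.
candidates = [
    {"name": "Candidate A", "work_sample_score": 2.1,
     "narrative": "Struggled to align assessments with unit objectives."},
    {"name": "Candidate B", "work_sample_score": 3.0,
     "narrative": "Solid planning; uneven use of student data."},
    {"name": "Candidate C", "work_sample_score": 3.8,
     "narrative": "Adjusted instruction fluidly from formative checks."},
]

def report_with_cases(records):
    """Pair the aggregate mean score with struggling/average/advanced cases."""
    ordered = sorted(records, key=lambda r: r["work_sample_score"])
    return {
        "mean_score": round(mean(r["work_sample_score"] for r in records), 2),
        "cases": {
            "struggling": ordered[0],
            "average": ordered[len(ordered) // 2],
            "advanced": ordered[-1],
        },
    }

print(report_with_cases(candidates))

The point of the sketch is the pairing itself: the aggregate number is never reported without the narrative cases that give it depth.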
REFERENCES

Allington, R. L. (2005). Ignoring the policy makers to improve teacher education. Journal of Teacher Education, 56(3), 199–204.
Ball, D. L., & Forzani, F. M. (2009). The work of teaching and the challenge of teacher education. Journal of Teacher Education, 60(5), 497–511.
Cibulka, J. (2009). A message from NCATE president Jim Cibulka on secretary Duncan's speech at Teachers College, Columbia University. Available at www.ncate.org/public/102309_Duncan.asp
Coulter, C., & Smith, M. L. (2009). The construction zone: Literary elements in narrative research. Educational Researcher, 38(8), 577–590.
Darling-Hammond, L. (2006). Assessing teacher education: The usefulness of multiple measures for assessing program outcomes. Journal of Teacher Education, 57(2), 120–135.
Darling-Hammond, L., Hammerness, K., Grossman, P., Rust, F., & Shulman, L. (2005). The design of teacher education programs. In: L. Darling-Hammond & J. Bransford (Eds), Preparing teachers for a changing world (pp. 390–441). San Francisco: Jossey-Bass.
Denner, P. R., Norman, D., Salzman, S. A., & Pankratz, R. S. (2003). Connecting teaching performance to student achievement: A generalizability and validity study of the renaissance teacher work sample assessment. Paper presented at the Annual Meeting of the Association of Teacher Educators, Jacksonville, FL, February 17, 2003.
Duncan, A. (2009). Teacher preparation: Reforming the uncertain profession – Remarks of secretary Arne Duncan at Teachers College, Columbia University. Available at www.ed.gov/news/speeches/2009/10/10222009.html
Goodlad, J. I., Soder, R., & McDaniel, B. (2008). Education and the making of a democratic people. Herndon, VA: Paradigm Publishers.
Intrator, S. M., & Kunzman, R. (2009). Grounded: Practicing what we preach. Journal of Teacher Education, 60(5), 512–519.
McConney, A. A., Schalock, M. D., & Schalock, H. D. (1998). Focusing improvement and quality assurance: Work samples as authentic performance measures of prospective teachers' effectiveness. Journal of Personnel Evaluation in Education, 11(4), 343–363.
Robinson, S. (2009). An AACTE response to education secretary Arne Duncan's address on teacher preparation at Teachers College, Columbia University. Available at www.aacte.org/index.php?/Press-Center/Feature-Articles/
Shepard, L., Hammerness, K., Darling-Hammond, L., & Rust, F. (2005). Assessment. In: L. Darling-Hammond & J. Bransford (Eds), Preparing teachers for a changing world (pp. 275–326). San Francisco: Jossey-Bass.
Tierney, R., Carter, M., & Desai, L. (1991). Portfolio assessment in the reading-writing classroom. Norwood, MA: Christopher-Gordon Publishers.
Webb, N., et al. (2005). Web alignment tool. Wisconsin Center for Education Research, University of Wisconsin-Madison. Available at www.dese.mo.gov/divimprove/sia/msip/DOK_Chart.pdf. Retrieved on February 2, 2006.
Wiggins, G., & McTighe, J. (2005). Understanding by design. Alexandria, VA: ASCD.
Zeichner, K., & Conklin, H. (2005). Teacher education programs. In: M. Cochran-Smith & K. M. Zeichner (Eds), Studying teacher education: The report of the AERA panel on research and teacher education (pp. 645–735). Mahwah, NJ: Lawrence Erlbaum Associates.
CHAPTER 8

MAKING STONE SOUP: TENSIONS OF NATIONAL ACCREDITATION FOR AN URBAN TEACHER EDUCATION PROGRAM

Carolyne J. White, Joelle J. Tutela, James M. Lipuma and Jessica Vassallo

This chapter is based on the experiences of the authors and does not necessarily represent the views or opinions of the institution, individuals, or agencies identified in this work.

ABSTRACT

In her children's book, Stone Soup, Heather Forest (1998) recreates a popular European folktale about people wanting to make soup but lacking the typical ingredients…. As the story unfolds, they discover the possibilities that are available when individuals come together to make soup out of a stone and with the contribution of each member – a carrot, a potato – and "a magical ingredient … sharing." Within this chapter, we tell our story of seeking national accreditation for the Urban Teacher Education Program (UTEP) at Rutgers University-Newark (RU-N)…. This story is crafted through personal experience narratives that illuminate the contribution of each author toward making our UTEP soup…. As in
the story of Stone Soup…, we lack the typical ingredients to achieve accreditation, and yet we continue to make it happen. Our journey is fraught with uncertainty, punctuated by stressful moments of intense anxiety about whether we will achieve accreditation. The long periods of rigorous effort to create a shared sense of our program goals, and of what we, as a community, do to prepare students to be urban teachers, seem insignificant when dwarfed by the Herculean tasks of gathering, analyzing, and then making sense for others of the vast array of data needed for a Brief, which prepares for a visit, which sets the stage for a report intended to allow others to make a final decision about the worth of the program. When there are only a handful of people to help, and even fewer who have an idea of what needs to be done and how it should be carried out, this uncertain task grows even more menacing and daunting. All this is compressed into a few moments: milestones set along the road to accreditation. Dates on a calendar signify submission deadlines, dates of auditors' visits, committee meetings, and, in the end, a vote to determine our fate. Even as we prepare to show who we are to others, a part of us knows that all the effort may be for naught, as it might only serve to highlight where we are missing those typical ingredients that others are looking for to designate our program as worthy. The milestones are as follows:

Designing a plan for gathering program data
Preparing the Teacher Education Accreditation Council (TEAC) Inquiry Brief
TEAC auditors' visit
Response to auditors' questions
Final auditor report
Hearing of Initial Review for UTEP's National Accreditation
Final decision

It is the story of these milestones that this chapter tells, so that the reader can gain a sense of what we have faced and how we have worked to create a collective soup, and a new beginning for our program. The story of our journey begins just before the hearing of Initial Review for UTEP's National Accreditation. Five members of TEAC are positioned to vote on whether our petition for accreditation will be rejected, granted provisionally, or fully accepted on a five-year trial basis. Three members of our community, Joelle Tutela, Alan Sadovnik, and Jessica Vassallo, arrive in Philadelphia on Tuesday, May 26, 2009, prepared to argue our case.
Joelle: It is 22 minutes before the panel meeting that will determine the fate of UTEP. One of the TEAC auditors comes to the waiting area of the Doubletree Hotel in Philadelphia to greet the team for our final meeting. Instead of the warm, fuzzy welcome I anticipate, she introduces herself and asks, "Did you have time to review the email?" What email? The last email I received was in early May, confirming the time and location of this meeting. Numerous thoughts flood my mind. One puts me into a sweat: Oh, NO, that's IT! Our future has been decided, and we didn't even get a chance to rebut. She quickly leaves the room, saying in passing, "I'll email it to you now." My teammates and I huddle. Did anyone receive the email? Before we have time to say more, she returns with, "I just emailed it to you. You can use the business center's computers to review it before our meeting." We rush to the computer with 12 minutes to read and respond. We glance over the message, and our eyes stop at the chart that displays the vote of the audit team. There it is in black and white, the final vote: two for extended review, which basically entails rewriting the Brief, two for denial, and one undecided. "Oh, no," Alan sighs. There is no time for remorse. As panic calls me and I begin to shake, I am clear: we have to strategize, in 12 minutes, about how to persuade the auditors to reverse their decision. Alan scrolls to the top of the document to review the areas of program weakness that they identify. "You two handle any questions about assessment," Alan says to Jessica and me. "I'll handle capacity."
Although none of our team members has a chance to notice, this is a symbolic moment, for once again UTEP is faced with the challenge of pulling it together and racing against the clock while the stakes are at their highest. How does UTEP always end up here, in this space of panic, pulling things together at the last minute? There is a history here, a history that is once again repeated in these 22 minutes in the hallway of the Doubletree Hotel.
LOOKING BACKWARD: THE KITCHEN FOR COOKING THE SOUP

Carolyne: Imagine Marvin Gaye singing "What's Going On." It is the summer of 2005, and I have just moved to Newark from Flagstaff, Arizona, where I worked with the Hopi and Navajo Nations to prepare culturally honoring indigenous teachers. As the new chair of the Department of Urban Education at Rutgers-Newark, I am eager to bring those experiences, along with my previous work in urban education reform in Cleveland, to this new context. I know the program is small. I know the faculty share similar commitments. I know there is a great need for powerful teachers in Newark. I know the university has a mission to serve the city. I dream of an extraordinary urban teacher program grounded in the city, and over the next three years, for each creative step forward, we encounter cuts: cut the secretary, cut the director of teacher education, cut the placement director, cut four faculty lines, cut the budget, and eventually cut me out of the position as department chair. But this is no victim narrative.
For the first two decades after the formation of the Rutgers-Newark campus in the late 1940s, the Teacher Education Program operates as a satellite of the Graduate School of Education (which is based in New Brunswick and is not affiliated with the Newark campus). In the early 1970s, the satellite arrangement changes with the transfer of Rev. Dr. James Scott from New Brunswick to Newark. Dr. Scott, in addition to being brought in as chairman of the now autonomous Newark Teacher Education Program, is pastor of one of the city's most prominent African-American churches, Bethany Baptist. Dr. Scott's mission is to develop an urban ethos in the course sequence of the Teacher Education Program at Rutgers-Newark, which until this time had been isolated from the city and its surrounding school districts. For many years, the program remains very small, offering three to four courses per term. Jean Anyon leads the department and the teacher education program in the 1990s, and the program still remains small. Alan Sadovnik is hired after Dr. Anyon takes a position at another university. He creates the first graduate programs in the department, and the teacher education program remains small.

Fast-forward to 2003, and the department's name is officially changed to the Department of Urban Education, highlighting the deep commitment to the city of Newark. In 2004, the assistant provost, Gary Roth, becomes acting chair of the department. Faced with state-mandated accreditation, Roth garners additional faculty and staff resources, which are focused on building student enrollment, and formalizes a partnership between the program and the university across the street, the New Jersey Institute of Technology (NJIT). NJIT does not officially have the ability to prepare students to be certified as teachers. However, its students can take courses at Rutgers-Newark, and because UTEP is open to NJIT students, several NJIT majors complete the necessary requirements to be certified. This arrangement becomes formalized with Dr. Fadi Deek, Dean of NJIT's College of Science and Liberal Arts (CSLA), and NJIT receives a memorandum of agreement from the state of New Jersey that allows the programs in CSLA to work with the Rutgers program to prepare students for teacher certification. Dr. James Lipuma is designated NJIT's liaison and point person for NJIT students.

Jim: I had to learn a great deal about how the Rutgers-Newark program worked and how the state requirements could be met. At first, I was told by my Dean to help our students find Rutgers advisors and meet the certification requirements. I simply passed the students on to the program director, and that was it. I did not get the feeling that they really wanted my help, and I had little knowledge of what went on in the program.
Carolyne: When I arrive, the department is beginning the work toward national accreditation with TEAC, and hope is tangible. We have two new tenure-track hires in addition to me, and a new director of teacher education. We also have minimal codified policies and procedures. Our small program has a tradition of working well with students to craft what could be likened to "individualized learning plans" as they work toward certification. While this may be viewed as an admirable intention to serve students, it is not a practice that allows for the types of program assessments needed for accreditation. We have a mission to prepare urban teachers, yet many of our students complete practicum and student teaching in suburban schools. There is a lack of program integrity.
Students in my social foundations class comment: "We are going to walk from campus to the New Jersey Historical Society?" Another adds, "We are really going to walk in downtown Newark? Is it safe?"
"Yes," I say, and lead my class into the nation's third-oldest city. These are students who say they want to be urban teachers. We walk three blocks to the New Jersey Historical Society. Soon students more at home in the city take the lead, and I move to the back, listening to conversations and making sure we don't lose anyone to their fear.
A student says, "I didn't know the New Jersey Historical Society was located in Newark." Another, "You really live in Newark, professor?" We walk into the historical society and are embraced by Linda Epps, the director. She tells us what it cost her to create the exhibit: the funding lost, the board members angered, the resistance to exploring the rebellion that continues to mark this city, her city, a city she loves. She invites us to listen to the oral histories that have been collected to create this exhibit, invites us to consider the multiple perspectives available here from citizens of this city. She stands in the face of little or no agreement about what is possible. She explains the title of the exhibit, "What's Going On?"
What's going on that students at Rutgers-Newark are afraid to leave the campus? What's going on that they know so little about its historic citizen rebellion? What's going on that they think they can be urban teachers and be afraid of the city? What's going on that Rutgers-Newark has a small teacher education program in the middle of such tremendous need for powerful teachers? One thing that's going on is that the state of New Jersey is ranked 47th out of 50 in funding for higher education, and we, located on the colonial campus, continue to encounter budget cuts. With a legacy of a small teacher education program on a Research One campus, it is a challenge to create solid institutional commitment to the program. What's going on when a campus administrator looks me in the eye and asks, "So you really think you can make a difference in the Newark Public Schools?" What's going on that when I ask him about our department name not being listed on the sign outside of our building, he responds that he can't get it fixed? One Saturday morning, unnamed individuals, armed with rub-on letters from Staples, add our department name to the building sign, using letters with a distinctive jazzy flair!
FIRST BATCH OF SOUP: THE PROCESS OF PROGRAM CHANGE

As we begin creating a teacher education program focused on the city of Newark and its children, a place-based program with integrity, what follows is an abundance of student complaints as they are held accountable to more rigorous program requirements. What also follows is a significant loss of faculty and staff. Increasingly challenged by the realities of preparing the TEAC Brief, amid her concern that we lack the institutional capacity (faculty, staff, and budget) to obtain and sustain accreditation, the director of the teacher education program accepts a position at another university. The placement person retires, three assistant professors take jobs at other institutions, an associate professor asks to be moved to another department where she will not be involved in teacher education, and a secretary is let go with the promise that we can replace her. What actually happens is that we are denied funding for the secretary, not allowed to replace the four tenure-track positions, and given insufficient funds to replace the other two staff positions. We create a plan for using doctoral assistantships to cover the teaching of some of our courses and some of the supervision of practicum and student teaching. We hire a part-time person to handle placement and hire a former student, Jessica Vassallo, as program coordinator, with the primary responsibility of advising students. Owing to Jessica's dedication to the program, she assumes responsibilities far beyond her job description as we scramble to maintain our program and work toward accreditation. What we have is a partially written Brief, an excessively complicated artifact collection system, and no clear plan for overall programmatic evaluation.

Jessica: As a graduate of the Urban Teacher Education Program, I bring a unique perspective to my position as Program Coordinator. As a student, I experienced the extraordinary nature of the program: the individual teaching styles, creative projects and assignments, high expectations, and holistic assessments; every faculty and staff member contributes to the growth and development of their students as future urban teachers. Often when people hear the words "urban district," and especially Newark, the conversation is followed by a list of things lacking: resources, quality teachers, safety, support, and so on. However, through my experiences in UTEP, I know that the opportunity to teach in an urban environment is a cause for celebration, a celebration of the multicultural abundance in each and every classroom. I know that teachers who celebrate diversity, collaboration, activism, and democracy can create possibilities for their students, their schools, and the surrounding communities. I know that teaching is one of the most challenging and yet one of the most rewarding professions. It is not just a career … it is a way of being. Teaching is a way of looking
at the world critically and empowering our students to do the same. Teachers need to be creative in their modes of communication, facilitation, and design; dedicated to their students and communities; passionate about knowledge; collaborative; and, most importantly, committed to social justice and equality. I want to inspire and ignite this same passion in each and every one of the potential teachers I meet, and I also want to challenge them to personalize this vision. I want to be a part of the dynamic efforts to create the best urban teacher education program in the country, and the most extraordinary teachers. As I work toward our accreditation, I continually ask how we can maintain this powerful vision and create ways to document that it is being achieved.
Meanwhile, NJIT's role grows. UTEP's science educator leaves, and the department is not able to replace him. Dr. Lipuma is asked to fill that important gap. Dean Deek appoints Jim the NJIT Teacher Education Coordinator and provides release time for him to handle all education-related matters, advise students, and support any teacher education-related challenges. How much this will actually entail does not become fully apparent until much later, when the TEAC Brief is due and NJIT is intimately involved in supporting the new program design efforts.

Jim: I am asked to teach the Science Education track students as well as to help the elementary education program provide students with the necessary coursework to meet state certification requirements. At the same time, the program director and I work to see how the science education courses can be brought together so everyone is taught about science literacy and pedagogy. We put this into effect and are discussing other options when I am told that she has left the program. I get to know Jessica, the new program coordinator, and explore how to help.
WHO ORDERED THE SOUP? MEETING THE ACCREDITATION NEEDS

New Jersey's state mandate that university teacher education programs receive national accreditation by January 2009 is a principal motivation for UTEP to redesign its practices. While we seek accreditation, it becomes apparent that there are many ideas about what the program is, should be, and might become. At the same time, it is vital to meet the requirements of the New Jersey Department of Education, which determines eligibility requirements for teacher candidates; university policies, which determine graduation requirements; and, of course, the expectations of TEAC. As very little consistency exists, and the system that was in place to gather and retain student data is not coherent, the work to meet all
of these different needs is difficult. This is exacerbated by limited faculty and resources.

Jessica: I spend a year overseeing our accreditation, a process that frustrates many people. In preparing for accreditation, we have been looking closely at possible disconnects in our program and areas that need improvement. Transition and change are definitely not welcome in a department that is already underfunded, understaffed, and overwhelmed. We have doctoral students and part-time staff teaching many of our courses, and our filing system is like a snapshot from the dark ages. The lack of funding and staff leaves me feeling like a sort of one-woman band at times. Over one summer, I watch online tutorials to learn how to use a software program to update our website, I update our filing system, I create countless Excel files to organize and track all of our student, course, alumni, supervisor, and cooperating teacher/mentor information, and I attempt to create a database, but I can't seem to find the time to read the textbook that accompanies the free software available on our campus. For a year, I represent the department at meetings for all of the colleges of teacher education in New Jersey and at the consortium for the accreditation process. I advise all of the students in the program, handle all prospective students, maintain the website, update all of our publications, oversee the collection of artifacts and evidence for our accreditation, handle scheduling issues, update and maintain data, and recruit new students. This is not a complaint … it is a testament to the dire state of a department that I know has the potential to be the best urban education program in the country. I know because I am a product of it.
We commit to making Stone Soup, and our challenge is how to blend the unique individual contributions into a program with integrity and accreditation.

Joelle: I am hired as the Director of Teacher Education, and one of my first tasks is to ensure that UTEP receives initial TEAC accreditation by January 2009, just four months away. At first this seems like a daunting task. The position has been vacant for over a year. There is a TEAC room filled with rich evidence but minimal guidance documents, the Initial Brief is due in a month, and the site visit is in two months. Instead of being filled with trepidation, I draw on my doctoral scholarship in urban education and my work as an educator over the past fourteen years. I'll never forget the morning of September 1, 2008, exactly three months before the visitors are coming. It is hot, muggy, and buggy, and of course there is road construction on First Street. Not knowing if I am going to make it on time to my first meeting as the director of teacher education (one week until I would become Dr. Joelle J. Tutela, and 17 days after my dad is diagnosed with advanced pancreatic cancer that has metastasized to the liver; that morning he is off to the advanced biopsies without me), I remember saying, "Just breathe, Joelle, just breathe." And just when I take that deep breath and look up, there is the sign: "You just passed." OMG is my initial reaction. But for real, in red, white, and blue it states "You just passed." What? Who passed? My dad's biopsies came up negative? I passed my doctoral defense? UTEP passed TEAC? I need to know which one is the truth. I know it could not be all three.
In the nick of time, I get to my first meeting, and there is Jessica with notes prepped for the day. Quickly, I learn that the tensions over receiving national accreditation are similar to the tensions within our lives, and how we choose to attack and address those tensions often determines the outcome. Just before the semester begins, Jessica and I meet to discuss the ingredients needed for UTEP's stone soup. At first we are both guarding our Italian families' special ingredients, but as the morning unfolds, we know we have the right stuff for UTEP's stone soup.
GETTING THE SOUP READY TO SERVE: COMPLETING THE TEAC BRIEF AND PREPARING FOR THE VISIT

With less than three months until the TEAC visit, we need to finalize the TEAC Brief and organize the evidence. There is a great deal of pressure and tension related to the uncertainty and sheer size of the task in front of us.

Joelle: Who is UTEP? What does UTEP do? How does UTEP do it? These are my initial questions. Just as an Italian chef shops at local mercados to find the freshest ingredients for family recipes, I search to find out what philosophies guide this Urban Teacher Education Program. I quickly learn that UTEP has very limited guidance documents. Often these are handwritten notes or newly created, not yet fully implemented protocols. Before I feel totally overwhelmed, I am given the latest draft of UTEP's Inquiry Brief. With a gleam in my eyes, I think, "Great, there is a plan for this madness." As I read through the Brief, I am impressed by the goal of creating a collaborative urban teacher program, but as I begin to reflect on the artifacts I see in the TEAC room, things become cloudy. How do the numerous program artifacts come together for program evaluation? Why are there so many different artifacts, and why are artifacts not consistent across the student binders?
With the date of the visit quickly approaching, the first step is to determine the structure of the program: to clearly identify and state in writing the mission and the objectives for achieving it, and to refine the evidence used to demonstrate that the mission is being accomplished, so that all of this can be organized into a coherent report. Moreover, this basic plan will help determine what is not yet fully described, as well as show how the plan can be made more effective.

Joelle: Instead of feeling frantic and stressed, I feel a sort of calm. I remember thinking that all UTEP needs is a strategic plan that includes a vision and mission stating who we are, our goals (claims), and the evidence to demonstrate achievement of our claims. Before doing anything else, it is critical that I scrutinize TEAC's guidelines. Understanding the principles and goals of TEAC, as well as the steps to accreditation, provides the parameters of what we need to do. Unlike its competitor, the National
Council for Accreditation of Teacher Education (NCATE), which provides predetermined indicators to be met, TEAC asks that we describe our program, make claims about it, and provide the evidence used to measure the achievement of those claims around three core issues: content mastery, pedagogical knowledge, and caring. There, in black and white, is what needs to be done.
SETTING THE TABLE: THE AUDITORS' VISIT

The first item is to create a framework that will guide our work. This begins with our vision and mission, a strategic plan, and the creation of protocols for UTEP policies. Instead of being fearful of what we do not have in place, it is important to recognize UTEP's strengths and limitations. By identifying these areas, UTEP will become more accountable for creating a plan to address them and will take ownership of its future development.

Joelle: I start to view TEAC as our friend, not our enemy. I start meetings with my new motto, "TEAC is our friend." UTEP's Inquiry Brief now reflects who we were and who we want to become.
A revised Brief is mailed October 15, 2008. It is 41 days until the audit team will arrive, and in that time it is vital to ensure that all of the claims made in the Inquiry Brief have sufficient supporting evidence.

Joelle: Thank goodness we took out the information that did not have evidence. "Less is more" echoes in my mind as I work to prepare for the visit. To ensure we are ready, I create a master checklist of the items that need to be done, convert one of our offices into UTEP's Teacher Candidate Resources Room, and give the UTEP faculty and staff the charge of spring cleaning their offices by the day before Thanksgiving break (November 26, 2008).
To ensure that our guests are comfortable, Jessica and Joelle fill mobile filing units with all of the UTEP evidence. From the mass of disorganized and sometimes incomplete information, a coherent picture of UTEP emerges. It takes a tremendous effort on everyone's part to get the program ready for the visit, and this is just the beginning. Tension grows as the date of the visit approaches.
TASTERS ARRIVE: THE TEAC VISIT

The Monday after Thanksgiving break is usually a day for catch-up. Not this year; we need to be at our best, as the three members of the audit team will arrive promptly at 8:30 am on Monday, December 1, 2008, and will stay for the
next three days. In preparation for their visit, Joelle arranges meetings with the stakeholders, including focus groups with UTEP's faculty, RU-N administration, NJIT's administration, teacher candidates, supervisors, and cooperating teachers, as well as visits to UTEP's classes and work sessions to review UTEP's evidence. Each semester, UTEP hosts a Gallery Walk that allows teacher candidates to showcase their best unit plan from their student teaching experience in the Newark Public Schools through a storyboard and portfolio presentation. This particular semester, the Student Teacher Gallery Walk happens to fall on the Tuesday evening of the TEAC auditors' visit. We could not have planned a better way for our program to come to life for the auditors.

As the TEAC Brief is being prepared, the faculty of UTEP see that we are not sufficiently prepared in the area of assessment and evaluation. Dean Fadi Deek is asked for help. He participates in planning and helps with reworking the clinical evaluation form. During the auditors' visit, confusion emerges about the role of NJIT. After we clarify that NJIT is a separate institution closely connected to Rutgers, the discussion turns to questions about the assessments currently being used by UTEP and, more generally, how problems with the program might be resolved.

Jim: A few days before the audit visit, Dean Deek asks me to be ready to support all the efforts during the TEAC review. I am not sure what might be asked, but I am confident I can answer most general questions about assessment. During the actual interview, the auditors do much more than seek a general understanding of NJIT's role. After convincing them of our knowledge and expertise in assessment methods and tools, I explain some possible changes that might occur. I have some knowledge of the problems that most assessment tools present, and from my discussions with the members of the department I am able to explain a course of action that might be taken to improve short-term assessment tools and mechanisms as well as long-term programmatic assessment schemes and methodologies. Little did I know that this would become my job for nearly the next year as we undertake the redesign of UTEP.
For some, the visit is stressful; for others, it seems to pass quickly. The interviews all seem to go well, and there is a shared sense among the faculty and staff in the department that our students make a great impression on our visitors. Still, we know that we have significant issues regarding program capacity. We also know our assessment plan needs much more clarification and refinement.

Joelle: Before I know it, 11:00 am Wednesday morning arrives, the final day of the audit review (December 3, 2008), and the authors of the Brief meet with the audit team to discuss UTEP's success or failure in providing evidence for its claims. Once again, I sense a great calm. The tensions that I faced just three months ago are slowly dissipating;
UTEP's evidence answers many of the questions raised throughout the visit. In fact, walking into the mid-morning meeting, I think, "TEAC is our friend."
The meeting starts on a friendly note but ends swiftly, as the auditors have to catch a flight out of Newark. The team provides direct feedback about what they have seen and understood. This debriefing is vital to the improvements that have to be made in response to the questions about the Brief that will be coming soon.

Joelle: As I listen attentively to the critique, both in favor of and against UTEP, I am pleased. We knew our strengths and drafted a strategic plan to address our weaknesses. One auditor comments negatively that our strategic plan does not have clear due dates for the completion of the tasks at hand. That is a great comment. See, TEAC is our friend. They are telling us how we can improve. The other important comment that I hold near and dear is their recommendation that UTEP concentrate on its urban mission. With a sigh and a sense of relief, I can finally breathe as the auditors leave.
THE CRITICS' REVIEW: THE RESPONSE TO THE AUDITORS' REPORT

Our main mission is now to create an Urban Teacher Education strategic plan with dates for rolling out new materials and procedures. There is much to do, but the plan has some basic elements that can be put into effect almost immediately. All policies are reviewed by the full-time faculty. Uniform templates are created for lesson plans, unit plans, and course syllabi. A new system for artifact submission is implemented, and Jim starts work on the most pressing issue: reworking the assessment tools and methods.

Jim: Since I was not intimately involved with UTEP previously and am not clear about what they do or do not have, I start at the beginning as I see it. From my work with the leadership literature, such as Michael Fullan's Leading in a Culture of Change, I know that I need to see the vision and mission statement that will lead to the objectives of the program and how it is put into practice. Unfortunately, I do not find what I expect. At the same time, there are time pressures to roll out new assessment methods and tools to be used or field-tested in the spring. So while I work with the department to refine the vision and mission, I revise the student teaching assessment form. I work closely with Joelle. She is responsive to my suggestions, and her support and participation are encouraging. It quickly becomes apparent that there is so much to do in such a short time that I will have to work both from the programmatic level down toward daily tools and from the immediate daily needs of the program up to the larger overarching goals. If there were more people, this might be easier, but with such a small group it becomes necessary to do something and refine it as time goes on. All of this is set against the
backdrop of having to respond to the issues raised by the auditors' report about the visit. More and more I see the pressure grow around me, and I do what I can to help. Given enough time, I am confident the program assessment and evaluation process can be remade. However, I do not know the dire nature of the situation, and without knowledge of the politics of Rutgers or the larger issues faced by those in the department, I focus on my tasks and on supplying Joelle with what she requires to meet her immediate needs and long-term goals for TEAC.
Jim and Joelle meet weekly to discuss the future direction of UTEP. Winter break is quickly approaching, and the first things on the schedule are coming due.

Joelle: I plan to finalize the strategic plan, with due dates, over the break. Unfortunately, my plans are interrupted by the death of my father. Wounded, I return from winter break and feel overwhelmed. How will we get UTEP in order? Just as I feel my world crumbling, I receive an email from Jim. Over the break he has created UTEP's new evaluation scale for the practicum and student teaching candidates: BEAMS, a continuum on which UTEP teacher candidates are rated as Beginning, Emerging, Attaining, Mastering, or Surpassing (program expectations).
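For readers who track candidate ratings electronically, the sketch below shows one way a BEAMS-style continuum might be encoded. It is purely illustrative: the criterion names, the numeric mapping, and the summary rule are our assumptions, not UTEP's actual evaluation form.

from enum import IntEnum

class BEAMS(IntEnum):
    # The five levels of the continuum, mapped to integers for averaging.
    BEGINNING = 1
    EMERGING = 2
    ATTAINING = 3
    MASTERING = 4
    SURPASSING = 5

def summarize(ratings):
    """Average per-criterion ratings and report the nearest BEAMS level."""
    average = sum(int(level) for level in ratings.values()) / len(ratings)
    nearest = BEAMS(round(average))
    return "Overall {:.1f} ({})".format(average, nearest.name.title())

# One observation of a hypothetical practicum candidate (criteria invented).
print(summarize({
    "planning": BEAMS.ATTAINING,
    "instruction": BEAMS.EMERGING,
    "assessment_of_pupils": BEAMS.ATTAINING,
}))

Keeping the scale ordinal in this way makes it possible to aggregate ratings across supervisors and semesters while preserving the named levels on individual reports.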
By the middle of January, UTEP receives its TEAC audit report, which highlights areas of the evidence that are missing or need improvement. We have been graded at 89%. During the spring semester, we start to field-test the newly constructed evaluations and continue to revise our strategic plan. Joelle creates UTEP's first public relations kit with the motto "Do you have what it takes?" and, with the help of Jessica and other members of the department, creates a PowerPoint presentation for open house events. The program refines its collaboration with the Newark Public Schools' Future Educators of America clubs. Jessica contacts the local newspaper, the Star-Ledger, whose education section spotlights UTEP, and we create a more visible presence for the program.
HEARING OF INITIAL REVIEW FOR UTEP'S NATIONAL ACCREDITATION

May 26th quickly approaches. The hearing is to be based on what was provided in the audit report, but there is also an opportunity for us to explain what the program is doing to improve. In the past four months, a great deal of change has occurred, and far more is planned and being implemented. Our story has now come full circle, back to the Doubletree Hotel, where Alan, Jessica, and Joelle are about to face the TEAC panel.
As Alan, Jessica, and Joelle enter the TEAC room in Philadelphia, the initial vote weighing heavy on their minds, they anticipate seeing the five panel members who will determine the fate of UTEP. Surprisingly, there are nine people patiently waiting in the room, with three open seats at the head of the rectangular table. Clearly, this is more of a trial than a conversation. Although only five of the nine have voting power, each panel member is able to ask UTEP's team questions or to request elaboration on a previous answer. With only a few minutes to review the email and prepare, the team members draw on the months, and for some of them years, of work that have gone into assessing UTEP.

Joelle: Once again I feel a sense of calm. I know that since the auditors' visit, we have finally been able to demonstrate who we are, and that we have evidence to back it up.

Jessica: As all nine panel members start to probe, the three of us fall into a strange dance … an unspoken script in which we each know our lines by heart. We know when to chime in, and when to sit quietly and attentively listen to a teammate. We know what needs to be done and what has to be said to defend our dear UTEP. There is no dress rehearsal … no stage … we are on trial … UTEP is on trial, hanging in the balance.

Joelle: Two hours later, we walk out with a unanimous decision in favor of initial accreditation for five years.
POSTSCRIPT

Carolyne: As I work on this manuscript, I am haunted by our analogy to Stone Soup. Making stone soup enables me to more fully appreciate the similarities between the struggles we face at the university and the struggles our teacher candidates will face in their future classrooms, and the importance of learning to stand in the face of little institutional agreement. Even so, is soup made with a stone sustainable? The children of Newark and the students attending the state university of New Jersey deserve more.
REFERENCE

Forest, H. (1998). Stone soup. Little Rock, AR: August House Little Folk.
CHAPTER 9

DEVELOPING DATA SYSTEMS FOR CONTINUOUS IMPROVEMENT UNDER THE NCATE STRUCTURE: A CASE STUDY

Elaine Ackerman and John H. Hoover

ABSTRACT

The history of continuous improvement, particularly requirements to close the feedback loop, was explored through an analysis of experiences at St. Cloud State University (SCSU). A method for generating evidence of the use of assessment data is provided. Several program improvements tied to this example were cited, including increasing the number of program area reports, adding to the number of qualitative studies, and strengthening advisement. Difficulties encountered with the system included institutionalizing the approach, response rates, and workload issues.
The concept that leaders of nimble public organizations ought to employ data to improve services and outcomes originates in the business world (Norman, 2001). Because of this, its utility in higher education can and should be carefully debated. Yet, it makes enough sense that we will not
participate in that exercise here, but rather start from the assumption that continuous improvement (CI) and data-centered management are reforms that are here to stay. We also assume that the documentation of data use in program improvement, the closing of the loop, is an essential feature of CI (Soundarajan, 2004).

Over the past decade and a half, accreditation agencies have argued for twin reforms related to CI. First, the National Council for Accreditation of Teacher Education (NCATE), among other oversight organizations, has mandated that teacher education programs collect, analyze, and report outcome data and that CI should thus become part of the institutional climate. Furthermore, the use of data in a CI cycle would be included among the criteria upon which accreditation would be based (NCATE, Strategic Goals and Objectives and Current Issues, n.d.). Second, once data are collected, a legitimate issue is one of use, perhaps best organized around the following two questions:

Are data regularly and systematically employed in making programmatic decisions?
How best can program developers collect, analyze, summarize, and report evidence that data are being considered, much less employed, in decision-making?

In this chapter, we offer a case study of a system for managing CI within the NCATE framework. Put more plainly, and in keeping with the volume's theme, we examine the complexities attending the development and institution of such a system. We could not convincingly argue that the system for assessing the use of data described in these pages is firmly established in St. Cloud State's College of Education. As is true of all such efforts, data use and the documentation of such application remain under construction.

From the outset, we should lay out another shared bias: namely, that the call for what might be termed "meta-assessment," or perhaps "assessing the assessment," is legitimate and necessary to program improvement (Banta & Palomba, 1999). Thus, the chapter is intended to describe a pathway, and some of the obstacles on the trail, toward practices meant to assist assessors, leaders, and program developers in tracking data use. We generate observations and generalizations that we hope will assist others in addressing the employment of data in program development.
THEORETICAL BACKGROUND OF DATA USE IN CONTINUOUS IMPROVEMENT

NCATE's Mandate for Continuous Improvement

In 2001, NCATE implemented a performance-based system of accreditation. This system requires institutions to provide evidence of competent teacher candidate performance. The new performance-based system was predicted to enhance accountability and to produce improvements in educator preparation (NCATE, 2002a, 2002b). In addition, NCATE language is replete with suggestions that performance-based data must be considered and employed in making decisions about programmatic effectiveness and in generating improvements in services to candidates (cf. NCATE, 2002a, 2006). Standards developers frequently refer to the collection of performance data. For example, under Standard 1, a portion of the acceptable-level rubric stipulates that "Eighty percent or more of the unit's program completers pass the academic examinations in states that require examinations for licensure." Under the target level of performance for Standard 1, we learn that completers must "pass the academic content examinations in states that require examinations for licensure" (NCATE, 2006, p. 14; Professional Standards, PS).

In a requirement seen as exceedingly difficult at some institutions because of its putative removal from direct in-house control, NCATE requires that program leaders demonstrate that candidates affect the learning of students in the schools: "Teacher candidates accurately assess and analyze student learning, make appropriate adjustments to instruction, monitor student learning, and have a positive effect on learning for all students" (student learning, language from the target level of performance, NCATE, 2006, p. 16; Professional Standards).

Another issue recently on NCATE's radar is that unit assessors seek performance evidence that data are not just collected but employed in decision-making. The expectation exists, in other words, that a system must be in place that tracks whether, or to what degree, data are disseminated, considered, and used to contribute to program improvement or administrative change. In fact, in the SCSU unit, we operated on the assumption that closing the data loop for CI lay at the heart of Standard 2 (Assessment, NCATE, 2006, pp. 21–24; Professional Standards): "The unit has an assessment system that collects and analyzes
data on applicant qualifications, candidates, and graduate performance, and unit operations to evaluate and improve the unit and its programs" (p. 21, our emphasis). Despite the rather odd sense of personification in the written standard, the leadership in our unit well understood it to mean that a system needed to be developed that would document that data were (a) disseminated to program leaders and (b) considered in program evaluation. Note clarifying language from the 2006 manual:

The unit's system includes a comprehensive and integrated set of evaluation measures that are used to monitor candidate performance and manage and improve operations and programs (NCATE, 2006, p. 21).

Meeting this [assessment] responsibility requires using information technologies, systematic gathering and evaluation of information, and making use of that information to strengthen the unit and its programs (p. 23).

Program review and refinement are needed, over time, to ensure quality (p. 23).

NCATE's standards developers conclude the section on assessment with a series of bulleted statements describing features of effective programs, one of which is that "the unit uses results from candidate assessments to evaluate and make improvements in the unit, and its programs, courses, teaching and … clinical experiences" (NCATE, 2006, p. 24). These same standards are promulgated, in even stronger language, in instructions to accreditation team members (Handbook for Accreditation Visits, NCATE, 2002a); also note that [evidence for] "use of data for program improvement" is provided in the current online template for examiners (NCATE, 2002b; Board of Examiners Template).

The salience of CI is summarized in a 2003 description of effective assessment in teacher education by an NCATE study team (Elliot, 2003, p. 9). Under criterion 4, program representatives must provide evidence that assessment is employed in rendering "meaningful decisions … including ones that … evaluate courses programs or units" (p. 9). In the same publication, Elliot reported that in examining 29 samples of institutional data, committee members were hard-pressed to find evidence that "results are combined with other data to evaluate program and unit effectiveness" (p. 5).

Despite the fact that the professional standards and programmatic expectations are constantly under revision (we assume, in the case of NCATE, that changes in accreditation standards and practices also result from consumer data), it is unlikely that the impulse to monitor the use of
data in program improvement will change. Both in business and in the public sector, it is clear that CI servo mechanisms are here to stay. The heart of CI is that organizations must constantly measure the effectiveness of their processes and strive to satisfy all aims, but perhaps most importantly assess those objectives most central to the organization's mission, including difficult-to-measure aims (Quality Training, 2001). In supporting program improvement, data dissemination must allow for consideration of the following elements:

Assessment must reflect the unit's learning goals.
The system must provide students feedback about the knowledge, skills, and dispositions candidates can expect to possess after completing coursework and academic programs.
The total assessment system should provide a means for units to understand the dimensions of student learning when seeking to improve student achievement and the educational process (Minnesota Association of Colleges for Teacher Education [MACTE], 2009).
Assessment is an ongoing process requiring continuous reevaluation as to whether teaching and learning processes have achieved preestablished goals and outcomes. Changes are made following reevaluation.
Assessment should be organized around a CI cycle. That is, the process must be cyclical and self-improving. (The iterative nature of CI is illustrated through St. Cloud State University's (SCSU) data-use system depicted in Fig. 1.)

Pomerantz (2003) suggested that educators may have focused excessively on accountability at the expense of program improvement, the assessment element identified as its primary purpose (Fullan, 1999). In comments that might explain why Elliot (2003) found little evidence of efforts to close the loop in teacher preparation programs, Soundarajan (2004) identified two key reasons why leaders often fail to employ assessment results in program improvement. First, improvements in courses or groups of courses are often the result of variables such as changes in the field and [presumably informal] feedback from students, elements not always identified through typical data collection systems. Second, documenting improvements, and especially tracing gains back to data obtained during assessment activities, remains challenging. As discussed below, our experiences with closing the loop matched Soundarajan's observations and produced one other noteworthy challenge: we found that the important task of tracking data use in CI created a new layer of need for data collection, storage, analysis, and reporting, with attendant management and workload issues.
[Fig. 1. Closing the Loop: Continuous Improvement Illustrated through the St. Cloud State Use-of-Data System. The cycle diagram links the following elements: Program Mission, Vision, Conceptual Framework, and Resultant Goals; Teacher/Administrator Preparation Activities; Collection of Assessment Data (e.g., Two-to-Five-Year Follow-Up Study); Assessment Reports Disseminated Along with Use-of-Data Forms; Improvements and Changes; and Data Use and Program Improvements Collected and Reported to Accrediting Agencies.]
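As a concrete illustration of the loop in Fig. 1, the sketch below models a single use-of-data record moving from dissemination to documented action. It is our own minimal example under stated assumptions, not SCSU's actual system, and all field names and sample values are hypothetical.

from dataclasses import dataclass, field
from typing import List

@dataclass
class UseOfDataRecord:
    report: str                 # an assessment report title (hypothetical)
    disseminated_to: List[str]  # program areas that received the report
    actions: List[str] = field(default_factory=list)  # documented responses

    def log_action(self, action: str) -> None:
        """Record a program change attributed to this report."""
        self.actions.append(action)

    def loop_closed(self) -> bool:
        """True once at least one documented change traces back to the data."""
        return bool(self.actions)

record = UseOfDataRecord(
    report="Two-to-Five-Year Follow-Up Study (hypothetical cycle)",
    disseminated_to=["Elementary Education", "Special Education"],
)
record.log_action("Strengthened advisement in response to graduate feedback")
print(record.loop_closed())  # True: evidence of data use exists to report

The design point is simply that the evidence of use travels with the report itself, so that documenting the closed loop for an accreditor becomes a query over existing records rather than a separate reconstruction effort.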
In the following sections, we describe SCSU and the education unit, and then turn to a specific discussion of the system that we developed for tracking data use. The section dealing with our efforts to close the assessment loop discusses both the development of the system and its first round of results. We end with a summary of our findings and experiences.
Research Base

On paper, the effectiveness of data-based CI is hard to challenge; as we noted above, we find the idea so persuasive that we did not want to debate its use in this article. However, for the sake of fairness, it must be noted that evaluation of CI appears to be a weakness in the literature, with few studies that compare, for example, the performance of institutions that close the loop with those that do not in terms of dependent variables of interest to
educators (candidate learning, student learning, cost-effectiveness, etc.). This effort, ripe for more and better attention, would prove a profitable topic for higher education and educational leadership researchers.

Despite the relative lack of what some may call "hard" data, case studies are available supporting the notion that programs can be improved through the employment of data in a servo loop. Bernhardt (2004) describes how middle school educators examined low test scores to settle on a CI strategy. In reviewing data, staff members shared ideas regarding what they needed to do to obtain different results. Once they had presented the data with a solid proposal, the district funded an outside facilitator to establish a structure for effective practice. "The structure included time to analyze their data and student work, and to develop strategies for improvement using the result of their analyses. Students' test scores in the following year were greatly improved" (2004, p. 5).

Chenowith (2007) documented the importance of closing the data loop as applied to pre-K-12 education in her examination of schools in poor neighborhoods that succeed despite the odds against them. In summarizing the performance of institutions where the achievement gap had been closed, she noted that across programs, educators "embrace and use all the data that they can get their hands on" (p. 217). On the basis of these data, school officials constantly reexamine and adjust practices, certainly an example of closing the loop. Students at Philadelphia's M. Hall Stanton Elementary surpassed state averages during the 2004–2005 academic year (the institution showed an increase in students passing state examinations in one academic year from about 15% to over 60%). Chenowith attributed this growth in great part to adjustments based on data interpretation. Stanton's growth was not a fluke, but rather "a reflection of new practices … a careful reorganization of instruction, comprehensive professional development of teachers, close examination of student data …" (p. 128). It remains important for teacher preparation programs to demonstrate such gains based on the performance of their candidates.
THE INSTITUTION AND THE UNIT

The Institution

SCSU serves nearly 17,000 students in 205 academic degree programs, 139 at the undergraduate level and 66 at the post-baccalaureate level. The institution is classified as a Master's College and University (larger programs) under the
Carnegie classification system (Reaching Higher: HLC Accreditation at SCSU, 2007). Other aspects of the Carnegie designation are listed below (Reaching Higher: HLC Accreditation at SCSU, 2007, p. 11):

Undergraduate instructional program: Balanced arts and sciences/professions, some graduate coexistence;
Graduate instructional program: Post-baccalaureate comprehensive;
Enrollment profile: Very high undergraduate;
Undergraduate profile: Full-time, four-year, selective, higher transfer-in; and
Size and setting: Large four-year, primarily nonresidential.

The university is organized into five colleges, one of which houses the teacher education unit (the College of Education). The institution is governed through the board of the Minnesota State Colleges and Universities (MnSCU), a body that oversees all Minnesota public institutions of higher education outside of the University of Minnesota and its affiliates. St. Cloud State is accredited by the Higher Learning Commission (HLC) and belongs to the North Central Association.

SCSU possesses several characteristics that could be portrayed as distinctive. First, its technology resources, including instructional infrastructure, are on the high end for what used to be called Carnegie II institutions. St. Cloud State was declared one of the "most wired" schools within its classification (Reaching Higher: HLC Accreditation at SCSU, 2007). Another distinctive feature of SCSU is its number of accreditations, with most eligible programs having earned such recognition. Finally, the Minnesota legislature approved applied doctorates in MnSCU institutions in 2005. St. Cloud State currently has two approved applied doctorates, both affiliated with the College of Education, although jointly administered with the School of Graduate Studies through a Doctoral Center (Educational Administration and Leadership/Higher Education Administration).

Through MnSCU administration, SCSU recognizes nine bargaining units, the most significant (for this discussion) being the Inter-Faculty Organization (IFO), representing teaching staff. The tradition at SCSU is for the IFO to be considered a strong advocate for faculty self-governance, obviously a factor to be reckoned with in planning for accreditation and assessment. Strong faculty self-governance affects CI in many ways, but primarily through the faculty prerogative for curriculum oversight. As a result, several issues regularly appear as foci in deliberations. For example, faculty members carefully scrutinize measurement targets because many see such appraisal as lending itself to de facto curricular construction.
Of course, an extension of this notion is that assessment instruments deserve close study as their phrasing potentially affects program and even course content (rather than the other way around). In December of 2007, for example, a faculty member, noting that two items related to student teaching policies had been added to a survey instrument, indicated that "The nature of items and the targets we set, the way we measure them, impacts curriculum and how it is offered." Another argued during an assessment team meeting that certain survey items reflected "curricular concerns that are mostly a factor left to faculty governance." The weight of opinion on the Assessment Committee, however, was that feedback from students, cooperating teachers, and the administrators employing our graduates would help faculty members exercise reasonable judgment in curriculum oversight. The requirement that data are to be employed in program change could also be portrayed as impinging upon faculty rights. Although we worried about the issue during planning stages, faculty members rarely voiced this latter sentiment.

As will be developed below, these legitimate faculty concerns can only be addressed through an iterative, transparent, and bottom-up process. From our experience and observations at nonunion academies, we conclude that these virtues remain essential to all institutions in the cause of ownership; participation of the faculty association in essence formalized the process of attaining transparency and democratic input.

Partly as a function of the most recent HLC accreditation visit during the fall of 2007 and the years leading up to it, St. Cloud State faculty members and administrators refined their existing university assessment program. An assessment director receives release time and is advised by representatives of the faculty. Each year, faculty members at the levels of both programs and departments develop student learning outcomes and report progress on these outcomes for the most recently completed academic year.
The Unit

The teacher education unit is administered in the College of Education; however, members of the Teacher Education Council (TEC) of the extended education faculty represent all but one SCSU college. For example, content area representatives of the Social Studies Education program are housed in the College of Social Sciences, and Communication Arts and Literature faculty reside primarily in the College of Fine Arts and Humanities. Science
and mathematics education students matriculate in the College of Science and Engineering. The fact that scholars with interests in particular academic domains, and whose history features secondary teaching in disciplinary areas (mathematics, the sciences, communication arts, and literature), reside both in the College of Education and in SCSU's other academic units produces both challenges and opportunities for assessment. For example, while it is important for the sake of vertical curricular issues that elementary mathematics faculty members (housed in the College of Education (COE)) communicate regularly with secondary mathematics faculty (housed in the College of Science and Engineering), administrative boundaries must be broken if this is to occur. This challenge is balanced by the fact that, through grant writing and mechanisms such as the TEC, teacher education also tends to bring professors together across colleges. This integration was codified in 2004 when the COE and the College of Science and Engineering formalized their teacher education partnership through application for membership in the National Network for Educational Renewal, a membership that has since lapsed (National Network for Educational Renewal [NNER], 2009).

Along with the TEC, the standing Assessment Committee provides an advise-and-consent function for the Dean and for the college's Assessment Director. All unit policy decisions are ultimately heard in a Dean's Advisory Council made up of chairs and directors. The COE has seven departments, all of which are involved in teacher education either through housing programs or through provision of support courses. Along with the departments, the College of Education currently houses three support units: the Office of Clinical Experiences (student teaching and field placements in educator preparation programs), the Office of Cultural Diversity, and Special Projects and Applied Research (dealing with external partnerships and encouragement of scholarship).

Two factors are illustrated in Tables 1 and 2. First, SCSU is a sizable teacher preparation institution, probably ranking in the bottom half of the top 20 nationally, although this clearly varies by year. Second, we experience difficulties recruiting candidates into mathematics, science, and technology education, as well as into world languages, a problem not unusual nationally (e.g., Carroll & Fulton, 2004). Finally, SCSU struggles with the recruitment of students of color into teacher education and support programs. The student body at St. Cloud State, by our last reckoning, was about 7% students of color; the figure is slightly lower in the teacher education unit. However, both the university and the unit are at least as diverse as the five counties surrounding the institution.
Table 1. Gender by Race Estimates for the Teacher Education Unit at St. Cloud State, Summer 2008 to Spring 2009 (Completer Estimates).a

Racial/International Category      Male (N / %)    Female (N / %)    Total (N / %)
International students             1 / 1.1         0 / 0.0           1 / 0.2
African American/Black             2 / 2.3         4 / 1.2           6 / 1.4
American Indian/Alaska Native      0 / 0.0         3 / 0.9           3 / 0.7
Asian or Pacific Islander          1 / 1.1         5 / 1.5           6 / 1.4
Hispanic/Latino/a                  2 / 2.3         1 / 0.3           3 / 0.7
White/non-Latino/a                 74 / 84.1       313 / 94.8        387 / 92.6
Race/ethnicity unknown             8 / 9.1         4 / 1.2           12 / 2.9
Column totals                      88 / 100.0      330 / 100.0       418 / 100.0

a The term estimate is employed because these are the most current figures for reporting purposes and the study is still in progress. These figures include all undergraduate and post-baccalaureate candidates.
In addition, our male-to-female ratio may be an issue in that some concern has recently been expressed about the need for teachers as role models for boys, particularly boys of color (cf. Noguera, 2003).
THE SCSU CONCEPTUAL FRAMEWORK

The notion of a conceptual framework (CF), as promulgated by the American Association of Colleges for Teacher Education (AACTE)/NCATE, is intimately tied to data use in the CI cycle (see Dottin, 2001, for an excellent explication of this argument). Essentially, a CF consists of a concise statement of the philosophies and resultant processes that guide delivery of the educator preparation sequence. The CF supports CI primarily in its role of focusing the aims of the education unit around a sense of purpose: "the relationship between the conceptual framework (unit purpose) and continuous improvement is 'the ability to simultaneously express and extend what you value. The genesis of change arises from this dynamic tension'" (Dottin, 2001, p. 32, citing Fullan, 1993, p. 15). Dottin concluded his thought with the sensible argument that "Continuous performance improvement is therefore facilitated by the conceptual framework as the aim of the unit is facilitated by a process of continuous improvement that moves from a conceptual big picture to parts and then back to the whole to the use of results to effect change" (p. 32, emphasis added).
Table 2. Completers by Program, 2006–2007 to 2008–2009, St. Cloud State University.

Program/Program Category                                   2006–2007   2007–2008   2008–2009
Secondary/K-12 programs
  Communication Arts & Literature                          10          21          16
  Music Instrumental & Classroom K-12                      6           1           5
  Visual Arts K-12                                         15          9           13
  World languages: French                                  1           0           1
  World languages: German                                  0           2           0
  World languages: Spanish                                 3           4           2
  English as a second language K-12                        10          5           2
  Mathematics 5–12                                         5           5           2
  Science: Chemistry 9–12 & 5–8                            1           0           3
  Science: Earth and Space Science 9–12 & 5–8              2           2           2
  Science: Life Science 9–12 & 5–8                         8           6           4
  Science: Physics 9–12 & 5–8                              0           0           0
  Technology education 5–12                                14          1           6
  Social studies 5–12                                      28          26          24
  Library media specialist K-12                            2           2           6
  Physical education teaching K-12                         19          25          19
Special education (all areas, graduate + undergraduate)    77          74          70
Elementary (K-8)                                           113         151         114
Early childhood education (birth to Grade 3)               45          32          36
Total completers                                           359         367         325

Notes: The data in this table are completers as we reported them for the state Title II report. Final figures for the 2008–2009 year were not complete as of this writing. Initial licenses only. If additional licenses are included, the total for 2008–2009 becomes 416, 76 of which represent additional special education licenses, all at the graduate level.
In looking at the CF, Educator as Transformative Professional (ETP, 2008, Fig. 2), one can see that a compromise was effected between the specificity sought by accrediting bodies and the need to accommodate the broad range of philosophical values in the unit, from what could be seen as postmodern in many departments to a direct instruction orientation in one subdivision, not altogether an unusual state of affairs in teacher preparation. In the end, faculty members agreed on a model primarily informed by constructivism (Bruner, 1996) and centered on social change. While ETP can be legitimately criticized for its complexity, it effectively communicates the unit's firm commitment to social change. The SCSU unit would be a poor fit for potential faculty members or candidates who see education through the lens of essentialism, for example.
Fig. 2. A Graphic Organizer for the St. Cloud State University Conceptual Framework.
Such communication of purpose is a putative virtue of CFs generally. Highlights of the Educator as Transformative Professional CF (Fig. 2) are explicated below. In online documents (Conceptual Framework, 2008), the philosophical underpinnings are laid out:

We embrace the notion of social constructivism but only in the sense that we believe that knowledge (about teaching and learning) must be reworked and transformed (made personal) by candidates as they acquire it. We believe that an excellent education program transforms individual learners. Candidates are transformed via acquiring the knowledge, skills, and dispositions of professional educators. Students in the schools are changed to become more sophisticated learners from their interactions with candidates. Professors and instructors rethink modes of instruction and even their belief systems as they interact professionally with candidates. Parents of public school students and school administrators are changed constantly by new information emanating from the unit.
As can be seen in Fig. 2, we organized the model around the progression of experiences arranged for candidates (termed "process and knowledge arenas, dimensions of learning, integration of multiple perspectives, and interdisciplinary collaborations"), culminating in seven role performance expectations. Notice the avoidance of terms such as "outcomes," "objectives," or even "aims"; such nomenclature would have been greeted unfavorably in the unit as excessively deterministic. Assessment Committee members crosswalked the role performances with INTASC principles to develop items for assessment instruments (Interstate New Teacher Assessment and Support Consortium [INTASC], n.d.). A brief description of each of the seven role performances is provided below (Conceptual Framework, 2008):

Content transformer (A-1). Candidates continuously evaluate and modify pedagogy and instruction in light of their lived experiences, technology, and newly acquired information.

The Inclusive Educator (A-2) effectively considers diversity in the design, delivery, and development of learning.

Humanistic educator (A-3). Candidates display the disposition to deeply value all persons, thus treating them equitably, evidencing a regard and appreciation for the worth and dignity of individual human beings.

The primary transformation implied in the role of Culture Transformer (A-4) is that candidates develop dispositions and a knowledge base allowing them to embrace many cultures and subcultures and that they prove able to transform appropriate aspects of their classroom and school culture(s).

Researcher (A-5). We expect that candidates will adopt the stance of a systematic inquirer as part of their professional identity.
Problem solver/Decision maker (A-6). The transformative professional must effectively employ formal and informal data (quantitative and qualitative) in making decisions about curriculum, learning and behavioral outcomes, and planning methods to be employed with the individuals that he or she serves.

Reflective practitioner (A-7). Personal transformation requires deep and continual reflection. The candidate continually participates in healthy self-criticism regarding teaching and learning; in addition, the individual continuously and rigorously re-examines personally held and professionally accepted field-based assumptions.

Preparation for the 2008 Accreditation Visit

SCSU's accreditation and state licensing (concurrent) visit was conducted in April of 2008, with final approval granted on October 31 of the same year. As NCATE requires two annual cycles of academic performance data, faculty members found 2006 and 2007 to be extremely busy years. Aside from regular meetings of an Assessment Committee and an NCATE Oversight Committee, many ad hoc meetings and activities were held as the self-study proceeded under the leadership of our past two deans. Leadership teams organized several college-wide meetings around the topic of diversity. The first of these stressed the importance of documenting diversity competencies in coursework, whereas a second retreat in December of 2007 dealt with self-study findings regarding the diversity of student teaching and field placements. Diversity data from follow-up and employer studies were shared and discussed at this meeting.

The CF was revisited through two efforts. First, faculty members looked at a redesign effort in late 2005 and early 2006; second, the dean, in consultation with advisors, organized efforts to redesign explanatory materials and to integrate the CF with more current educational research (it will be recalled that the model first appeared in 2000). As Minnesota officials combine state licensure and accreditation visits, program evaluation materials for the Minnesota Board of Teaching review were under revision at the same time that summary data were being organized for the NCATE visit. We viewed the combined visits as decreasing rather than increasing the long-term workload, while simultaneously intensifying short-term preparation efforts.

During the two-year period leading up to the state and NCATE visits, the dean led the charge to revisit both the content of our approach to dispositions and the organizational design of these competencies. The end result was that language was modified from the INTASC dispositional
standards; the in-house standards refer to educators as opposed to teachers, language considered more inclusive and a better fit with the unit's mission. Members of the Assessment Committee established and then implemented a system for documenting the dissemination of data and recording its use in CI. This aspect of our work is laid out in more detail below.
THE SCSU DATA-USE SYSTEM

System Development

The system that evolved for reporting on data use in the unit was the work of many hands, mostly faculty members serving on the Assessment Committee. However, the final product included emendations proposed by many sources as it cycled among the Assessment Committee, the dean's advisory panel, the TEC, programs, and departments. Representatives first reached agreement on a schedule for dissemination of assessment studies (Assessment Matrix, 2008). We agreed that each time a report was disseminated, a form for responding to it would be distributed in paper and electronic versions. Also, the graduate students working on the assessment team would remind chairs and program coordinators periodically to return the data-use form (see appendix). A depiction of the process we envisioned is shown in Fig. 1 (note that this is our interpretation, not an institutional figure).
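In practical terms, the tracking task amounts to logging which reports went to which chairs and program coordinators, and which data-use forms came back. The sketch below is one way such a log might be kept; it is our illustration under assumed conventions, not the unit's actual system, and every name and figure in it is hypothetical.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ReportCycle:
    """One disseminated assessment report and the data-use forms returned for it."""
    report_name: str
    disseminated_on: date
    recipients: set                          # chairs/coordinators who received the report
    forms_returned: set = field(default_factory=set)

    def record_return(self, respondent: str) -> None:
        """Log a returned data-use form from a known recipient."""
        if respondent in self.recipients:
            self.forms_returned.add(respondent)

    def response_rate(self) -> float:
        return len(self.forms_returned) / len(self.recipients)

# Hypothetical example: six disseminated reports, ten recipients each
cycles = [
    ReportCycle(f"Report {i}", date(2007, 10, 1),
                {f"Program {p}" for p in range(1, 11)})
    for i in range(1, 7)
]
cycles[0].record_return("Program 3")  # a chair sends back the feedback form

returned = sum(len(c.forms_returned) for c in cycles)
possible = sum(len(c.recipients) for c in cycles)
print(f"Unit-wide return rate: {returned}/{possible} ({returned / possible:.0%})")
```

Summed over cycles, a log of this kind yields the sort of figure reported below (e.g., 14 returns against roughly 60 possible responses).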
Initial Findings

Working through a mandated IFO committee, we developed a pilot system, including a set of expectations and a format for "reporting back" on unit-wide and program data use (Ackerman & Hoover, 2007, Data Use Process). Despite the very real problems discussed in the next section, 14 program representatives responded to the pilot project (Ackerman & Hoover, 2008). For the purpose of this discussion, we have also examined the approximately 25 additional responses that have been collected since that time. Initially, we received 14 responses to the six disseminated reports, out of approximately 60 possible responses, or just under a quarter. These figures have climbed since that time, but not to desired levels. Assessment Committee members intend to reexamine the system (see appendix) to determine how it can be revised and simplified. Despite the
relatively low response rate on the pilot application of the data-use system, we learned important lessons about our programs and, most specifically, our data management system. The language of these conclusions is adapted from the pilot study report referenced above; several of the conclusions are drawn from data collected since the 2007–2008 academic year.

Representing both practical and philosophical needs, program-level respondents formally requested that more qualitative, "contextualized" data be collected, in some cases to replace questionnaires, in others to supplement and lend context to data collected through surveys.

Report readers encountered some difficulty in translating unit-wide data to the program perspective, especially in translating needs to the "local" level. This difficulty arose even when qualitative, narrative data were disseminated. The "program-level versus unit-wide" issue was, however, not insurmountable. While respondents requested disaggregated data (subsequently supplied based on these requests), several demonstrated the ability to integrate data from the unit-wide report with information that they had collected on program candidates.

Respondents consistently identified unit and program strengths and perceived areas for improvement. They consistently (although not unanimously) endorsed the following strengths, suggesting that these represent valid themes for future unit-wide growth: (a) diversity preparation, (b) respect for all students, (c) high expectations for all, (d) candidate ability to affect student learning, and (e) dispositions related to the field. Chairs and program coordinators identified consistent areas for growth in the unit, although they occasionally questioned the applicability of these elements to programs: (a) assessment practices, (b) curriculum development, and (c) advisement.
Programmatic and Assessment Changes

Several changes resulted from feedback on reports disseminated each year and from data collected through more informal sources. These are highlighted below.

In line with feedback from chairs and program coordinators, we increased the number of program-level reports, completing a process that we had initiated earlier. Data reports were disseminated for every program with more than 20 respondents, even if data needed to be accumulated over a longer period to produce target numbers (a minimal sketch of this disaggregation rule follows this list of changes).
Due to low response rates, methods for collecting the self-report instrument were altered in 2008, increasing the response rate from about 30% to about 85%. Although this information did not emanate from the use-of-data forms, it represented a similar orientation toward CI.

Several specific changes to test instruments accrued from the feedback process. On the basis of a problem (a ceiling effect) with data generated through a "performance-based" instrument (collected during student teaching), members of an ad hoc committee revised the tool. As a result, the reliability metrics improved and the instrument yielded a more defensible range of scores. Members of one department critiqued the self-report and cooperating teacher questionnaires because they could not disaggregate results for their birth-to-age-four and K-grade-3 cohorts. Accordingly, the recommended adjustments were made in the fall of 2009.

Members of the Assessment Committee expressed concern about the lack of response to state-wide testing reports. As a result, this year members of the unit produced a report based on the Praxis series that summarized pass rates and provided several programs with specific data regarding domain performance. Because Praxis returns, both the PPST and the Praxis II series, highlighted relatively low pass rates in selected programs, we organized a Praxis Center (2006) designed to support candidates' content knowledge and test-taking strategies.

The first few iterations of a unit operations study revealed that, while relationships among students, faculty, and staff remained strong, candidates desired better and more accurate advising. Several actions have been taken directly based on this feedback. First, unit representatives have begun working more closely with the advising center to improve services to candidates. Second, strong faculty advisors in some programs were assigned more direct contact hours with candidates and asked to educate newer faculty members in the art and science of advising. Third, we generated plans to address advising during student focus panels (planned for 2010 and 2011).

On the basis of the desire for more qualitative information, plans were made and carried out in three initiatives. First, members of the Assessment Committee revised the dissemination schedule. Even though questionnaire-based data would be collected each year, they would be disseminated every other year (covering three-year periods). This would allow members of the assessment team to design and implement more qualitative studies and to respond to ad hoc requests from departments and programs. Second, the decision was taken to periodically analyze and disseminate reports based on the written (i.e., qualitative) comments returned on questionnaires and
to provide these comments in raw form as appendices to quantitative reports. Finally, we designed focus panel studies, to be carried out in 2010, to supplement the self-report and cooperating teacher studies.

Feedback from student advisory panels yielded several alterations in unit operations. First, Space and Technology Committee members oversaw improvements in pedagogical spaces, including redesigned classrooms and installation of instructional technology enhancements in the form of audiovisual equipment. Second, interactive boards were added to many unit spaces. Third, we added cafeteria facilities in the education building.

SCSU participates in the National Survey of Student Engagement (NSSE, n.d.). Although we found these data quite useful, once results were disaggregated the numbers were too small for reasonable interpretation, even at the unit level. Because of this, we added a unit operations survey to our arsenal of assessments. As mentioned above, this survey surfaced some small criticisms of advisement in the unit that have subsequently been addressed.

In the fall of 2009, members of the Assessment Committee began considering problems generated by competing reporting requirements. A plan is underway to integrate requirements for the university and accreditation accountability systems.
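The dissemination rule mentioned above (program-level reports only for programs with more than 20 respondents, cumulating years of data as needed) lends itself to a simple check. The following is a minimal sketch of how that rule might be applied, assuming a flat table of survey responses; it is our illustration rather than the unit's actual reporting code, and all program names and scores are invented.

```python
from typing import Optional
import pandas as pd

MIN_N = 20  # programs with more than 20 respondents receive their own report

# Hypothetical survey records: one row per returned questionnaire
responses = pd.DataFrame({
    "program": ["Elementary", "Elementary", "Physics", "Elementary", "Physics"],
    "year": [2007, 2008, 2008, 2008, 2007],
    "item_score": [3.4, 3.8, 3.1, 3.9, 3.6],
})

def program_summary(df: pd.DataFrame, program: str,
                    latest_year: int, max_span: int = 3) -> Optional[pd.DataFrame]:
    """Summarize one program, pooling backward across years until MIN_N is exceeded."""
    rows = df[df["program"] == program]
    for span in range(1, max_span + 1):
        window = rows[rows["year"] > latest_year - span]
        if len(window) > MIN_N:
            return window["item_score"].agg(["count", "mean"]).to_frame(program).T
    return None  # too few respondents even after pooling; fold into the unit-wide report

# With these toy data, Physics never reaches the threshold, so it prints None
print(program_summary(responses, "Physics", latest_year=2008))
```

Pooling backward in one-year steps mirrors the practice of cumulating data over a longer period only when a single year falls short of the target number.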
SUMMARY OF EXPERIENCES/LESSONS LEARNED REGARDING DEVELOPMENT OF A CONTINUOUS IMPROVEMENT SYSTEM

We found that external and internal pressure to document data use as a central feature of program improvement produced meaningful and positive additions to our practice, despite a host of both predictable and unexpected complexities. Several specific aspects of this generalization deserve particular mention in light of this volume's guiding theme.

The data-use process obviated the problem frequently voiced by assessors that their products disappear into the ether. As assessment veterans, we have often lamented what practitioners, in their more cynical moments, call the "file 13 dilemma": reports, often the culmination of considerable thought and effort, end up in the waste can. More than a waste of financial resources, such practices produce a human cost in lost productivity traceable to declining motivation and increasing cynicism about assessment and its functions. We would prefer to hear from faculty members that a given
report lacked utility (and we did hear this) than never to know whether consumers considered or even read it.

Assessment procedures are likely to be strengthened if a system exists to garner feedback on the process and its resulting products. We found that this generalization accounted for our experience in several ways. First, once it became clear that program representatives were attending to data, members of the assessment team found new reasons to produce their best work. Second, negative statements about assessment reports and dissemination practices helped assessors refine their efforts. In short, it is likely that CI processes render the entire system more useful and ultimately more user-friendly (Banta & Palomba, 1999).
Difficulties

At first glance, the process of tracking data use looks straightforward, but we can explain some of the pressures and pitfalls of developing and implementing such a system. We discuss a cross-section of emergent complications below.

One source of resistance to developing and implementing the system was termed "systems collision" by one departmental wit: competing demands for data and for data-based reports and responses produced confusion and, not to put too fine a point on it, hard feelings. Our data-use system, especially its departmental and program reporting requirements, came on top of competing program, departmental, and, most significantly, university demands for reports, many of them "looking like" the assessment information requested in a data feedback loop; this produced, as might be expected, workload issues. Faculty members' workload concerns struck us as perfectly legitimate. After all, unit efforts alone produce 15 studies per year, nine of them disseminated. Once the continuous improvement system was instantiated, faculty members were required to read, consider, and respond in writing to each new document. This work tended to devolve on chairs and coordinators; unfortunately, these increasing demands coincided with reductions in release time resulting from the recent economic downturn.

The resistance factors discussed seemed to result in tactics such as nit-picking documents during their production and during the approval process, perhaps subconsciously for the sake of "putting off" the initiation of the requirement to schedule meetings, discuss reports, and send documents forward. It is difficult to determine whether this resistance is conscious or
unconscious; the only certainty is that it remains both comprehensible and ubiquitous. Occasionally, faculty members would express a great deal of worry. We recall one chair lamenting, "No matter what we decide here, people are not going to do this" [extra work]. We found it difficult to balance movement toward an institutional goal with the patience required to produce faculty ownership. Depending on how one strikes this balance, leaders will either meet excessive resistance simply because they are seen as officious or will oversee many late nights' work as deadlines loom. Perhaps the most important lesson that we learned was that leadership involves a tolerance for the natural ambiguities that accrue to systems change. Working in a complex institution with many leaders, players, and constituents renders transformations difficult; perfectionists need not apply.

A particular pressure devolved on committee members. Because individuals on the Assessment Committee kept themselves informed about accreditation issues, they understood the need for data reporting in the context of CI. This knowledge did not reduce the potential for unpleasantness when representatives took products and processes back to departments and programs for discussion. One outcome of this was that strategies for reporting new assessment requirements, and approaches to communicating about them, often dominated committee discussions, not an unreasonable outcome.

Heuwinkel and Hagerty (1998) offered a salient observation on the topic by distinguishing between ownership and commitment. With bottom-up processing, we found that committee members started to identify with evolving systems precisely because they helped develop them (a good thing), but putting oneself on the line for commitment to systems change, and the effort it requires, is another matter entirely. Simply put, procuring buy-in or commitment to the process is essential if accreditation and the self-study associated with it are to lend themselves to CI as opposed to merely going through the motions. The process must be bottom-up in the sense that process and product are designed by faculty members and other groups that will employ the data. The assessment director must stand back and attain a balance between pushing for completion and eliciting bottom-up feedback to produce commitment; we found the balance between persistence and patience a difficult one to strike.

On a related note, it is probably wise to alert faculty members working on systems change that they will likely encounter push-back when they bring ideas to department and program meetings for approval. In other words, change agents must come prepared to encounter resistance, some of it more informal than formal (e.g., missing or forgetting meetings). It is the
responsibility of faculty leaders and administrators to create environments where resistance occurs as infrequently as possible. We suspect that the factors (democratic processes, transparency, constituent participation, direction, and patience) shown to be effective in systems change achieve their efficacy, in part, through reducing resistance.

We found it useful, even after all parties had agreed to an assessment practice, to carefully plan the rollout and beta testing of a new system. If our experience is predictive for others, assessors should be prepared for a situation where the first round of feedback from a data-use tracking system reflects primarily on their own work. Once we got over the shock (and wiped away the tears), we found this feedback extremely valuable.

Transparency is paramount in producing significant changes to the unit's way of doing business and procuring commitment to these alterations. This aspect of change is enhanced, not damaged, by a strong union environment, where democratic processes for change tend to be institutionalized. In systems change, no substitute exists for what a colleague calls "front-end loading" in the planning process; we mean by this doing the admittedly difficult work of team and consensus building at the launch of the process. Related to planning, we learned that change is and always will be iterative. We must constantly revisit team building (for obvious reasons), consensus seeking, and the planning work itself. CI is predicated, after all, on the notion that the process itself will highlight the need for redesign.

The new layer of assessment data, the data-use forms, requires inordinate attention to detail. While clerical staffers were accustomed to sending out reports, they had to be reminded frequently at first that data-use forms needed to be disseminated with assessment reports. Ways had to be found to store the forms, analyze information, and disseminate [new] information. A bit of humor comes in here because the use-of-data step potentially produces an infinite regression: disseminated reports require a response; the response is used to develop a new report; the resulting report is disseminated, thus requiring a response; the responses are analyzed … As can be seen in Fig. 1, we elected not to formally disseminate the reports based on the use-of-data forms, except as part of the accreditation visit.

Careful accounting needs to be made of all gatherings of important constituencies of the unit and programs. Special attention should be paid, not just to careful note taking, but to archiving these qualitative data and finding creative ways to disseminate the information (and to determine
whether these data are employed in decision-making, thereby closing the feedback loop). A few examples can be cited from our experiences:

Undergraduate and graduate student advisory committees operate much like focus panels and, because members tend to be selected for demonstrated leadership, we have found their feedback to be particularly useful (thus written summaries are presented to the Dean's Advisory Council and archived for future consideration).

Departmental feedback to the NCATE and Assessment Advisory Committees should be regularly recorded and archived; as was true of student advisory panels, these gatherings often serve as de facto focus panels.

Focus panels, interviews, and data collection from grant activities related to the unit's mission can serve as data sources for accreditation visits.

The notion of data-driven decision-making toward CI is a tenuous one at best, especially when it is devoid of evidence that substantive changes have been made. A comprehensive data-driven assessment plan takes strong leadership to implement (Bernhardt, 2004). However, data-driven decision-making potentially provides schools, colleges, and universities with valuable information about their current situation and guides them in using data to create substantive change.
REFERENCES

Ackerman, E., & Hoover, J. H. (2007). Data use process. College of Education Report. St. Cloud State University, St. Cloud, MN. Available at http://www.stcloudstate.edu/coe/ncate/standard2/exhibits2/documents/AssessmentFeedbackProcess.pdf. Retrieved on November 15, 2009.
Ackerman, E., & Hoover, J. H. (2008). Use of data: Pilot study. Report No. Datause.08. St. Cloud State University College of Education, St. Cloud, MN. Available at http://www.stcloudstate.edu/coe/ncate/standard2/documents/Use_of_Data_Analysis.pdf. Retrieved on November 15, 2009.
Assessment Matrix. (2008). Available at http://www.stcloudstate.edu/coe/ncate/standard2/exhibits2/documents/Key_Assessment_and_Dissemination_MatrixREV.pdf. Retrieved on December 21, 2009.
Banta, T. W., & Palomba, C. A. (1999). Planning, implementing, and improving assessment in higher education. San Francisco: Jossey-Bass Publishers.
Bernhardt, V. L. (2004). Continuous improvement: It takes more than test scores. ACSA Leadership (November/December), 16–19.
Bruner, J. (1996). The culture of education. Cambridge, MA: Harvard University Press.
Carroll, T., & Fulton, K. (2004). The true cost of teacher turnover [Electronic version]. Threshold, 16–17.
Chenoweth, K. (2007). It's being done. Cambridge, MA: Harvard Education Press.
Conceptual Framework. (2008). St. Cloud, MN: Author. Available at http://www.stcloudstate.edu/coe/ncate/framework/default.asp. Retrieved on February 1, 2009.
Dottin, E. S. (2001). The development of a conceptual framework. Lanham, MD: University Press of America.
Educator as Transformative Professional. (2008). Unpublished conceptual framework document. St. Cloud State University, St. Cloud, MN. Available at http://www.stcloudstate.edu/coe/ncate/framework/exhibitsF/. Retrieved on March 1, 2010.
Elliot, E. J. (2003). Assessing educational candidate performance: A look at changing practices. Washington, DC: National Council for Accreditation of Teacher Education.
Fullan, M. (1993). Change forces: Probing the depths of educational reform. London: Falmer Press.
Fullan, M. (1999). Change forces: The sequel. Bristol, PA: Falmer Press.
Heuwinkel, M., & Hagerty, P. (1998). The development of a standards-based assessment plan in a school-university partnership. In: M. E. Diez (Ed.), Changing the practice of teacher education: Standards and assessment as a lever for change (pp. 111–120). Washington, DC: AACTE.
Interstate New Teacher Assessment and Support Consortium (INTASC). (n.d.). Homepage. Available at http://www.ccsso.org/Projects/interstate_new_teacher_assessment_and_support_consortium/. Retrieved on December 5, 2009.
Minnesota Association of Colleges for Teacher Education. (2009). The MACTE minute (read and presented to the Minnesota Board of Teaching at the October 2009 meeting). Available at http://www.mnteachered.org/. Retrieved on February 1, 2010.
National Council for Accreditation of Teacher Education (NCATE). (2002a). Handbook for accreditation visits. Washington, DC: Author.
National Council for Accreditation of Teacher Education (NCATE). (2002b). Board of examiner report template. Available at http://www.ncate.org/documents/boeMaterials/BOE%20Rept-Revised%20Stds.doc. Retrieved on January 15, 2010.
National Council for Accreditation of Teacher Education (NCATE). (2006). Professional standards: 2006 edition. Washington, DC: Author.
National Network for Educational Renewal. (2009). Home page. Available at http://www.nnerpartnerships.org/. Retrieved on December 1, 2009.
National Survey of Student Engagement. (n.d.). Home page. Available at http://nsse.iub.edu/index.cfm. Retrieved on January 15, 2010.
Noguera, P. A. (2003). The trouble with black boys. Urban Education, 38(4), 431–459.
Norman, R. (2001). Reframing business: When the map changes the landscape. New York: Wiley.
Pomerantz, N. (2003). Closing the loop: Program review in student affairs. NASPA technical report. Available at http://www.naspa.org/netresults.2003. Retrieved on December 1, 2009.
Quality Training. (2001). Lean manufacturing overview 130. Available at http://www.toolingu.com/definition-900130-12156-continuous-improvement.html. Retrieved on November 5, 2009.
Reaching Higher: HLC Accreditation at SCSU. (2007). Author, St. Cloud, MN. Available at http://www.stcloudstate.edu/hlc/selfstudy/default.asp. Retrieved on November 25, 2009.
Soundarajan, N. (2004). Program assessment and program improvement: Closing the loop. Assessment and Evaluation in Higher Education, 29(5), 597–610.
APPENDIX. ASSESSMENT DATA FEEDBACK FORMAT

Department, Program, or Unit:
Person Completing Form:
Name of Report(s)/Information Considered:
Date of Meeting:

Instructions: Please fill out this form after any meeting where information and/or data related to unit, program, and/or candidate performance has been considered. It is expected either that departmental chairs or the Assessment Committee representative will fill out and return this document. At least one form is required after each set of unit- or program-level data is disseminated. Please fill out the form electronically and take as many pages as you need. Members of the Assessment Committee ask that you fill out this form after any significant program change is undertaken (new course, change in course, change in program) in order to track the ways that data are employed for program changes. Please direct either a hard copy or an electronic copy to the Assessment Director.

1. Based upon the above report(s), evidence for particular programmatic strengths (if appropriate, cite other data/information sources that confirm or question program strengths):
2. Based upon the above report(s), evidence for consideration of programmatic areas that show need for improvement, if any (if appropriate, cite other data/information sources that confirm or disconfirm program area "need for improvement"):
3. Proposed or considered curricular, policy, or procedural changes. Please list programmatic changes that have been considered since the last report and the information upon which proposed change(s) was/were based. This information is particularly relevant given information contained in the report accompanying this form.
4. Describe a prospective process for change, including timelines (if appropriate), and progress indicators.
5. Need for more information. Please enter requests for data, information, or reanalysis that are suggested by the information included in the report(s) listed above.
6. Other comments:
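For archiving and later analysis (the storage and loop-closing concerns discussed above), the fields of this form map naturally onto a structured record. The sketch below is one possible representation under assumed conventions; it is our illustration rather than the unit's actual storage system, and every value shown is hypothetical.

```python
from dataclasses import dataclass
from datetime import date
from typing import List

@dataclass
class DataUseRecord:
    """One completed Assessment Data Feedback form, stored for analysis and archiving."""
    unit_or_program: str
    completed_by: str
    reports_considered: List[str]
    meeting_date: date
    strengths_evidence: str = ""            # item 1: evidence of programmatic strengths
    improvement_evidence: str = ""          # item 2: areas showing need for improvement
    proposed_changes: str = ""              # item 3: curricular, policy, or procedural changes
    change_process: str = ""                # item 4: prospective process, timelines, indicators
    information_requests: str = ""          # item 5: requests for data or reanalysis
    other_comments: str = ""                # item 6

# Hypothetical archived record
record = DataUseRecord(
    unit_or_program="Elementary Education",
    completed_by="Department chair",
    reports_considered=["2009 cooperating teacher survey"],
    meeting_date=date(2009, 10, 14),
    information_requests="Disaggregate results by licensure level.",
)
```

Keeping the forms in a structured store of this kind makes it straightforward to report, at accreditation time, which data sets were discussed and what changes followed.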
CHAPTER 10

WHAT'S THAT NOISE? THINGS THAT KEEP US AWAKE AT NIGHT: THE COST OF UNEXAMINED ASSUMPTIONS IN PRE-SERVICE ASSESSMENT AND ACCREDITATION

James H. Powell, Letitia Hochstrasser Fickel, Patricia Chesbro and Nancy Boxler

ABSTRACT

This chapter examines the recalcitrant effects of isolationism and the intentional efforts that are necessary to create authentic, collaborative partnerships between schools and universities, between schools and schools, and among educators. The tension between a vision of community and collaboration and the ability to enact that vision raises questions about the necessary knowledge, skills, and dispositions required to be a part of a community-based professional culture, what it means to prepare teachers to work in such a professional community, and to question the unexamined assumptions about the definition of professionalism and teacher knowledge that undergird current accreditation and accountability frameworks.
To relieve that tension, we must start demanding data that demonstrate preservice candidates' ability to work collaboratively toward more effective practice, rather than focusing so narrowly on statistics that describe what they know and have done individually within a classroom setting.

Tensions in Teacher Preparation: Accountability, Assessment, and Accreditation
Advances in Research on Teaching, Volume 12, 163–181
Copyright © 2010 by Emerald Group Publishing Limited
All rights of reproduction in any form reserved
ISSN: 1479-3687/doi:10.1108/S1479-3687(2010)0000012013
Tension is the emotion felt when what we know does not equate to what we are doing or what we hope to do. Tension creeps into the cracks created when we are forced to hold up data to current practice. It is increased with the realization that, in education, it is never clear when knowledge becomes sufficiently powerful to override beliefs and current practice. At what point do we begin to rely on data, data derived from both our own experiences and the evidence from the field, rather than traditional professional culture to make decisions about best practice? Currently, professional learning communities (PLCs), collaboration, partnership, and networking are words often used to describe the ideal relationships among educators who intend to improve student learning. Certainly, the idea of PLCs is permeating the educational literature and language of K-12 educators (Byrd & McIntyre, 1999; Dufour & Eaker, 1998; Fullan, 2008; Goodson & Hargreaves, 1996; Stoll & Louis, 2007). In fact, Gajda and Koliba (2007) refer to such collaboration among educators within schools as an ‘‘imperative.’’ Moreover, there is a growing body of evidence from the field that these professional structures have positive effects on both student learning and teachers’ attitudes and beliefs (Mitchell & Sackney, 2001). Yet, the continuing pervasiveness of isolation within classrooms, schools, and universities raises tensions between our vision of quality teaching and learning and the reality of our current experiences as education professionals. In our efforts to create the Alaska Educational Innovations Network (AEIN), we have discovered that the recalcitrant effects of isolationism have been difficult to mitigate: in the development of interdependent relationships among K-12 and university educators, in the professional practice of the partners, in organizational change, and in our own approach to our work. We have learned that intentional efforts are necessary to break down the walls of isolation to create authentic, collaborative partnerships between schools and universities, between schools and schools, and among educators. It is in this tension between our vision of community and collaboration and our ability to enact that vision that we have begun to raise questions for ourselves about the necessary knowledge, skills, and dispositions required to
be a part of a community-based professional culture, about what it means to prepare teachers to work in such a professional community, and about the unexamined assumptions regarding the definition of professionalism and teacher knowledge that undergird current accreditation and accountability frameworks. From this tension, we are now asking of ourselves and others: when do we start demanding data that demonstrate our preservice candidates' ability to work collaboratively toward more effective practice, rather than focusing so narrowly on statistics that describe what they know and have done individually within a classroom setting?
FROM INDIVIDUALISM TO COMMUNITY

Since the inception of the industrial model of schooling, teaching has been characterized as the work of individual practitioners: a practice conducted alone, away from the scrutiny of one's colleagues, and within the privacy of one's own classroom. In his seminal work, Lortie (1975) examined the work culture of school teachers and made explicit the history of school organization as one intentionally designed to isolate and insulate teachers from each other. This organizational isolation, he argued, gave rise to the workplace norms of individualism and privatism. This workplace culture has not been the only aspect of individualism that has shaped the profession. Teacher professional development has similarly been focused on the individual, framed within a personal, psychological model of learning. More than 30 years later, these conceptual constructs of individualism and privacy remain strong norms within the culture of teaching and are pervasive within schools. Yet, in the past two decades, a growing body of research has demonstrated that it is in fact these very norms that militate against educational reform and improvement efforts.

As our knowledge base related to cognition, learning, and the development of expertise has expanded over the past decades (e.g., Bransford, Brown, & Cocking, 1999; Lave & Wenger, 1991), there has been a shift away from a focus on internal, individual psychological processes toward a sociocultural understanding of learning as situated within and defined by a particular social group. As such, there has been a growing understanding of teaching as a shared practice of meaning-making, situated within and mediated by a particular social context. As McDiarmid and Clevenger-Bright (2008) point out, "much of what teachers do, they do in groups, especially creating learning opportunities through choosing, using, and evaluating curricular materials, instructional and assessment methods, and
classroom management approaches" (p. 145). This is why school reform and teaching and learning improvement efforts that focus primarily on teachers as individual practitioners have had little appreciable positive impact.

Most school reform efforts are now framed in the concept of "communities of practice" (Little, 1993; Wenger, 1998) or PLCs (e.g., Hord, 1997; Louis & Marks, 1998). Although there are various definitions within the literature, a set of commonalities characterizes such communities. They are "focused not just on individual teachers' learning but on (a) professional learning; (b) within the context of a cohesive group; (c) that focuses on collective knowledge, and (d) occurs within an ethic of interpersonal caring that permeates the life of teachers, students and school leaders" (Stoll & Louis, 2007, p. 3). "The metaphor of the learning community assumes, first, that schools are expected to facilitate the learning of all individuals, and, second, that educators are ideally positioned to address fundamental issues and concerns in relation to learning" (Mitchell et al., 2001). Lieberman (2007) extends this, stating, "we are even beginning to understand that these communities are helping to create contexts where people's capacities expand, their motivations to improve and increase their commitments to students become re-energized" (p. 200). Moreover, there is growing evidence that the presence within schools of a professional community focused on student learning makes a significant difference to student achievement outcomes (Louis & Marks, 1998; McLaughlin & Talbert, 2001; Anderson & Togneri, 2002; Bolam et al., 2005). Yet, if these are accurate assumptions, as the data suggest they are, our work has indicated that schools and universities are seldom able to break the walls of privacy to create PLCs that facilitate the learning of all individuals.

While most of the literature focuses on changes in K-12 education, the university situation mirrors that of schools. Structures around the work we do at the university support and reward privacy and individualism. In many instances, it might be argued that the university setting demands even greater adherence to valuing individual effort and progress. Faculty promotion, tenure, and merit pay decisions are often weighted more favorably toward solo work than toward work done collaboratively. Similarly, classroom practice within both the university and our K-12 partner schools emphasizes individual achievement. How often are preservice candidates expected or even allowed to work collaboratively on assignments? When content knowledge is more closely monitored than collegial practice, what must we assume about the hidden curriculum of teacher preparation?

Discussions about learning communities that meet the needs of all students are focused on working with schools so that K-12
educators may embrace the personal, interpersonal, organizational, and interorganizational capacities implicit in a learning community. There seems to be no literature that ponders the importance of developing learning communities at the university. Those most commonly left out are the future educators who need the skills and tools required to effectively participate in a PLC. While it might be expected that experienced educators, at every level, do not have the necessary skills and behaviors because PLCs were neither envisioned nor promoted during their professional preparation, it is difficult to explain why current graduates also are not gaining the knowledge and practice to become life-long members of a professional community.

One critical factor that we have come to recognize is the effect that accreditation standards, especially those promulgated by the Specialized Professional Associations (SPAs) in the National Council for Accreditation of Teacher Education (NCATE) process, have on maintaining the status quo in preservice education. Certain shifts must occur if teacher candidates are to be prepared to fully engage in learning communities, yet this accreditation context makes it difficult to focus attention on what needs to be done to prepare new teachers to expect and seek PLCs upon graduation. It is much easier to keep our attention turned to the more easily measured attributes valued by the SPAs than to the more difficult and nuanced behaviors that ensure effective participation in PLCs. However, if we are to reduce the tensions between what we have learned and what we are still doing, we must first build the case that our experiences have value.

Mitchell et al. (2001) note that three vital capacities must be developed to build learning communities: the personal, the interpersonal, and the organizational. Our work in Alaska and the Networked Learning Communities project in Britain would add a fourth essential piece: interorganizational capacity building. Jackson and Temperley (2007) suggest three ways these interorganizational connections enhance professional learning:

1. In a network of schools, the strength of some schools' internal learning culture enables other schools to learn from that through network activity.
2. A schools' own professional learning culture is enhanced by "networked learning." In other words, schools learn to collaborate more effectively internally by collaborating externally. The benefits are recursive.
3. Permeability to learning from the external knowledge base (theory, research, and the practice of other schools) is necessary to avoid stagnation and constant recycling of a schools' existing knowledge base. (p. 53)
As with much human endeavor, we have noticed that expanding capacity is more organic than mechanical, more iterative than linear. All four capacities interconnect to create a whole greater than the sum of its parts, and it is difficult, yet necessary, to address all of them simultaneously. Hence, our networked learning experience is not easily separated into discrete emphases that address the four capacities. And none of these capacities is evident in the data collected for any of the SPA accreditation reports. Simply because something is organic and iterative does not make it valueless; it only requires new tools with which to measure it.
THE CONTEXT OF ALASKA AND THE UNIVERSITY OF ALASKA ANCHORAGE

Located in Southcentral Alaska at the University of Alaska Anchorage, the College of Education (COE) draws upon the resources of a culturally rich and diverse community in preparing students for meaningful and challenging careers in education. Alaska, large enough to hold the eastern seaboard comfortably within its borders, consists of largely undeveloped lands. Anchorage is Alaska's largest city, with a population of 260,285. While half of the total population of Alaska resides in Anchorage, approximately 23% of the state's Alaska Natives also live there, making it the fourth-largest Alaska Native/American Indian community in the United States. Located upon traditional Athabascan tribal lands, Anchorage is now home to urban Aleut, Inupiat, Alutiiq, Yup'ik, Eyak, and Tlingit/Haida/Tsimpsian people and is a community that celebrates and embraces its diverse Alaska Native culture. The remainder of Alaska's Native population lives in Fairbanks, Juneau, or among the approximately 300 rural communities scattered across the state.

The COE is committed to developing quality educators, individuals accomplished and confident in their capacity to inspire and inform their students. We see this as vital to addressing student needs within our community and our state, as well as to creating a culture of scholarship within the COE. In partnership with the Anchorage School District (ASD) K-12 schools, the COE is able to provide settings for student teaching, faculty development, and field-based research. Additionally, the ASD Summer Academy, a professional learning event organized by ASD's Training and Professional Development Department in close collaboration with the COE Office of Professional and Continuing Education, serves nearly 1,000 teachers annually. Our academic programs are accredited by the National
Council for Accreditation of Teacher Education. An Institutional Recommendation (IR) for teacher certification from UAA COE is accepted throughout the United States. Our accredited program helps students become reflective practitioners across a broad array of disciplines. The Department has 17 full-time instructional faculty and offers the following degrees and majors:
Associate of Applied Science in Early Childhood Education
Bachelor of Arts in Early Childhood (P-3)
Bachelor of Arts in Elementary Education (K-6)
Certificate in Early Childhood Education
Graduate Certificate in Language Education
Master of Arts in Teaching (secondary, 7-12, K-12)
Occupational Endorsement Certificate in School-Age Care Administration
Occupational Endorsement Certificate in School-Age Care Practitioner
Post-Baccalaureate Certificate in Early Childhood Education
Post-Baccalaureate Certificate in Elementary Education
THE ALASKA EDUCATIONAL INNOVATIONS NETWORK
‘‘There is overwhelming agreement that professional learning, though not a magic bullet, is directly and persistently linked to educational improvement and school development’’ (Mitchell & Sackney, 2001). As our US Department of Education grant is focused on enhancing teacher quality, the school–university partnership is founded on this premise. Our vehicle for the partnership is the Alaska Educational Innovations Network (AEIN). Certainly, for our network, one challenge is bridging the vast distances in Alaska. Although teachers’ perception of being isolated is present within most schools, this feeling is exacerbated in small, rural communities accessible only by plane, boat, or all-terrain vehicle (ATV). Within the AEIN structure, we could not rely on proximity to create collaborative relationships but have had to rely on a mixture of in-person sessions, synchronous electronic spaces, and asynchronous tools to connect people and ideas. We have been moved out of our own comfort zones to become facilitators, evaluators, coaches, and brokers as we work to break the isolation of educators. We move forward because we know that such partnerships will ultimately help all participants recognize they are vital components in enhancing professional learning throughout the network. We
believe, and the data support, that this is the type of growth at all levels that will improve the learning of our students, from preschool through preservice teacher candidates. Nevertheless, our path from isolation to community has been one of blind alleys, wrong turns, and potholes wide and deep. Although we were committed to the National Staff Development Council Standards that call for ongoing, job-embedded professional learning, in the early relationship between grant staff and schools we inadvertently designed a series of professional development events that resembled a professor–student culture rather than the fluid, transparent, expertise-driven network (Anklam, 2007) we had envisioned. We assembled leadership teams from schools, facilitated activities intended to build relationships among partners, talked about the qualities of learning communities, and instructed our colleagues in data literacy. Individuals reported that they enjoyed the events and that they gained knowledge and skill; yet, when the teams returned months later, little had changed. Learning was not translated into action. Our early efforts did not intentionally strengthen the personal, interpersonal, organizational, or interorganizational capacities of the schools and had even less effect on our preservice teaching program. Often, the development of personal capacity became a cause of further isolation for our partners within their own schools. They returned to their sites with new knowledge and great hope, only to be met with weak interpersonal relationships and organizational roadblocks. Furthermore, the separation of schools from each other and from their higher education colleagues remained. For this school–university partnership grant, interorganizational capacity building became the obvious entry point for developing the learning community. Efforts to enhance personal, interpersonal, and organizational capacity grew from the idea of networked learning. Yet, simply deciding to build interorganizational capacity, in concert with the other essential foci, required a thoughtful change in our approach. Several issues had emerged in both schools and higher education around each of the capacities:
Personal:
the reluctance at every level to examine professional practice
the ‘‘niceness’’ factor
personal efficacy: characterized by isolated control or collaboration?
perception of role or disposition (be the expert, teacher, trainer, etc.)
Interpersonal:
mistrust
ineffective systems for professional learning
weak collaboration skills
the cultural norms of privacy and isolation that pervade educational institutions
Organizational:
the apparent lack of tools and processes to address organizational improvement
the ineffective use of data to inform planning and evaluate progress
the potential for ‘‘groupthink’’
power relationships
Interorganizational:
competitive or inward focus
the reliance on outside experts rather than local practitioner wisdom
the implicit hierarchy between schools and universities
working from a stance of avoidance and dependency rather than a partnership that is mutually beneficial
Throughout the process, each time objectives were enacted into practice, sustained discussion occurred about the role of university and school faculty and administration in addressing these issues. One of the more persistent issues to be dealt with was described by Miller and Hafner (2008). In an initiative to create a university–school–community partnership, the university facilitators made strong efforts to invite participants from all groups. However, power inequities continued to exist. The researchers note, ‘‘it is the responsibility of leadership to create collaborative conditions in which mutual participation is maximized’’ (p. 105). We decided to maximize mutual participation by dispelling the notion that educators from the university are experts. We adopted a stance of ‘‘radical collegiality’’: connectedness-based, focused on respect for professional expertise, committed to similar goals, and including a disposition to support and cooperate with colleagues (Fielding, 1999). To change the implicit hierarchical relationship between schools and university, we were forced to reconceptualize our roles. We moved from behaving as outside experts or consultants to becoming colleagues, brokers, and facilitators. We followed Freire’s admonition (as cited in Miller & Hafner, 2008) that trust is established by aligning intentions with actions. The network welcomed all voices and honored the wisdom of practice. Events were organized to create spaces for personal and interpersonal development. We gave up controlling the agendas, instead giving trust to our colleagues. We talked less and listened
more, adopting an appreciative inquiry stance to focus on success rather than perceived or prescribed failure. Our work focused on probing the wisdom of our colleagues, thereby honoring the diverse perspectives and cultural traditions we brought to the dialogue. Specifically, we used a culturally responsive lens to frame our interactions (Wlodkowski & Ginsberg, 1995). The focus on inclusion, developing shared meaning, fostering positive attitudes, and engendering competence has further strengthened interorganizational capacity as well as interpersonal ties. To address organizational capacity, AEIN staff, university faculty, and partner schools and networks adopted a logic model process (adapted from Killion, 2008) to use as a planning tool to pursue site-based initiatives. Developed and guided by ongoing data collection and analysis, the logic models are organized around issues that partners care about. The models hypothesize the actions that will enhance student learning, and partners then undertake those actions in a logical sequence. Within the process, school leaders facilitate a design that moves from the development of knowledge, attitudes, and skills to the organizational scaffolds and structures that will help practitioners move from knowing to doing. Many sites are using peer observation and feedback to support the changes in practice that the learning requires. Therefore, this process strengthens personal and organizational capacity. Their network colleagues serve as critical friends through the use of probing questions at face-to-face events as well as in electronic communication. To facilitate these processes, teachers, principals, and grant staff needed to learn how to create positive collaborative cultures. Site-based leaders practiced ways to invite their colleagues into substantive conversations around logic models. Once shared meaning and focus were established, educators engaged in joint work to engender competence and address the perceived needs. Regular meetings, coursework, use of professional development days, and released time were used to strengthen collaboration. One school scheduled interclassroom visits, providing feedback through conversation or simply a videotaped copy of the lesson. Doors were opening; trust was strengthening. Examination of personal practice is the heart of educational change. For us, the words we have chosen to describe ourselves in our evaluation tools demonstrate a shift in our practice. Initially, we called ourselves ‘‘trainers.’’ That soon shifted to ‘‘presenters,’’ then ‘‘facilitators.’’ The focus has shifted from what we do to assessing the learning of the participants. As Easton (2008) points out, using the word train or even develop implies that something is done to participants, while learning engages the learner. Furthermore, she notes that event-based development does not allow the time, space, or opportunity to apply the knowledge and skills. To actually use knowledge
and skills, educators need to practice and receive feedback. These ‘‘personal’’ capacities are necessary, along with the disposition to engage in true dialogue around one’s work. In many ways, the development of reflective practice has been the most challenging portion of our work together because of the many issues related to organizational and interpersonal capacity. ‘‘Personal capacity is an amalgam of all the embedded values, assumptions, beliefs, and practical knowledge that teachers carry with them … Building personal capacity entails a confrontation with these explicit and implicit structures …’’ (Mitchell & Sackney, 2001, p. 3). Interestingly, the capacity of interorganizational collaboration appears helpful in this personal development. Network theory suggests that weak ties, those outside of our regular work network, provide greater potential for interrupting thinking than the strong ties we develop with those with whom we regularly interact (Mitchell & Sackney, 2001). Katz, Earl, and Jaafar (2009) make an important distinction. They note that friends, those with whom we have strong ties, tend to be tolerant of our shortcomings and are often more focused on providing positive commentary than frank feedback. Critical friends, however, can observe what may not be apparent to insiders and offer support and critique. One way AEIN partners provide feedback to each other is to use probing questions to provoke thinking. These probing questions also serve to diminish the implicit power inequity between university and school partners while inviting the reflection and conversation that has the power to genuinely affect practice. So what does this mean? What have we learned, and what are the implications for our work as a COE? What would happen if our newly certificated teachers entered schools with the skills and dispositions necessary for active membership in learning communities? Could any of our experience be used to reduce the tensions between what we know and have experienced and how we are implementing our teacher preparation program? We are back to our initial questions about the conditions under which data use overcomes culture. Unless we can translate what we have accomplished within the university–school partnerships into the teacher preparation program, the next generation of teachers will be no better prepared to form effective PLCs than the last.
SPA REPORTS
The State of Alaska is geographically immense but has a population of a little more than 600,000 residents. To be certified to recommend
candidates for licensure, all educational preparation programs are required by the Alaska Department of Education and Early Development (EED) to be NCATE Unit Accredited and to submit SPA reports. EED perceives a number of positive benefits from this requirement. First, the SPAs provide highly qualified reviewers to evaluate program effectiveness in meeting the SPA standards for the preparation of teacher candidates, a task that the State has neither the personnel nor the resources to provide. In addition, the four teacher preparation institutions in the state can be assured that each of their programs is receiving the same level of review. However, not every program has a SPA to which it can file a report. Those programs must submit a report to the state that addresses the general requirements of the SPAs. Programs that are too new or have too few graduates also are not required to file a report with the SPA but must address the SPA requirements in a report to the state. This requirement generated a number of positive benefits for our programs as we progressed through the most recent accreditation process. Most importantly, it started significant conversations about the criteria we held for student success and whether our assessments were adequately measuring them. The process involved all three departments within the college. It created space within which we were able to tie the mission, goals, and core values to the critical attributes we sought in our graduates. However, as we were compiling and analyzing data, a number of us who had helped to develop and had participated in the AEIN-created PLCs began to ponder what seemed to be missing. As noted earlier, one basic tenet for each of the partnership units was that while data are critical for success, it is the questions the data address that are the key for improvement. So we began to ask ourselves a new set of questions based on what we have learned through the PLCs.
How do we make practice public, and is there value in doing so?
What are we doing to create programs that model and initiate collegial collaboration and PLCs for preservice teachers?
What are we doing to address all of the developmental stages of learning to teach?
How are we supporting a P-20 model of the PLC and collegial collaboration?
Each of these questions is built on an assumption that the PLC has the potential to radically change the way we view ourselves and interact as
professionals within our educational community. However, it must also be noted that each SPA has made several assumptions about what constitutes effective teaching and learning as well as what attributes a highly qualified teacher of that content must possess. We feel that it is critical to unpack both sets of assumptions before we can begin to determine the optimum means for preparing educators to become reflective practitioners who learn from their experience. For the SPAs, one of the first and foremost assumptions is the primacy of content knowledge in effective teaching. There are a number of ways in which programs determine whether a candidate has enough content knowledge to be able to effectively teach the subject. The evaluation of this content knowledge is done through examination of coursework and grade point average (GPA) as well as specialized exams for some content areas, such as the Oral Proficiency Interview required by the American Council on the Teaching of Foreign Languages. In math and social studies, teacher preparation programs have to verify specific types of content courses, such as the history of math, and requisite hours in the social studies disciplines. In addition, each SPA is interested in how candidates are able to plan and present this content knowledge effectively to students, how they then assess student learning, and most importantly how they are able to demonstrate that students did master the content under their instruction. SPAs also ask institutions to describe the content qualifications of those who monitor participants during their teaching residencies. Finally, programs are also asked to detail how, throughout their coursework and fieldwork, the candidates have fulfilled their roles as bridges to parents and communities and how they have demonstrated a professional attitude and been actively engaged in professional growth. All of these are important data points for determining how well a candidate has been prepared to address the needs of his or her students and their parents and communities. However, the data form just a snapshot in time and say nothing about how the candidates will support their growth throughout a professional career or how they will contribute to the PLCs to which they belong. What have we put into the program to help these future teachers develop and practice the personal, interpersonal, organizational, and interorganizational capacities they will need if they are to operate within a PLC that strengthens and expands their view of themselves as professionals? If, as we have demonstrated, building these capacities within experienced teachers is so powerful, how could we not also provide them for our preservice educators?
SPA STANDARDS THAT ADDRESS COLLABORATION
Table 1 demonstrates how a number of the SPAs that represent our various programs choose to address the issue of collaboration in their standards. Our experiences have led us to believe that if we can build teacher preparation programs to enhance these capacities, we will be better prepared to address how our preservice and in-service teachers are able to demonstrate their implementation of effective teaching and learning practices. We have come to realize that the only way we will be able to reduce the tensions our questions have raised is by demonstrating how we can use the knowledge gained from our PLCs to build these required capacities within each candidate.
Table 1. SPA Standards.
Specialized Professional Organization | Standard Addressing Collaboration
Association for Childhood Education International | Professionalism: Professional growth, reflection, and evaluation. Candidates are aware of and reflect on their practice in light of research on teaching, professional ethics, and resources available for professional learning.
National Council of Teachers of English | 3.0 ELA Content Knowledge: 3.7.1 Reflect on their own teaching performance in light of research on, and theories of, how students compose and respond to text and make adjustments to their teaching as appropriate. 4.0 ELA Candidate Pedagogy: 4.3 Work with teachers in other content areas to help students connect important ideas, concepts, and skills within ELA with similar ones in other disciplines.
National Council of Teachers of Mathematics | None
National Council for the Social Studies | 9. Professional Leadership: Social studies teachers should possess the knowledge, capabilities, and dispositions to foster cross-subject matter collaboration and other positive relationships with school colleagues, and positive associations with parents and others in the larger community to support student learning and well-being.
Teachers of English to Speakers of Other Languages | 5.c. Professional Development and Collaboration: Candidates collaborate with and are prepared to serve as a resource to all staff, including paraprofessionals, to improve learning for all ESOL students.
OUR QUESTIONS
The tensions arose from our examination of current practice through the lens of the SPA standards as compared to our experiences. The different assumptions about teaching and learning generated questions about making practice public, creating programs that model and initiate collegial collaboration, and addressing developmental stages of teaching. Each of these issues is critical to understanding and improving our practice. All three are essential as we seek to design, implement, and support a P-20 model of the PLC and collegial collaboration. It appears that the traditional SPA assumptions and the assessment programs we have created to measure them do little to overcome the privacy of practice that has dominated education. Little (1982) pointed this out nearly three decades ago; yet, the inherent problems associated with this model of isolation have been noted consistently since that time. As we developed our SPA reports, we were left to ask ourselves whether the assessments and subsequent data collection had any impact on reducing this culture of individualized and isolated practice. What were we doing to build a climate in which our candidates could feel secure enough to begin to address their perceptions of their personal efficacy? Was there anything in the program that would demonstrate our belief that educators are not simply trained but need to become reflective practitioners who rely on colleagues for professional growth and guidance? Which assessments measured how effectively we were addressing the ‘‘niceness’’ factor that promoted stories about teaching experiences rather than generating probing questions designed to aid reflection about teaching events? Equally importantly, what did our clinical placements demonstrate about our beliefs about learning communities? The current review served to highlight what we had seen in a previous program. During an earlier grant, the program invested the resources necessary to build a cadre of mentors who had the space and time to tackle these issues. Located at a smaller number of sites, these educators formed teams with the university faculty and began to address the issues of privacy and efficacy and, more importantly, how to move what they were learning into the teacher preparation program. Those mentors involved at that time were able to demonstrate what it meant to be a reflective practitioner who was open and willing to question his or her practice. Our preservice candidates observed
experienced teachers open themselves up to probing questions from their peers. They witnessed the possibilities for growth when teachers trusted their colleagues enough to open their doors and seek out collaborative experiences. They saw their mentors act as facilitators and guides and were able to practice these roles within safe environments. These practices were being enacted, not just advocated. The candidates could ask probing questions of each other. The work done by everyone involved in supporting our teacher candidates was made richer and more contextual because it was done collaboratively as they co-constructed the program. While our current program meets SPA standards, it is no longer part of an interorganizational effort and does not fully represent, or expect our candidates to participate in, schools that have a professional culture that values learning communities. Our learning community and the interorganizational supports were essential to helping candidates be prepared to be part of learning communities. We have discovered, without much surprise, that the four capacities are interrelated and that you cannot take one away without seriously impacting the others. As we look at the program now, we have excellent mentors and clinical faculty working hard to ensure that each preservice candidate is able to observe effective practice and is provided a safe, secure environment in which to begin his or her practice. However, it is one mentor and one room. Our candidates tend to see themselves as sole practitioners because that is what they have seen and experienced throughout their education, and the program is not able to offer an alternative vision. Our current program still requires students to complete 30 days of residency in which they are teaching alone in the room. The opportunities to observe and participate in groups engaged in collegial practice are limited and no longer the norm. Our candidates once again tend to see the university and the schools as separate institutions that are connected only through their involvement in their classroom. In looking at the power of the PLCs, we are better able to see what has been lost from the earlier program. Like Mitchell and Sackney (2001), we have come to understand that a specific type of communication that blends advocacy and inquiry is essential. The distinctions and overlaps between advocacy of a teacher’s own practice and inquiry into a colleague’s opinion are required to build a collaborative team. This structure is characterized by questions as often as by statements. If this is to be truly effective, then it needs to be seen by our candidates in our classrooms, among the university faculty, as well as at their residency sites. If we expect the schools to break down the barriers of privacy, what are we doing within the college to do the same? And how are we working in partnership to support each other in this difficult task?
ADDRESSING THE DEVELOPMENTAL ASPECTS OF LEARNING TO TEACH
Another tension we have felt comes from the day-to-day accreditation compliance issues that keep us from devoting the time it takes to participate in and cultivate learning communities and networks. In a program as small as ours, the SPA reports tend to be the work of individuals. This keeps us from putting the effort toward building a learning community. Accreditation, which has the potential to be a catalyst for change, keeps us from devoting the time necessary to work and learn together. As Mitchell and Sackney (2001) pointed out, the first phase of shared understanding and new practice is ‘‘naming and framing,’’ in which individuals are afforded the time and space to define the context and working parameters of the issue at hand. Having experienced the ability of faculty from schools and universities to form collaborative, collegial partnerships that honor and rely on both theory and local practitioner wisdom, we are left wondering how to recreate those structures within the existing constraints of time and resources. The wonder is not that we feel the tension being created by this dilemma, but how, with what we have learned, we can utilize that tension to leverage the SPA standards to foster environments where our preservice students can begin their careers focused on collaboration and PLCs. When time and resources spent on reducing privacy of practice are not a clear part of the SPA measurements, can we ever expect to overcome this issue? And, more importantly, can we shift from compliance to proactive constructs? Our questions about teacher preparation have arisen through our work in trying to engage in-service teachers in the opening and sharing of their practice. The successes we have enjoyed in the PLCs are due to the three conditions Fullan (2008) identified as being necessary for ‘‘positive purposeful peer interactions’’ (p. 45). First, there must be agreement on the larger values of the organization, individuals, and groups. Second, information and knowledge about what works must be transparent and shared among all participants. Finally, systems must build in the means ‘‘to detect and address ineffective actions while also identifying and consolidating effective practices’’ (p. 45). We can identify each of these three in the positive growth of the PLCs; however, the tension we have felt arises when we ask ourselves whether we are assessing for any of these three in our current program. Because we believe the PLCs to be the best model for professional development, the only way to reduce that tension is to build a program that models those practices we have come to respect and honor.
REFERENCES
Anderson, S., & Togneri, W. (2002). Beyond islands of excellence: What districts can do to improve instruction and achievement in schools. Washington, DC: Learning First Alliance.
Anklam, P. (2007). Net work: A practical guide to creating and sustaining networks at work and in the world. Burlington, MA: Elsevier.
Bolam, R., McMahon, A., Stoll, L., Thomas, S., Wallace, M., Greenwood, A., Hawkey, K., Ingram, M., Atkinson, A., & Smith, M. (2005). Creating and sustaining effective professional learning communities. DfES Research Report RR637. University of Bristol, Bristol, England. Available at www.dcsf.gov.uk/research/data/uploadfiles/RR637.pdf. Accessed on December 30, 2009.
Bransford, J. D., Brown, A. L., & Cocking, R. R. (1999). How people learn: Brain, mind, experience, and school. Washington, DC: National Academy Press.
Byrd, D. M., & McIntyre, D. J. (1999). Research on professional development schools: Teacher Education Yearbook VII. Thousand Oaks, CA: Corwin.
DuFour, R., & Eaker, R. (1998). Professional learning communities at work: Best practices for enhancing student achievement. Bloomington, IN: Association for Supervision and Curriculum Development.
Easton, L. B. (2008). From professional development to professional learning. Phi Delta Kappan, 89(10), 755–761.
Fielding, M. (1999). Radical collegiality: Affirming teaching as an inclusive professional practice. Australian Educational Researcher, 26(2), 1–34.
Fullan, M. (2008). The six secrets of change: What the best leaders do to help their organizations survive and thrive. San Francisco: Jossey-Bass.
Gajda, R., & Koliba, C. (2007). Evaluating the imperative of intraorganizational collaboration: A school improvement perspective. American Journal of Evaluation, 28(1), 26–44.
Goodson, I. F., & Hargreaves, A. (1996). Teachers’ professional lives. Washington, DC: Falmer.
Hord, S. M. (1997). Professional learning communities: What are they and why are they important? Issues About Change, 6(1), 1–8. Austin, TX: Southwest Educational Development Laboratory.
Jackson, D., & Temperley, J. (2007). From professional learning community to networked learning community. In: L. Stoll & K. S. Louis (Eds), Professional learning communities: Divergence, depth and dilemmas (pp. 45–62). Berkshire, England: Open University Press.
Katz, S., Earl, L. M., & Jaafar, S. B. (2009). Building & connecting learning communities: The power of networks for school improvement. Thousand Oaks, CA: Corwin.
Killion, J. (2008). Assessing impact: Evaluating staff development. Thousand Oaks, CA: Corwin.
Lave, J., & Wenger, E. (1991). Situated learning: Legitimate peripheral participation. Cambridge, UK: Cambridge University Press.
Lieberman, A. (2007). Professional learning communities: A reflection. In: L. Stoll & K. S. Louis (Eds), Professional learning communities: Divergence, depth and dilemmas (pp. 199–204). Berkshire, England: Open University Press.
Little, J. W. (1982). Norms of collegiality and experimentation: Workplace conditions of school success. American Educational Research Journal, 19, 325–340.
Little, J. W. (1993). Teachers’ professional development in a climate of educational reform. Educational Evaluation and Policy Analysis, 15(2), 129–151.
Lortie, D. C. (1975). Schoolteacher: A sociological study. Chicago: University of Chicago Press.
Louis, K. S., & Marks, H. (1998). Does professional community affect the classroom? Teachers’ work and student work in restructuring schools. American Journal of Education, 106(4), 532–575.
McDiarmid, G. W., & Clevenger-Bright, M. (2008). Rethinking teacher capacity. In: M. Cochran-Smith, S. Feiman-Nemser, D. J. McIntyre & K. E. Demers (Eds), Handbook of research on teacher education: Enduring questions in changing contexts (3rd ed., pp. 134–156). New York: Routledge/Taylor & Francis Group and The Association of Teacher Educators.
McLaughlin, M., & Talbert, J. (2001). Professional communities and the work of high school teaching. Chicago, IL: University of Chicago Press.
Miller, P. M., & Hafner, M. M. (2008). Moving toward dialogical collaboration: A critical examination of a university-school-community partnership. Educational Administration Quarterly, 44(1), 66–110.
Mitchell, C., & Sackney, L. (2001). Building capacity for a learning community. Canadian Journal of Educational Administration and Policy, 19, 1–10.
Stoll, L., & Louis, K. S. (2007). Professional learning communities: Divergence, depth and dilemmas. Berkshire, England: Open University Press.
Wenger, E. (1998). Communities of practice: Learning, meaning, and identity. Cambridge, UK: Cambridge University Press.
Wlodkowski, R. J., & Ginsberg, M. B. (1995). A framework for culturally responsive teaching. Educational Leadership, 53(1), 17–22.
CHAPTER 11
REVISITING SELF IN THE MIDST OF NCATE AND OTHER ACCOUNTABILITY DEMANDS*,**
Cheryl J. Craig
* This chapter is based on the personal experiences of the author and does not necessarily represent the views or opinions of the institution, individuals, or agencies identified in this work. This chapter was written for the purpose of understanding the impact of accountability on the author’s teacher education practices and was not meant to be critical of others’ practices, roles, and behaviors within my institution.
** A version of this chapter first appeared in Studying Teacher Education.
Tensions in Teacher Preparation: Accountability, Assessment, and Accreditation
Advances in Research on Teaching, Volume 12, 183–198
Copyright © 2010 by Emerald Group Publishing Limited
All rights of reproduction in any form reserved
ISSN: 1479-3687/doi:10.1108/S1479-3687(2010)0000012014
ABSTRACT
Through the use of narrative inquiry, this self-study focuses on my teacher education practices. A flashback in time probes the influence of four simultaneous accountability reviews – a national accreditation review, a regional accreditation review, a university system review, and a local campus review – on my personal experiences and identity within academia. The recollection provides a public view of private practice, explores the hidden curriculum of accountability, reveals cover stories personally and collectively lived, and illuminates how my knowledge of accountability became heightened. Through drawing on multiple forms of
evidence, I reconstruct a series of changes I lived that had to do with human subjects’ reviews, course syllabi requirements, student assignments, grading procedures, and personal productivity. The self-inquiry lays bare individual and institutional compromises that were made to achieve acceptable measures of success as determined by external agencies. Reflections on what occurred in the aftermath of the reviews are also included. Most of all, hard lessons learned amid multiple accountability agendas are brought forward for discussion and analysis, not only by me, but by the national and international teacher education communities whose memberships face similar demands for performativity, the European equivalent of accountability. The accumulation of self-studies such as mine will help to show the incipient nature of the accountability phenomenon and its pernicious impact on teacher educators’ work and personal images of teaching. Research such as this will demonstrate how desperately productive change is needed in the fields of teaching and teacher education.
At a recent faculty meeting, the topic of NCATE (the National Council for Accreditation of Teacher Education) came up. A decision will soon be made whether or not our campus will participate. I thought the faculty would respond with a decisive ‘‘no,’’ especially since the state’s two Tier 1 universities have opted out of NCATE, as has a sister campus in our system. That was not the case. For the most part, the query was met with silence. But then I pondered the matter further. I realized that the majority of those attending the meeting were either visiting/adjunct professors or non-tenured faculty. They did not have an accountability experience similar to my own the last time around. Moreover, none of them were in a position to say no, even though they would most certainly be required to do the bulk of the work. The quirk of fate is this: We do not have many tenured professors left. We lost key members in the aftermath of the last set of accountability reviews, and they and several others have not been replaced … The truth of the matter is that our department has not recovered since then, although other departments in our college have had their tenure-track positions filled … (Journal Entry, November 2009)
This snippet of conversation from a recent faculty meeting triggered a flashback in time for me. It caused me to reach back to my previous accountability experience and reflect on how we came to be in our current position. I recount those experiences in the self-study that follows: The press for accountability is driving me wild … I see it not only paralyzing others but seizing me as well. For the teachers in my research, it is weekly submissions of lesson plans, benchmark tests, achievement exams, practice testing, attendance reports,
telephone logs, site visits, quality reviews, training sessions, and meetings, meetings, meetings … For me, it is human subjects renewals, compliance documents, multiple versions of course outlines, salary review vita, personal productivity portfolio, meeting agendas, official form submissions, and meetings, meetings, meetings … My evidence, which initially filled a binder, now occupies a shelf, not to mention the artifacts and reflections saved online … (Journal Entry, January 2007)
This journal entry focused on my mounting sentiments toward the four simultaneous accountability reviews in which I, as a faculty member at my institution, found myself involved, in addition to other compliance procedures that the local teachers and I routinely completed. Before this, I had conducted research concerning the influence of accountability – or performativity (Ball, 2003; Kelchtermans, 2007) as it is called in Europe – on teachers’ knowledge developments (Craig, 2006a; Deretchin & Craig, 2006), a phenomenon whose roots trace to the mechanistic view of change underlying the technical rationalist (Apple, 2008; J. Olson, 2002) philosophy that pervades Western universities and whose impact has escalated as global competitiveness has increased. And before that, I had worked at a private institution exempt from public policy dictates, which graduated approximately two teacher candidates annually. The latter experience, combined with my Canadian cultivation, my relative newcomer status to Texas, and the historical gap between what happens in academia and schools (Levine, 2006), had caused me to mistakenly think that the expectations placed on teachers in the public schools would not be required – at least to the same extent – of those of us employed in universities. I had not thought that the increased contestation of the classroom space (Craig, 2009a) that I witnessed on one public school campus after another would extend its reach to the academy and place boundaries on what I could know and do in my teacher education practices. I had not imagined that I, too, would feel captive like ‘‘a butterfly under a pin’’ (Craig, 2009b) as I and those around me met face-to-face with the ‘‘[accountability] dragon in the [College of Education’s] backyard’’ (Craig, 2004), as a principal in a previous narrative inquiry metaphorically put it. In this chapter, I use the autobiographical and relational aspects of narrative inquiry to examine my personal experiences in the throes of a national accreditation of teacher education programs, a regional accreditation of universities and colleges, a university system review, and a local college review. I particularly illuminate difficult lessons I learned as a byproduct of my cumulative accountability experiences. My purpose in engaging in this self-study is to (a) examine my personal experiences in a public way, (b) explore the hidden curriculum (Jackson, 1968) of accountability practices, (c) unpack compromises I personally
made, (d) investigate cover stories lived and told by myself and others, and (e) illustrate how my understanding of the influence of accountability practices deepened as a consequence of this self-study. The self-study genre of inquiry is particularly supported by Schwab who believed educators needed to develop a kind of ‘‘self-consciousness’’ about changes we experience to ‘‘give teaching such a cast that it could stand alongside graduate studies and the emerging scientifically-based professional education [in importance]’’ (Westbury & Wilkof, 1978, pp. 6–9). Only through pursuing such an approach can humanistic education (Allender, 2001; Allender & Allender, 2008) and sustained educational improvement be achieved (Schwab, 1969/1978).
METHOD
Narrative inquiry, the research approach I used to explicate how multiple accountability demands were experienced by me in my higher education setting, is best understood as a personal experience method (Clandinin & Connelly, 2000, 2004) where story serves as both method and form. As a research methodology, narrative inquiry has been successfully employed in multiple settings where the excavation of teacher knowledge has been the focus of study. Narrative inquiry’s autobiographical and relational qualities – just two of its many strengths – make it particularly well-suited to self-studies focused on professors’ knowledge in context, which extends the idea of teacher knowledge development into academia and shares such knowledge with the teacher education community. Deriving support from such diverse disciplines as philosophy (Johnson, 1989), anthropology (Geertz, 1973), organizational learning (Czarniawska, 1997), and women’s studies (Gilligan, Lyons, & Hanmer, 1990), the research methodology involves my thinking about, representing narratively, and contextualizing my experiences of the accountability reviews through the use of personal journal entries, excerpts from participant observation notes, work samples, and historical documents produced by others and myself. In the narrative inquiry tradition of Connelly and Clandinin (1990), these field texts allow me to ‘‘burrow’’ into my experiences of increased accountability, to ‘‘broaden’’ what happened at my institution through laying it alongside what was occurring elsewhere on the educational landscape, and to ‘‘story and restory’’ my accountability experiences nested within that broader backdrop. I additionally use a fourth interpretative device, ‘‘fictionalization’’ (Clandinin et al., 2006), to shift events in ways that do not alter their meaning
but subtly disguise identities and situations in an institutional context where personal and professional lives continue to play out. Through this approach, distinguishing features of self-study – context, process, and relationships – become apparent (Bullough & Pinnegar, 2001). Concurrently, self-study as ‘‘the study of one’s self, one’s actions, one’s ideas, as well as ‘not self ’’’ (Hamilton & Pinnegar, 1998, p. 236) is practiced, and a narrative exemplar like those published by Kitchen (2005) and Craig (2006b) is created around a topic that Manke (2004) has previously addressed from a leadership perspective, Kosnik (2005) from a Canadian standpoint, and Kornfeld, Grady, Marker, and Rudell (2007) from a state (California) accreditation point of view. Like other narrative exemplars, the one in this chapter describes intentional reflective action, is socially and contextually situated, interrogates aspects of teaching and learning, implicates identities (in my case, my own), and produces knowledge (Lyons & LaBoskey, 2002, pp. 21–22). In narrative inquiry fashion, I now present my self-study research texts in the form of a story of experience (Connelly & Clandinin, 1990). The topics woven throughout my narrative – things such as human subjects procedures, course syllabi preparation, work sample creation, and the like – follow time-event correspondence and are historically correct (fictionalization aside). At the same time, the analysis of those time-ordered events is imbued with my personal sense of narrative truth (Spence, 1982). This means others in my higher education context could vehemently disagree or heartily agree with what I share here.
STORY OF EXPERIENCE
Strangely enough, an internal change in the human subjects’ procedures at my institution of higher learning sharply drew my attention to the heightened accountability requirements at my university and launched the story of experience that forms the core of this self-study research enterprise. While qualitative research has historically encountered challenges moving through human subjects reviews (e.g., Janesick, 1998), annual renewals of studies typically have been unproblematic. However, the human subjects review questions at my place of work changed, seemingly without notification, and I found myself documenting research participation not only for the past year of my four studies but also for all previous years when the requirements were not in effect. It did not help that tales were simultaneously told of professors having had their data confiscated. Because I also conduct field-based narrative inquiries with multiple research
assistants in a number of school sites in an urban center where teachers and students are highly mobile, providing a detailed account of years of research contact is no single afternoon’s work. I used every available second for four weeks to meticulously prepare my reports. In the process, I realized my research was embroiled in the university system accountability review and that it would be especially risky for me to continue to conduct research involving children. Hence, I concluded one project two years ahead of time. Sadly, it was not how the particular narrative inquiry was unfolding that led me to terminate the investigation. Neither did the gnawing tension emanate from a research violation in one of my school sites, nor was it due to the reams of paperwork I routinely prepared. Rather, it was my increased sense of feeling vulnerable to criteria that others could change on a whim. Underneath it all, I knew that ultimately a question would be asked for which I could not supply a swift and certain answer. Doubt in my competence as a professional was creeping in, although I did not recognize and name it as such until much later. For the time being, the cover story (Crites, 1979; Clandinin & Connelly, 1995; Olson & Craig, 2005) I lived and told remained intact – unchallenged by others, undisturbed by me. My creation of multiple versions of course syllabi was the second major tour de force that inducted me into the era of increased accountability at my institution. My course outlines were no longer simply distinguished by topics, timelines, due dates, texts, and examples of how my grades would be apportioned but were now also required to include bibliographies of supplementary and suggested readings, an outline for each assignment, the rubrics with which each assignment would be judged, and a matrix outlining the objectives and anticipated outcomes of learning activities, along with the percentage of students expected to fail or succeed. But the demands did not stop there. My particular university had its own set of requirements, as did the system of campuses to which it belonged. This meant course outlines needed to be branded with the institutional logo, and common dates, deadlines, and accommodations now needed to appear in syllabi. Furthermore, the National Council for Accreditation of Teacher Education expected to see its own vernacular in curriculum offerings. For example, a written statement of how my college’s mission fit with each of the courses I taught as well as with national standards was to be included. Also, teaching dispositions – a contentious topic in its own right (e.g., Berliner, 2005; Clarke, 2001; Korthagen, 2004) – needed to be presented as ‘‘rhetoric of conclusions’’ (Schwab, 1962) that denied differences in identification and interpretation existing in the field. Meanwhile, the Southern Association of Colleges and Schools review process expected its own verbiage as well.
For that body, I was to include a statement that noted that if students did not concur with the stances and values expressed in my classes, they were to report their concerns to me or my superiors, among other subtle changes. In all instances, my course outlines were screened by my Department Chair, among others. Also, while I was not personally asked to alter my syllabi, I was aware that other professors changed their documents under supervision, which most certainly rubbed against the grain of teacher/professor agency and professionalism. All in all, multiple revisions were made to my course outlines and different alterations appeared in different versions of the syllabi, depending on the internal and external agency submissions. And, as my course outlines became increasingly standardized in response to the different sets of external criteria, I began to lose ownership of them and my ability to identify with them. In numerous journal entries, I glibly referred to them as this review’s version or that review’s version, not as something foundational to my teaching and integral to my students’ learning, which had been my prior understanding. Also, when I distributed my course outlines to my students, I would indicate whose version I was using that particular term. These pithy remarks formed a thin veneer, which superficially glossed over the fact that I was swiftly losing sight of my ‘‘best-loved self’’ (Schwab, 1954/1978, pp. 124–125), which had previously found expression in my course syllabi and my subsequent teaching practices. In fact, I found my preferred self rapidly being replaced by the ‘‘automaton’’ (Crocco & Costigan, 2007) that the abstract institutional directives dictated I should be. Also, the more time I spent appeasing the different agencies by tinkering toward utopia (Tyack & Cuban, 1997), the less time I had left to prepare for my classes and respond to student work. Overall, the increase in ‘‘auditable paperwork,’’ a term frequently used by local teachers and documented often in my interview transcripts and participant observation notes, caused me to dissociate myself from my scholarship of teaching and interfered with my developing genuine relationships with my students. Also, serious doubts concerning my competence compounded. This time around, however, I found myself acting on my subconscious thoughts. Faced with impossible deadlines, I missed a student defense and began to uncharacteristically lose things. Upon reflecting on the sorry situation in my journal, I recalled that one of the baseline assumptions of accountability systems is the belief that somebody is doing something wrong for which they should be punished (Achinstein & Ogawa, 2006; Goodlad, 1979). This became a major turning point for me. I realized that I, like the teachers I earlier had observed, had begun to live the inept image
(Clandinin, 1986) of teaching that policy makers and the public had held for me since the A Nation At Risk: The Imperative for Educational Reform (Gardner et al., 1983) report had been circulated and widely accepted as truth by the American public. Also, when I learned from a mass email communiqué circulated in the department that I was one of only 20 percent of faculty members who managed to submit their syllabi on time, I realized that my colleagues were similarly overworked and simultaneously feeling pressure to produce an auditable paper trail. I also reflected on how pervasive the fear of perceived incompetence was. I additionally thought how contrary this sense of anxiety was to the productive kind of fear needed to fuel authentic teaching and learning (Latta, 2005). In addition to the course outlines, I also was required to attach student work samples of below-average, average, and above-average work that fit an overarching matrix, which I also had to create. Determining whose work was above average and average was an easy task to complete. However, identifying the below-average work samples proved difficult for me. First, I need to explain that I work closely with students over the course of a semester, and the exemplar that might have been below average at the beginning of a course does not look that way at the end. Also, I categorically refused to use my international students’ assignments to exemplify below-average achievement, although, as I noted in my reflective journal, their language challenges made them prime targets. What I ended up doing was pulling materials from the box in my office where students retrieve their assignments. I reasoned that students who had not bothered to pick up their feedback had determined for themselves that their papers were less important or had fallen short of the mark. In my journal, I explained that this compromise freed me from scapegoating the intellectual products of my English-as-a-Second-Language learners and from making test cases of other students who also had expended their best efforts. Most of all, I was able to maintain a stance that would allow me to interact with integrity with the students with whom I continued to work. While I knew that the names on the assignments would be blotted out when they became anonymous artifacts in the accountability documentation process, I also was astute enough to recognize that this did not mean that I would forget capricious decisions I might make for the sake of expediency in a high-stakes accountability environment. In short, I personally took steps to shield myself from the deficit thinking that the multiple accountability demands imposed on me, which, in turn, became transferred to my students as I increasingly was forced to be a conduit (Clandinin & Connelly, 1995; Craig, 2002), a mere instrument of public policy implementing what was
expected of him/her, devoid of minded attention to how things could be made more individually and contextually relevant to one’s students and one’s self. At the same time as I personally struggled with which student exemplars to use, a difference of opinion emerged between our more seasoned faculty, the most prominent members being in my program area, and those who were newer to the university and to the profession, concerning whether this type of evidence was in fact needed to pass. Because several long-term faculty members had participated in previous reviews, they reasoned that the examining teams would not have time to locate specific work samples, never mind evaluate them. On this occasion, however, I sided with those who tended not to be senior, which introduced a new set of dynamics to my relationships with the more experienced, male professors with whom I work most closely. For a while, ‘‘General Craig’’ became my alias, a moniker incongruent with my personal sense of self and different from the stories typically given back to me by my program area peers. As can be seen, changes in procedures introduced tensions to relationships (both with self and with others) that had previously not existed. However, I defended my position based on personal experiences I had had in other field-based narrative inquiries I had conducted in several urban school sites. In my ongoing research studies following teachers’ knowledge developments as shaped by context, I had witnessed firsthand a number of total quality management reviews in which examination teams isolated one tidbit of information and, in their words, used it to ‘‘drill down into the data,’’ as I repeatedly recorded in my participant observation notes. This meant that virtually any aspect of practice could become the fine-point case around which the visiting evaluation teams – whose members were drawn from many states – would generate their reports. For example, I had watched in horror how a school with a mostly Hispanic immigrant population – near the immigration office, no less – was admonished because the teachers there provided instruction and communicated with primary students in Spanish. When the educators countered with the fact that Spanish was the language of instruction in the morning (the visit occurred in the morning), the Fortune 500 business consultant heading up the review informed them that if the evidence was not included in the documentation, it simply did not exist. This, in turn, incited the anger of some of the teachers and brought the principal and others (including community members) to tears. In the end, all of our faculty, some professors obviously more begrudgingly than others, prepared the fully developed exemplars.
As a result, our creation of print and digital documentation escalated, with the artifact tally catapulting from 5,567 (the sum announced at a meeting two weeks before the review) to an estimated 8,000 pieces of evidence (the amount made public at the conclusion of the review). The flight from experience to representation (Latta & Field, 2005) and, ultimately, to trivialization (Johnson, Johnson, Farenga, & Ness, 2005) was – out of necessity – alive and well at my institution. At the same time, significant ‘‘narrative smoothing’’ (Spence, 1982) took place and, to a great extent, negated the complexities associated with the diverse, urban milieu in which our teacher education program unfolds. Thus, through one thing and another, the smooth plotlines of the accountability story we crafted fit beautifully with the story of teacher education others expected to hear from us. In a mutually adaptive sort of way, our cover stories satisfied their cover stories, which denied all of us generative growth opportunities, a most unfortunate occurrence. As a result, we naturally rested on our laurels when the accreditation process was over rather than engaging in continuous improvement through ‘‘double-loop learning’’ (Argyris & Schön, 1974; Schön, 1983). But I am jumping ahead of how events and times corresponded. In addition to the accumulated syllabi and student work, I also was required to submit a salary review vita and a scholarly productivity portfolio, a process that had already become so lengthy that it had morphed into a summer project requiring the help of one graduate student – whose job assignment ironically was that of a Research Assistant. Where the latter was concerned, the number of pages that I published temporarily became more important than the quality of the journals in which my articles appeared. Also, because I had spent so much time on my institutional review board applications and other required paperwork, I failed to see that the formatting of the salary vita and portfolio had also been altered from the previous year and came perilously close to experiencing financial penalty. And, of far greater personal consequence, I did not submit my application for sabbatical leave, which was due about the same time as the artifacts. I simply could not stomach the thought of filling out another detailed form that was to be accompanied by evidence and letters of support, which I did not have time to solicit. Obviously, this decision hurt me more than anyone else. During the national accreditation site visit, I, as coordinator of the teaching and teacher education program area and director of elementary education, participated in several focus group discussions where I managed to jot participant observation notes for the purposes of this self-study.
One such exchange had to do with who taught our doctoral cohort that meets near the Texas–Mexico border. Our site visitors were clearly impressed when we told them that tenured faculty, rather than adjunct faculty, flew regularly to Brownsville, Texas, to instruct those courses and that our university produced the most Hispanic doctorates in education in the nation. However, the most telling conversation in which I participated took place with members of the college’s research committee. What was enormously revealing to me was the extent to which certain representatives from other departments, who in previous meetings dominated discussions, sat in stone-faced silence. Also, their opening declaration that they did not know anything about teacher education was highly troubling; especially because the impression was left that those of us directly involved were unwilling to share our knowledge. In that meeting, I, with the help of a new faculty member from another department, worked extremely hard to cover up intended or unintended blunders. I also struggled to control the intense emotions I felt boiling and bubbling inside of me as I later reported in my journal. In these telling moments, my strong suspicions concerning how Colleges of Education operate were confirmed. When push came to shove, I came to understand that responsibility for teacher education solely resided with the Department of Curriculum and Instruction and that other departments viewed the teacher education enterprise as less scholarly – possibly unscholarly? – and the work of other people, although faculty members in other departments obviously contributed courses and instructed preservice students who would soon be practicing teachers. A great deal more could be added here about the deep-rooted, highly entangled hegemonies present in Colleges of Education, but I leave it to readers’ informed imaginations to follow my drift. Suffice to say, challenges faced by teacher educators emanate as much from within the buildings where they work as they do outside of them in the broader university context and community at large.
CONCLUDING DISCUSSION The surface outcome of the national accreditation review was that my institution, the largest, most diverse Research I university in the United States, achieved a clean pass, which was a major cause for celebration. However, when the positive news was delivered, the strong efforts of the Department of Curriculum and Instruction – particularly those of our Chair – were glossed over and a cover story of the entire college faculty
contributing equally to the success was articulated. On the heels of that narrative smoothing, our Chair, having not even been mentioned, swiftly exited the meeting and promptly resigned her position. Since then, the department's 10 or so faculty lines remain unadvertised and a suite of prime office space has been taken up by a newly created institutional accountability office. Also, as fate would have it, the chair of a department less prominently involved in the accountability reviews was put in charge of the aforementioned office. Somewhat later, a similar thing happened to another department chair, the one furthest removed from teacher education. That individual also received a slice of the accountability pie: the job of sorting out the impact factors of the journals in which we publish, among other responsibilities related to faculty productivity and institutional growth. Also, our departments have been completely reorganized as a consequence of the successful accountability reviews. The Ph.D. degree has remained linked with the newly combined Educational Psychology/Higher Education department, whereas the Department of Curriculum and Instruction – along with practice-oriented faculty from the Leadership department – retained the Ed.D. program and added a new Executive Ed.D. program requiring fewer credit hours. As for me, I became the College of Education's representative to the university's Human Subjects Committee, a committee that was completely restructured and placed under new leadership. Despite the positive accountability review results, readers can see that personal and collective losses were many. Such losses had to do with recognition, compensation, position, space, time, and notoriety, to name but a few. As my storied experiences indicate, those faculty members who most carried the brunt of the accountability burden – that is, the rank-and-file teacher educators – were those least recognized and rewarded in the aftermath of the accreditation process. Concurrently, important knowledge gains occurred on my part as a teacher educator unavoidably caught up in the ''brouhaha,'' as a colleague once referred to it. My knowledge of the insidious ways that accountability procedures affect educators' images of themselves as teacher educators and their relationships with others (faculty, students, administration) skyrocketed. The story of experience I shared instantiated specific changes that occurred in how I viewed and approached my situation as well as the underlying thinking that provoked such changes. Readers will recall that I transitioned from denial and lack of awareness of the new accountability demands, to almost being overcome by them, and ultimately to recognizing
how important it is – not only for me as a teacher educator, but also for others within my relational sphere – that I not buckle under the pressure that heightened expectations exert. What finally became altered was how I responded to changing contextual situations that were not of my making. Most of all, I came to appreciate how my image of my self as a teacher educator shapes what I can know and do in the educational enterprise and how absolutely vital it is for me to protect it from both external and internal abuse, particularly in an institutional environment where accountability demands routinely rain down and rarely run up. In a nutshell, I learned that if I cease to seek and defend my intellectual and professional freedom as a teacher educator, I will be unable to help my students (who themselves are teachers and future teachers) search for their freedom and to correspondingly assist their students in freeing themselves to learn (Greene, 1978). I came to know – once again, and in no uncertain terms – that liberation, not captivation, is the purpose of education – and that I am its agent (Schwab, 1954/1978). As a teacher educator, I cannot emphasize strongly enough the importance of this reinforced understanding, which brought with it renewed dedication to my profession. In conclusion, the research contributions that self-studies such as mine make to the field of teaching and teacher education also merit discussion. Through employing the autobiographical and relational aspects of narrative inquiry (a multidimensional research method), carefully rendered accounts like this one are able to chronicle insider experiences of accountability, especially the consequences of what happens when the press for additional evidence becomes intensified and affects not only the warp and weft of teachers' and teacher educators' practices, but also the core of their well-being and the tenor of the institutional contexts within which they work. Without a doubt, such self-studies, when accumulated (Zeichner, 2007), have a major role to play in exposing how job satisfaction and sense of agency wane as autonomy is lost (Ross & Reskin, 1992) in the accountability review process. So, too, are associated injustices at the micropolitical level made public. Within accountability stories lived and told, and relived and retold at universities dotted across the nation and around the world, seeds of generative change can be found. As Schwab (1958/1978) maintained, ''there is yet more to know [and] know about …'' (p. 153). This is particularly the case for the accountability phenomenon, whose longitudinal influence on individual educators and the teaching profession can only be speculated about.
REFERENCES Achinstein, B., & Ogawa, R. (2006). (In)Fidelity: What the resistance of new teachers reveals about professional principles and prescriptive educational policies. Harvard Educational Review, 26(1), 30–63. Allender, J. S. (2001). Teacher self: The practice of humanism in education. New York: Rowman & Littlefield Publishers. Allender, J. S., & Allender, D. S. (2008). The humanistic teacher: First the child, then curriculum. Herndon, VA: Paradigm Publishers. Apple, M. (2008). Curriculum planning: Content, form, and the politics of accountability. In: F. M. Connelly (Ed.), Sage handbook of curriculum and instruction (pp. 22–44). Thousand Oaks, CA: Sage. Argyris, C., & Scho¨n, D. (1974). Theories in practice. New York: Jossey-Bass. Ball, S. J. (2003). The teacher’s soul and the terrors of performativity. Journal of Education Policy, 18(2), 215–228. Berliner, D. C. (2005). The near impossibility of testing for teacher quality. Journal of Teacher Education, 56(33), 205–213. Bullough, R. V., Jr., & Pinnegar, S. (2001). Guidelines for quality in autobiographical forms of self-study research. Educational Researcher, 30(3), 13–21. Clandinin, D. J. (1986). Classroom practice: Teacher images in action. Bristol, PA: Falmer Press. Clandinin, D. J., & Connelly, F. M. (1995). Teachers’ professional knowledge landscapes. New York: Teachers College Press. Clandinin, D. J., & Connelly, F. M. (2000). Narrative inquiry: Experience and story in qualitative research. San Francisco, CA: Jossey-Bass. Clandinin, D. J., & Connelly, F. M. (2004). Knowledge, narrative, and self-study. In: J. Loughran, M. Hamilton, V. LaBoskey & T. Russell (Eds), International handbook of self-study of teaching and teacher education practices (pp. 575–600). Boston, MA: Kluwer Academic Publishing. Clandinin, D. J., Huber, J., Huber, M., Murphy, S., Orr, A. M., Pearce, M., & Steeves, P. (2006). Composing diverse identities: Narrative inquiries into the interwoven lives of children and teachers. New York: Routledge. Clarke, A. (2001). The recent landscape in teacher education: Critical points and possible conjectures. Teaching and Teacher Education, 17(15), 599–611. Connelly, F. M., & Clandinin, D. J. (1990). Stories of experience and narrative inquiry. Educational Researcher, 19(5), 2–14. Craig, C. (2002). The conduit: A meta-level analysis of lives lived and stories told. Teachers and Teaching: Theory and Practice, 8(2), 197–221. Craig, C. (2004). The dragon in school backyards: The influence of mandated testing on school contexts and educators’ narrative knowing. Teachers College Record, 106(6), 1229–1257. Craig, C. (2006a). Why is dissemination so difficult? The nature of teacher knowledge and the spread of school reform. American Educational Research Journal, 43(2), 257–293. Craig, C. (2006b). Change, changing, and being changed: A self-study of a teacher educator’s becoming real in the throes of urban school reform. Studying Teacher Education, 2(1), 105–116. Craig, C. (2009a). The contested classroom space: A decade of lived educational policy in Texas schools. American Educational Research Journal, 46(4), 1034–1059.
Craig, C. (2009b). ‘Butterfly under a pin’: An emergent teacher image in the throes of forced curriculum reform. Paper presented at the International Study Association of Teachers and Teaching Conference, University of Lapland, Finland. Crites, S. (1979). The aesthetics of self-deception. Soundings, 42(2), 197–229. Crocco, M., & Costigan, A. (2007). The narrowing of curriculum and pedagogy in the age of accountability: Urban educators speak out. Urban Education, 42(6), 512–535. Czarniawka, B. (1997). Narrating the organization: Dramas of institutional identity. Chicago: University of Chicago Press. Deretchin, L., & Craig, C. (2006). International research on the impact of accountability systems. Lanham, MD: Rowman & Littlefield. Gardner, D., Larson, Y., Baker, W., Campbell, A., Crosby, E., Foster, C., Francis, N., Giamatti, A. B., Gordon, S., Haderlein, R., Holton, G., Kirk, A., Marston, M., Quie, A., Sanchez, Jr., F., Seaborg, G., Sommer, J., & Wallace, R. (1983). A nation at risk: The imperative for educational reform. Available at http://www.ed.gov/pubs/ NatAtRisk/index.html. Retrieved on December 30, 2007. Geertz, C. (1973). The interpretation of cultures: Selected essays. New York: Basic Books. Gilligan, C., Lyons, N., & Hanmer, T. (1990). Making connections. Cambridge: Harvard University Press. Goodlad, J. (1979). An ecological version of accountability. Theory into Practice, 18(5), 316–322. Greene, M. (1978). Landscapes of learning. New York: Teachers College Press. Hamilton, M. L., & Pinnegar, S. (1998). Conclusion: The value and the promise of self-study. In: M. L. Hamilton (Ed.), Reconceptualizing teaching practice: Self-study in teacher education. Bristol, PA: Falmer Press. Jackson, P. W. (1968). Life in classrooms. New York, NY: Holt, Rinehart, & Winston. Janesick, V. (1998). ‘‘Stretching’’ exercises for qualitative researchers. Thousand Oaks, CA: Sage. Johnson, D., Johnson, B., Farenga, S., & Ness, D. (2005). Trivializing teacher education: The accreditation squeeze. New York: Rowman & Littlefield. Johnson, M. (1989). Embodied knowledge. Curriculum Inquiry, 19(4), 361–377. Kelchtermans, G. (2007). Teachers’ self-understanding in times of performativity. In: L. Deretchin & C. Craig (Eds), International research on the impact of accountability systems (pp. 13–30). Lanham, MD: Rowman & Littlefield. Kitchen, J. (2005). Looking backwards, moving forward: Understanding my narrative as a teacher. Studying Teacher Education, 1(1), 17–30. Kornfeld, J., Grady, K., Marker, P., & Rudell, M. R. (2007). Caught in the current: A self-study of state-mandated compliance in a teacher education program. Teachers College Record, 109(8), 1902–1930. Korthagen, F. (2004). In search of the essence of a good teacher: Towards a more holistic approach in teacher education. Teaching and Teacher Education, 20, 77–97. Kosnik, C. (2005). No teacher educator left behind: The impact of US policies and trends on my work as a researcher and teacher educator. Studying Teacher Education, 1(2), 209–223. Latta, M. M. (2005). The role and place of fear in what it means to teach and learn. Teaching Education, 16(3), 183–196. Latta, M. M., & Field, J. C. (2005). The flight from experience to representation: Seeing relational complexity in teacher education. Teaching and Teacher Education, 21(6), 649–660.
Levine, A. (2006). Educating school teachers, September. The Education School Project. Available at http://www.edschools.org/pdf/Educating_Teachers_Report.pdf. Retrieved on June 8, 2007. Lyons, N., & LaBoskey, V. K. (Eds). (2002). Narrative inquiry in practice: Advancing the knowledge of teaching. New York: Teachers College. Manke, M. (2004). Maintaining personal values in the NCATE process. Paper presented at American Educational Research Meeting, San Diego, CA, April 12, 2004. Olson, J. (2002). Systemic change/teacher tradition: Legends of reform continue. Journal of Curriculum Studies, 34(2), 129–137. Olson, M., & Craig, C. (2005). Uncovering cover stories: Claiming not to know what we know and why we do it. Curriculum Inquiry, 35(2), 161–182. Ross, C., & Reskin, B. (1992). Education, control at work, and job satisfaction. Social Science Research, 21, 134–148. Scho¨n, D. (1983). The reflective practitioner: How professionals think in action. New York: Basic Books. Schwab, J. (1954/1978). Eros and education: A discussion of one aspect of discussion. In: I. Westbury & N. Wilkof (Eds), Science, curriculum and liberal education: Selected essays (pp. 105–132). Chicago: University of Chicago Press. Schwab, J. (1958/1978). Enquiry and the reading process. In: I. Westbury & N. Wilkof (Eds), Science, curriculum and liberal education: Selected essays (pp. 149–163). Chicago: University of Chicago Press. Schwab, J. (1969/1978). The practical: A language for curriculum. In: I. Westbury & N. Wilkof (Eds), Science, curriculum and liberal education: Selected essays (pp. 287–321). Chicago: University of Chicago Press. Schwab, J. J. (1962). The teaching of science as inquiry [The Inglis Lecture]. In: J. J. Schwab & P. F. Bradwein (Eds), The teaching of science (pp. 5–103). Cambridge, MA: Harvard University Press. Spence, D. (1982). Narrative truth and historical truth: Meaning and interpretation in psychoanalysis. New York: W.W. Norton. Tyack, D., & Cuban, L. (1997). Tinkering toward utopia: A century of public school reform. Cambridge, MA: Harvard University Press. Westbury, I., & Wilkof, N. (Eds). (1978). Science, curriculum and liberal education: Selected essays. Chicago: University of Chicago Press. Zeichner, K. (2007). Accumulating knowledge across self-studies in teacher education. Journal of Teacher Education, 58(1), 36–46.
CHAPTER 12
DOES NATIONAL ACCREDITATION FOSTER TEACHER PROFESSIONALISM?
Ken Jones and Catherine Fallona
ABSTRACT
This chapter examines the University of Southern Maine's experience seeking national accreditation through the Teacher Education Accreditation Council (TEAC). We share positive benefits from the process as well as the opportunity costs that come with the large commitment of time, energy, and resources to a national accreditation process. In conclusion, we discuss what there is to learn from our case that can shed light on the issue of how the accreditation process contributes to or detracts from developing a professionalized teacher corps through colleges of education.
What does national accreditation signify about a teacher education program? Is it an indication that the program develops professional teachers? Is it a fair and accurate public accountability system for this purpose? What is the impact of going through the accreditation process? What are the costs?
At the University of Southern Maine, we have wrestled with these questions over the past few years. Our teacher education program, which consists of two primary structures – the undergraduate-graduate Teachers for Elementary and Middle Schools (TEAMS) Program and the postbaccalaureate Extended Teacher Education Program (ETEP) – was recently accredited by the Teacher Education Accreditation Council (TEAC) after having been previously accredited by the National Council for Accreditation of Teacher Education (NCATE). In moving from our existing NCATE accreditation to TEAC accreditation, we have gained some insights with regard to these questions, particularly how national accreditation may or may not foster greater professionalism within teacher preparation.
MOVING FROM NCATE TO TEAC The decision to move the national accreditation of our teacher education program from NCATE to TEAC was the result of a long discussion among faculty and administrators that took into account a host of pluses and minuses related to program quality, the balance of local self-study vs. external review, faculty ownership of the process, the impact of the process, the politics and regulations associated with aligning national accreditation and state approval, the likely consequences for status and marketability, the possible effects on our program completers, and, of course, the workload and expense involved. What is interesting is that the issue of standards, while addressed, did not take that much deliberation. We were already NCATE accredited and had demonstrated that we could meet those standards. In fact, the state of Maine had adopted those standards for state approval, so we would need to meet them anyway, regardless of our decision about whether to go with NCATE or TEAC. At the same time, the TEAC quality principles, capacity standards, and cross-cutting themes provided an excellent and comprehensive set of standards for teacher education to which we could also subscribe. The two sets of standards, while organized and phrased differently, are very similar and a simple cross-walk between the two shows that providing evidence of meeting one set of standards essentially does the same for the other. So it was not the standards themselves that concerned us. Rather it was the process required to demonstrate evidence of meeting those standards. Many of us had been through the NCATE process in the past and found that it was very highly prescriptive, fostering a distinct attitude of compliance rather than commitment. Faculty members were
willing to go through this enormous workload for the sake of attaining national accreditation and the recognition that brings with it, but mostly with a sense of dread and reluctance. On the contrary, TEAC promised to be much less prescriptive, allowing a program to develop its own set of claims consistent with the quality principles and to then build its own case with data showing that it lived up to its own claims. This central difference was fundamental in our decision to move to TEAC. The difference spoke to how we saw ourselves as professionals. TEAC’s approach appeared to honor our own vision and judgment about teacher education, within given parameters, and it appealed to our belief in inquiry as a central tenet of meaningful intellectual work. Just as we were committed to fostering a disposition of questioning and evidence-based assessment among our teacher candidates, we were naturally drawn to an accreditation system that seemed to foster that in us. The TEAC information workshops portrayed the required inquiry brief as a type of institutional dissertation. We had more affinity with this approach than with NCATE’s detailed set of controls and requirements. As it turned out, we discovered that the difference was not quite so black and white – TEAC was more prescriptive than we expected. More on that later. A second important reason for changing accreditation agencies was that TEAC focuses accreditation on programs, not the so-called unit that NCATE requires. As is well-known and established, NCATE conceptualizes a unit as comprised of all the teacher education programs within a university, including initial certification programs as well as continuing education programs and ranging through all areas and disciplines including school psychology, counseling, educational leadership, and many others. Through all of this variation, NCATE requires one common conceptual framework and comparable standards and candidate assessment systems. We felt strongly that this is a forced fit, even though NCATE says it is in the name of equity and high standards for all. We also agreed with TEAC’s position that such a unitary approach allows weak programs to be carried by strong programs and sacrifices depth of program analysis for breadth of program agreement. In this new era of required outcome data, it also necessitates a centralized control and monitoring system at the ‘‘unit’’ level. By moving to TEAC, our teacher education program was able to zero in on its own vision, quality assurance design, candidate assessment system, and data collection and analysis in a way that generated creative and systemic change. We did not have to devote our attention to demonstrating a general and largely superficial commonality with other programs and so
could accomplish more in the way of authentic and formative program development. For this reason, if for no other, the move was well worth it: we were able to work within our sphere of influence.
RE-VISIONING
As we got into the new TEAC process, one of the first things we needed to do was to describe our quality control system. Our task, as we defined it, was to formulate how we ensure adherence to our mission and program quality criteria in a coherent way. Through a series of faculty meetings that stretched out over a year, we developed a schematic and text that we felt embodied what we stood for and how our program processes established and monitored quality. We thought, as in the world of total quality management (TQM) and systems thinking, in terms of looking for quality throughout the program, not just in terms of outcomes at the end of the program (Senge, 1990). We named it our quality assurance system (QAS) rather than a quality control system. Fig. 1 shows the schematic we developed. As shown, our QAS places the mission of the program at its center, Instilling Commitment to Equitable and Engaging Learning. It was, and is, the belief and commitment of our faculty that this is the goal for the QAS – to ensure that our program enacts this mission. Moving from the center outward, there is an inner circle of key role players, the Cohort Coordinators. These are the faculty and site- or school-based partners who are primarily responsible for the program. Cohort coordinators are assigned to each teacher intern group, consisting of approximately 20 interns. The cohort coordinators are responsible for supervising and teaching the year-long seminar and for administering admissions processes, intern placements, mentor development, supervision, intern assessment, course instructor selection, school–district–university relations, cross-cohort collaboration, and overall program design and operation. The arrows indicate that the coordinators both develop and are guided by the mission and that they work closely with each other to ensure a coherent and comprehensive program for all interns. Moving out from the cohort coordinators, there are four foundational components of the QAS that were deemed to be the most important aspects of the program for ensuring that interns receive a high-quality preparation and develop a commitment to equitable and engaging learning: (1) standards and assessments; (2) admissions, candidacy, and completion;
Fig. 1. University of Southern Maine Teacher Education Department Quality Assurance System.
(3) coursework; and (4) internship and seminar. Again, cohort coordinators both develop these components and are guided by them. The components also interact closely with each other in an interdependent way. We felt that developing this new understanding of our system and thinking through in some depth how we assured quality was a significant accomplishment on our part, and we had a sense of ownership as we headed into the next step of the TEAC process, the internal audit.
INTERNAL AUDIT The way we understood the TEAC process was that the internal audit would be a data-driven means of probing whether our quality control system was really working as it should. What we had not understood until we had designed and done a large part of the audit already was that the quality control system needed to be explicitly about the seven capacity standards defined by TEAC, one by one. This was an unpleasant surprise. Here we had taken and run with the idea of our own inquiry and were now being reined back in to comply with TEAC expectations. Apparently, TEAC was in turn complying with United States Department of Education requirements. In any case, the shadow of the controlling external agency fell over our work. Even so, what we had (mistakenly) done in terms of developing a QAS was productive and helpful to our understanding of our own program. The good news at this point of the process was that TEAC sets up a formative feedback system where a program has a TEAC consultant who reviews documents and gives guidance along the way. The bad news, of course, was that we had gone off in the ‘‘wrong’’ direction. What we now had to do was re-create a quality control system that derived from the starting place of the seven capacity standards. This was not hard to do, but doing it more or less took the wind out of our sails about this being our own inquiry. With the guidance of the TEAC consultant and some extra resources provided by our dean (graduate assistant and staff assistance), we were able to design probes on the effects of our system components in each of the seven capacities and then collect data to show that we met those standards. This took a good deal of time and effort and included surveys, interviews, searching for, analyzing, and sometimes coding program, college, and university documents. It was a little reminiscent of NCATE drudgery, although more informative, we felt.
DATA COLLECTION
Early on, we realized that TEAC accreditation required us to gather more quantitative assessment data about the performance of our teacher candidates. The faculty understood the TEAC requirements for numerical data on candidates and chafed at them to some extent. Of course, NCATE has the same requirements. This is just the nature of accreditation as we now know it. For many years, our program had had a well-developed intern assessment system that included performance assessments such as portfolios, exhibitions, planned and delivered units of study, case studies, videotapes, journals, philosophy statements, classroom management plans, observations, and standards reviews. Much of this was formative rather than summative in nature, however, and – more importantly for accreditation purposes – little of this was scored. Faculty and supervisory professional judgment was most certainly used in all cases, and this judgment was regularly constrained and guided through discussions at faculty meetings and advances (we do not call them retreats). Interns were given feedback about their progress on the standards on a regular basis, in seminars and classroom placements and in writing, and the information was well validated through a multiplicity of measures. Now, however, we needed to change some of this, so that we could have measures that could be analyzed through statistical methods. Before long, but not without some argumentation and angst, the faculty reached agreement on a scoring system for our periodic standards reviews and for the instructional units, the two most comprehensive and summative assessments in our repertoire. Rubrics were developed and used and, as per our habit of continuous analysis and improvement, are still in the process of review and revision. The database we developed to house these new scores as well as a wealth of other data about our candidates (including admissions interview information, overall GPAs, content-specific GPAs, methods GPAs, and Praxis test scores) has, in fact, proven to be an enormous help to the program. It has proven extremely valuable not only for the continued collection and review of data for accreditation purposes but also for facilitating the analysis of ongoing programmatic issues related to enrollment, advising, budget, contracts, candidate status, and other concerns.
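To give readers a concrete picture of the kind of data store described above, the following sketch is purely illustrative: it assumes a hypothetical SQLite schema with invented table and column names (candidates, assessments, rubric_score, and so on), not the program's actual system, which the chapter does not specify.

```python
# Minimal sketch of a candidate-assessment data store of the kind described
# above. All table and column names are hypothetical illustrations, not the
# program's actual schema.
import sqlite3

conn = sqlite3.connect(":memory:")  # a real system would use a file or server

conn.executescript("""
CREATE TABLE candidates (
    candidate_id  INTEGER PRIMARY KEY,
    cohort        TEXT,
    overall_gpa   REAL,
    content_gpa   REAL,
    methods_gpa   REAL,
    praxis_score  INTEGER
);
CREATE TABLE assessments (
    candidate_id  INTEGER REFERENCES candidates(candidate_id),
    checkpoint    TEXT,     -- e.g. 'mid-year standards review'
    standard      TEXT,     -- program standard being rated
    rubric_score  INTEGER   -- e.g. 1-4 on a four-point rubric
);
""")

# Illustrative rows only.
conn.execute("INSERT INTO candidates VALUES (1, 'ETEP-2009', 3.6, 3.4, 3.8, 178)")
conn.executemany(
    "INSERT INTO assessments VALUES (?, ?, ?, ?)",
    [(1, "mid-year standards review", "Standard 1", 3),
     (1, "end-of-year standards review", "Standard 1", 4)],
)

# The same store can answer both accreditation and programmatic questions,
# for example, the mean rubric score per standard at each checkpoint.
for row in conn.execute("""
    SELECT checkpoint, standard, AVG(rubric_score)
    FROM assessments GROUP BY checkpoint, standard"""):
    print(row)
```

Keeping rubric scores alongside GPAs and Praxis results in one store is what makes the dual use described above possible: the same queries can serve accreditation reporting and routine questions about enrollment, advising, and candidate status.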
CLAIMS, EVIDENCE, FINDINGS The heart of the TEAC process is the inquiry brief, where a program lays out its claims and then systematically summarizes its data to demonstrate
that the claims can be substantiated. The brief is restricted to 50 pages, but the accompanying appendices go far beyond that limit. The brief is indeed set up in form much as a dissertation might be, with five sections leading from program description through claims, methods, findings, and discussion and plan for the future. We had an initial stumbling block when we considered developing a claim that simply said that our completing interns all met all of our 10 program standards. With our new database and quantitative measures, we felt well-prepared to provide data to support that claim. Our TEAC consultant, however, informed us that we would need ten claims – one for each standard – and that our data would also have to show in detail that we met each quality principle and cross-cutting theme. After our experience with the quality control system and internal audit, we quickly decided on the path of least resistance: we would simply claim that we met the quality principles and cross-cutting themes and organize our data around those components. This time, the idea of compliance with the requirements of the external agency did not feel so onerous. Perhaps we had been acculturated. By now, we just wanted to get through this process with the least amount of extra work. As we worked our way through compiling our years' worth of data on various measures, cleaning the data, running it through SPSS, creating charts and tables, and analyzing what we saw, we had mixed feelings about the value of all this work. Some of the data just reinforced what we already knew without the numbers – that seemed like much ado about documentation for its own sake. But we also gained some useful and new insights about our program. We found, for example, that:
- Program completers who were not in the dual certification program option (general education and special education) felt a marked relative lack of readiness to address the diverse needs of exceptional students;
- We still needed to find ways to improve the technology and assessment skills of our students;
- A declining percentage of our students were getting a certified teaching position in their first year out;
- Faculty inter-rater reliability on the scoring of units and standards reviews, while not bad, was still in need of improvement.
In general, the process of accruing and evaluating evidence, while time consuming, proved to be a useful, rigorous, and defensible avenue for inquiry. It would not have been our faculty's choice of methodology, but
adjusting our assessment system and our mode of analysis was not an insurmountable barrier. It helped that one of our twelve faculty members was especially adept at quantitative analysis and that we have a research center that was able to assist us. On the other hand, the shift toward greater quantification, we have found, does present continued tensions related to the nature of our relationships with interns and our commitment to qualitative data and customized feedback. Now that our collection system and database are in place, however, the stress of changing our system for the sake of complying with an external accreditation agency may be over. Now we just need to keep feeding data into the machine.
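The chapter reports that inter-rater reliability on unit and standards-review scores was checked but does not say which statistic was used. As one hedged illustration, the sketch below computes simple percent agreement and Cohen's kappa for two hypothetical raters scoring the same instructional units on a four-point rubric; the scores shown are invented.

```python
# Illustrative only: two hypothetical raters scoring the same instructional
# units on a four-point rubric. Percent agreement and Cohen's kappa are two
# common ways to check inter-rater reliability; the chapter does not specify
# which statistic the faculty used.
from collections import Counter

rater_a = [3, 4, 2, 3, 3, 4, 1, 2, 3, 4]   # invented scores
rater_b = [3, 4, 2, 2, 3, 4, 1, 3, 3, 4]

def percent_agreement(a, b):
    # proportion of units on which the two raters gave the same score
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cohens_kappa(a, b):
    n = len(a)
    p_o = percent_agreement(a, b)                      # observed agreement
    ca, cb = Counter(a), Counter(b)
    # agreement expected by chance from each rater's marginal distribution
    p_e = sum((ca[k] / n) * (cb[k] / n) for k in set(a) | set(b))
    return (p_o - p_e) / (1 - p_e)

print(f"agreement = {percent_agreement(rater_a, rater_b):.2f}")
print(f"kappa     = {cohens_kappa(rater_a, rater_b):.2f}")
```

Kappa discounts the agreement two raters would reach by chance, which is why it is often reported alongside raw agreement when a rubric has only a few score points.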
IMPACT ‘‘So what?’’ one might ask. Besides the recognition and validation that come with national accreditation, what was the value of all this? Did the process improve the program? Was there a positive impact on the quality of teacher education at the University of Southern Maine? The following changes in the program were made in the three-year run-up to the accreditation review and visit and were influenced at least in part by the work we were doing for TEAC. We also were doing this work because we valued it and had written and obtained grants to support our work in it. We developed a program-specific mission and set of core practices. We integrated existing program quality control mechanisms into a comprehensive QAS. We developed a new equity framework for use in program admissions decisions and review/revision of coursework. We revised the USM teacher certification standards and indicators to ensure a greater focus on equity and diversity. We created and implemented a new course focused on universal design for learning (UDL) and culturally responsive pedagogy (CRP). We developed greater articulation and consistency for candidate assessments, including alignment with program standards, rubric development, inter-rater reliability checks for standards reviews, instructional unit rating, and the review of evidence of performance demonstrated in portfolios. We reviewed and rewrote course blueprints and syllabi to ensure alignment with new core practices and program assessments.
- We developed selection criteria and professional development activities for mentors and supervisors.
- We developed a new four-point rubric for the standards reviews. It is a developmental rubric allowing the same four performance levels to be used for the mid-year standards review as well as the end of year standards review. The four-point scale allows for a discriminating measurement of variation in performance among teacher education candidates.
- We instituted the use of online wikis for the collaborative posting and review of evidence related to the program standards – in part, to improve the technology competencies of our interns.
In addition, in our plan to address certain findings from our study, we included the following plans in the final chapter of the TEAC inquiry brief:
- The faculty will continue to study the inter-rater reliability for ratings on units and standards reviews, taking appropriate action to improve reliability as needed.
- The faculty will continue to strive to improve its assessment of candidates by examining its assessment system through conducting case reviews, doing inter-rater reliability checks on approximately 10% of the instructional units and reviewing approximately 10% of interns' portfolios.
- Teacher education internship sites will be re-configured so as to expose interns to a greater diversity of students. For example, we plan to reconfigure placement sites so that each internship cohort includes a partner school with a large ELL population.
- The program curriculum, assessment, and instruction will be systematically revised so that all interns develop a strong understanding of UDL and CRP.
- To better prepare beginning teachers and to position program completers to be more marketable as beginning teachers, more unified options are now being developed, including dual certificates in general education and special education as well as general education and English as a second language (ESL). Exit surveys reveal that students completing the program, especially those not in a unified program, feel relatively unprepared to negotiate school procedures for special education and 504 referrals. As a result, the faculty intends to sharpen attention on this, particularly in the exceptionality course required of all beginning teachers.
- A follow-up survey was piloted for those who are one or two years out from their program completion. This survey will be continued and will provide longitudinal data about individual interns in future program self-studies.
Thus, we can see many positive benefits from the process. But the opportunity costs that come with the large commitment of time, energy, and resources to a national accreditation process must also be taken into account. For example, if the program leadership had not been so involved in the documentation needed for accreditation, more ongoing contact and collaboration with partner districts would likely have occurred. If faculty were not spending so much time scoring and documenting evidence, they would have more time for direct interaction with interns and relationship-building with schools and mentors. The challenging and time-intensive work we are now doing to revise our program curricula to incorporate principles of universal design and CRP might have started earlier. The resources spent for graduate assistants, staff time, database creation, and data analysis might have been spent on other priorities – an important consideration in this time of economic stress.
PROFESSIONALISM Looking beyond the effects of the accreditation process on our particular program, we might ask what there is to learn from our case that can shed light on the issue of how the accreditation process contributes to or detracts from developing a professionalized teacher corps through colleges of education. This is a large issue, blown by political winds, but crucial to understand if we are interested in improving the quality of schooling and teaching. Some educational reformers argue that all colleges of education should have national accreditation. And some say that there should only be one accrediting agency for teacher education programs. Some do not see the value of an external agency setting the rules for judging the worth of a program (why not let the market decide?). And, of course, some do not see the value of colleges of education, period. Central to this debate is what it means to be a professional teacher in a democratic society. Quite often, it seems, the word ‘‘professionalism’’ is used to mean working diligently to teach to legislated state standards, prescribed district curricula, and mandated standardized tests. A professional teacher has come to mean one who complies with top down mandates and
standards, a good soldier, perhaps more of a technician and follower than a creative and informed decision-maker. This notion of professionalism is not one to which our faculty subscribes. In fact, we think of it as inimical to a healthy democratic society, where there is a proper balance of freedom and responsibility. If we are sincere about schools having the primary function of preparing students for being citizens in a democracy, then we must have teachers who can function in schools as agents of that democracy. To be professional in this way means that teachers need to be knowledgeable about effective teaching and learning, have a disposition for fostering equity and excellence, and have the power to make informed and responsible decisions on behalf of their students and their families. It is this last element – the power to make informed and responsible decisions – that is at the center of the accountability debate in public schools. Are our external accountability measures for schools empowering or disempowering teachers? The same questions can be asked of teacher education programs and our faculties. Do accreditation agencies empower or disempower faculties as they endeavor to cultivate informed, responsible, and empowered teachers? Standards are important, but who gets to decide what they are and what evidence counts to demonstrate that those standards are being met? Within our faculty's decision to move from NCATE to TEAC was the sense that the TEAC process afforded us a greater degree of professionalism. And even though the reality about internal auditing and claims-setting belied the professed discretion given to TEAC-affiliated programs, we still found a significant degree of difference between NCATE and TEAC. Despite the compliance we were faced with in our process, we still had significant ownership and engagement with the process. The formative feedback given during the process, although not always what we wanted to hear, was nonetheless a welcome form of collaboration and respectful interaction. During the audit visit, the questioning and probing of our inquiry brief and other documentation was focused on whether the evidence that we chose to present was trustworthy, not on whether it conformed to the accreditation agency's requirements. As colleges of education are more and more beleaguered by critics, there will likely be a tendency to assert a greater quality control mechanism from above. With declining resources for public universities and an increasing interest in alternative teacher education programs, there will also be pressure for teacher education programs to produce greater revenue. And so we are faced with the exact dilemma faced by our K-12 colleagues: higher expectations and control coupled with fewer resources. A recipe for failure.
Undoubtedly, both NCATE and TEAC see their work as protecting the integrity and quality of teacher education programs. But are they contributing to the norms of professionalism or compliance? The case of our accreditation process is something of a success story for our teacher education program, attained through some degree of compliance with an external standard and process – that is, through some amount of disempowerment. It is also an illustration of the middle ground taken by TEAC, probably because of political pressures. But what will happen as TEAC and NCATE join together in some way to present a unitary accreditation system? And what if all colleges of education are indeed required to have national accreditation, as some suggest? What if the Obama/Duncan education policy initiatives are successful and the graduates of our programs are evaluated in their public schools on the basis of standardized test scores? Will accreditation agencies not also be required to have teacher education programs produce evidence that shows that their graduates produce higher test scores? What will that mean for us in terms of professionalism vs. compliance? And for a democratic society? The nature and import of national accreditation is in transition. As we look at the political context in which it exists, we ask ourselves how the balance of professionalism vs. compliance will shift. We doubt that accreditation will go against the tenor of the times. And we wonder what the cost of our next national accreditation will be.
REFERENCE
Senge, P. (1990). The fifth discipline: The art and practice of the learning organization. New York: Broadway Business.
CHAPTER 13
SOOTHING CERBERUS: THE WYOMING ODYSSEY
Linda Hutchison, Alan Buss, Judith Ellsworth and Kay Persichitte
ABSTRACT
This chapter primarily describes the tensions the University of Wyoming College of Education experienced as preparations were made for the undergraduate portion of the NCATE review, as well as lessons learned from the experience. Three tensions are explored using the themes of (a) knowing your context, (b) time and prior planning, and (c) flexibility. The experiences of the latest accreditation process are explored with suggestions to be savvy about your setting, plan for surprises, be flexible, know your limits, and have a good strategy for data collection. Ongoing challenges, including NCLB, future accreditation requirements, and faculty awareness, are examined.
The University of Wyoming (UW) College of Education (CoEd) was reaccredited by NCATE in Spring 2008. The CoEd now documents continuous NCATE accreditation since 1954, and our next review will occur in 2016. This chapter primarily describes the tensions we experienced as we prepared for the undergraduate portion of the NCATE review, as well as lessons learned from the experience. We are currently in the process of
submitting response-to-conditions reports to a few final specialty professional associations (SPAs). At this time, most programs have been designated as nationally recognized. We will explore three tensions, pointing out our Cerberus, with the heads (themes) we soothe being (a) knowing your context, (b) time and prior planning, and (c) flexibility.
THREE MAIN TENSIONS
Knowing Your Context
Being Savvy about Your Setting
One key element to successful accreditation is to understand your setting. One thing we realized about most settings after the process of accreditation is that they are not closed systems – we all have partners, and these partners are crucial to accreditation. Most teacher preparation programs work within the context of state and federal requirements, as well as accreditation and university requirements. Different facets of each setting interact with decisions that are made so that what might work in one setting will most definitely not work in a different setting. The UW, and resulting expectations for the CoEd, must be understood within the context of state policy and regulations. As a land grant institution established in 1887, the UW is the only state-supported four-year higher education institution in the state and the only teacher education degree-granting institution, with approximately 13,000 students enrolled. The State of Wyoming is the ninth largest state by land area, yet it has the smallest population of any U.S. state, just under 500,000 people. This means we serve many small towns spread over a large distance, all of which are served by one four-year university. Although a branch of UW is located in Casper, WY, the main campus is in the southeast corner of the state in Laramie. Laramie is closer to Denver, Colorado (130 miles), than it is to many of the Wyoming towns it serves: for example, Cody (NW corner) is 386 miles away, Sheridan (NE corner) is 295 miles away, Evanston (SW corner) is 308 miles away, and Casper (centrally located in the state) is 152 miles away. Distance from the university and, in turn, accessibility to UW programs create the greatest tensions between UW and Wyoming residents. Although we recognize that issues of accessibility and distance are common in other rural states, what makes our situation unique is that we are the only four-year institution and, therefore, our responsibility is to the entire state. This creates unique constraints as we try to develop teacher education programs that meet the needs of the state and that address national requirements. In many ways, it
is expected that the CoEd should be all things to all people. We constantly strive to define our function in ways that truly work with our conceptual framework (CF), but sometimes definitions of our role become politically determined. The CoEd has been continually accredited by NCATE since 1954, with the most recent accreditation effective through 2016. The CoEd serves approximately 1,800 students who are enrolled in programs that prepare them to work in P-12 schools encompassing undergraduate and graduate degrees in initial and advanced elementary and secondary content, special education, school leadership/administration, and counseling. At the time of the 2008 accreditation review, there were 1,209 students in our undergraduate teacher preparation programs. Before 2003, undergraduate programs utilized a methods and residency (student teaching) system in which students were placed in schools throughout the state, often with only one student in a town. The semester before residency, students enrolled in the methods courses. During the methods semester, students would complete an on-site practicum in their residency school for several weeks and CoEd methods faculty would do a driving circuit around the state with ‘‘drop-in’’ visits at their assigned schools. Because of distances between schools, these faculty members could do little more than check-in and have a short visit with the student and mentor. During residency, a different group of CoEd-hired supervisors was responsible for visits and, most often, the students did not know the supervisors until they came to the schools. This system created a disconnect between our students, the schools, and the CoEd. In 2003, we began to consider and develop a model of residency fashioned after professional development schools. When we adopted the ‘‘Partner School District’’ (PSD) model, it resolved some issues and created others. In the PSD model, students are grouped into cohorts at the elementary level and at the secondary level cohorts of mixed content are grouped for assignment in partner districts. In our first implementation of the PSD model, a smaller number of district sites (or two districts in proximity) were identified for placements (five districts for elementary and three districts for secondary). The intent was to partner more closely with fewer districts by placing greater number of student teachers in the PSD. In this way, CoEd students always had a cohort, CoEd elementary faculty would be assigned to one site, and CoEd secondary faculty would be assigned to two sites. All CoEd faculty would have fewer towns to visit. Tensions arose no matter which model we used. First, placing students all over the state and having less control over the quality of the program
contrasted with placing students in fewer places and having greater quality control over the program. Quality control was improved in our placement processes and mentor teacher selection. Another difference was using external personnel for supervising residency students (hiring outside supervisors) versus using internal resources for supervising (CoEd methods faculty). The partnership model with fewer districts eliminated placements in some communities where student teachers had formerly been placed. This tension continues but seems to be lessening in some areas of the state. The tension of distance described earlier is ongoing for the CoEd. It is particularly acute for our residency placements because there are no districts large enough to provide the practicum and residency experiences required to meet the program outcomes for all students in all content areas. Consequently, travel and additional living expenses for our students continue to be concerns of equity and access. In order for CoEd faculty to meet their supervision responsibilities effectively, faculty are flown on university planes to outlying partner district settings during residency. As we moved to the PSD model, our context also included a faculty member who was responsible for guiding us and who had served for many years as a member of the NCATE Board of Examiners. Her knowledge and understanding of the accreditation process, as well as emerging rule changes, streamlined the process. However, we still had many meetings and revised many internal processes as we prepared for our accreditation. Some faculty were at a loss to adapt to all this change or were simply complacent on many occasions as to what needed to be done to prepare for the accreditation visit. Yet, we continued to build our expertise on multiple levels to implement the PSD model and to prepare the assessments necessary to document outcomes for students and programs. The dean and department heads attended NCATE workshops and met with their department faculty on a frequent basis to help faculty understand their role in developing and administering the common assessments and collecting the resulting data. Although the UW CoEd context is unique in many ways, the complexity of the contexts for teacher education units is not a unique feature, thus the caution to be savvy about your setting.
Time and Prior Planning One of the most positive aspects of our NCATE accreditation preparation was that we began this work at least five years before the review. We began by having teacher education department heads and other faculty members
with varying levels of expertise in assessment attend the NCATE accreditation preparation meetings to learn what we needed to do for a successful attempt. This information was then shared with faculty to help engage them in the process. We had strong, functional relationships already with our partners in the College of Arts and Sciences and the College of Agriculture. We had the support of Academic Affairs as well as Deans in developing the secondary education concurrent majors (all secondary education students complete a content major in addition to a parallel teacher education major). These secondary education programs were developed in response to a University Academic Planning expectation and revisions to the federal definition of ''highly qualified teachers.'' We had specially designed coursework, from prior National Science Foundation (NSF) grants in mathematics and science, already required in our elementary education degree program. Although we found we had many prerequisite components in place that had served us well in prior accreditation reviews, we had additional pieces to develop as we worked on this data-driven accreditation process. The preparation process started in 2002 with college-wide discussions about our mission. This process was quite lengthy as each department, program, and individual had unique perspectives as to the mission of the CoEd. The initial tensions arose in these discussions on several levels. One source of tension was between departments with vastly different purposes and student audiences, such as counselor education and elementary education. Faculty soon realized that we needed to be broad in our statements. This led to another source of tension, created by sentiments that the development of broad mission statements was a waste of time. The challenges created by this tension were more evident as the preparatory work progressed: namely, that some faculty felt this accreditation work was merely an exercise requiring significant amounts of time that would result in meaningless documents. It would be safe to say that most faculty had little understanding of the intent of the CF. It was clear for the NCATE process that we needed one, but the purpose and utility of such a document were unclear to the faculty. As a result, time was needed not only to discuss and write the CF as a faculty, but also to help faculty understand WHY they were writing it. We believed that the active engagement of the faculty was critical to the implementation and future utility of our CF, and we were challenged to keep faculty involved in this process. We later learned that some institutions wrote their CF and other guiding documents without faculty input, but this was not our approach. The decision to include all college faculty in the
development of these documents was stressful at first but eventually yielded dividends when faculty were asked to provide course and assessment details. Involving the faculty helped the college become more cohesive and focused in our accreditation preparations. Eighteen months later, our mission and CF were complete and adopted by a faculty vote. The mission and CF are no longer contentious and have served as grounding for other accreditation preparations and program renewal. The CF is a broad statement that provides clear guidance for our efforts related to program improvement and our work as a National Network for Educational Renewal (NNER) institution. In simplest terms, we strive to develop competent, democratic professionals.

Next, faculty turned their attention to developing a curriculum matrix and assessment plan for all undergraduate teacher preparation programs, which included key checkpoints for evaluating student progress. To write the curriculum matrix, we used INTASC and PTSB (Wyoming's Professional Teaching Standards Board) standards as a starting foundation. The faculty met over a full week in the summer and then, in small groups, developed different components. The curriculum matrix mapped out each required course, the key content included, and which college, state, and national standards were being addressed. Of all the documents created during the preparation period, this was perhaps the most useful for faculty. For the most part, individual faculty had responsibilities in a single phase of the teacher education program, so they did not see the larger picture of how the curriculum developed over time, at least until the curriculum matrix was completed. Faculty stated that they were glad to see how key ideas, concepts, and skills were developed through the program from beginning to end.

The assessment plan then compiled all the pieces of this work. We paid stipends to teacher education faculty to attend a weeklong summer meeting, so we could spend time considering what sorts of assessments were needed and when it would be appropriate to implement them at the different decision points in our programs. To generate this plan, key courses and transition points between the three phases of our program (admission, foundations, and methods/residency) were identified as checkpoints. Instructors of the key courses in each phase met and developed assessments that would be common across all sections of each required course and experience, regardless of individual instructor. Every student must meet the performance criteria at each decision point to proceed in the program. Each common assessment was aligned to specific standards. The assessment committee continued to meet for two additional years to revise and adopt
the specific assessments that the various departments and programs put forward to meet the criteria we all determined. Each department handled the assessments differently. The assessment committee, however, continued to oversee the entire process to ensure that individual standards were not being over- or under-assessed. The dean and department heads met frequently with their departments to help faculty understand their role in developing and administering the common assessments and collecting the resulting data. Throughout all this work, it became clear that at least one person needed to be responsible for guiding faculty, department heads, and the dean. We were fortunate to have a director of teacher education who had extensive experience as a member of the NCATE Board of Examiners. Her experience was invaluable in leading the college through such an extensive revision of the baseline documents and practices.
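To make the mapping and oversight concrete, a minimal sketch in Python is shown below. The course names, standard labels, and thresholds are hypothetical; this is not the college's actual matrix, only an illustration of how a tabulated curriculum matrix can be checked for standards that are over- or under-assessed.

# Illustrative sketch only: a hypothetical curriculum matrix mapping each
# required course to the standards it addresses, plus a simple check for
# standards that are over- or under-assessed. Courses and standards are
# invented for the example.
from collections import Counter

curriculum_matrix = {
    "EDEL 3000 Foundations": ["INTASC 1", "INTASC 9", "PTSB 2"],
    "EDEL 4100 Math Methods": ["INTASC 4", "INTASC 7", "PTSB 5"],
    "EDEL 4500 Residency": ["INTASC 4", "INTASC 7", "INTASC 8", "PTSB 5"],
}

all_standards = ["INTASC 1", "INTASC 4", "INTASC 7", "INTASC 8",
                 "INTASC 9", "PTSB 2", "PTSB 5"]

# Count how many courses address each standard.
coverage = Counter(std for stds in curriculum_matrix.values() for std in stds)

for std in all_standards:
    n = coverage.get(std, 0)
    if n == 0:
        print(f"{std}: not addressed -- gap in the matrix")
    elif n > 2:
        print(f"{std}: addressed in {n} courses -- possibly over-assessed")
    else:
        print(f"{std}: addressed in {n} course(s)")

In practice the matrix lived in documents and spreadsheets rather than code; the point is simply that once course-to-standard mappings are tabulated, gaps and redundancies can be counted mechanically.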
Flexibility: Meeting Changing Rules and Expectations

Although the college was preparing for the NCATE review, other forces were influencing our work: a university-wide academic planning process, the requirements and implementation of No Child Left Behind (NCLB), and changes to state licensure. Adding to these, the NCATE standards and accreditation processes, as well as SPA requirements and processes, changed as we worked through our accreditation. These forces, both internal and external, demanded greater flexibility on the part of administrators and faculty.

Of these forces, the changes in NCATE and SPA standards and requirements were the most troublesome. Five years before the NCATE review, as we met to set overarching goals, we assumed that the NCATE standards and reporting requirements were firm. Then, four years before the visit, NCATE revised its expectations for assessment data and set a reporting timeline that forced significant revisions to our assessment plan. We also learned that we had to develop and implement a digital data collection and archival system. Additionally, 12 months before the visit, NCATE announced that it would pilot a streamlined review process with reduced Unit Report requirements and reduced requirements for the "Evidence Room." We volunteered to participate in this pilot with the understanding that we would have opportunities to influence the details of our site visit as part of the initial pilot effort. Unfortunately, those understandings
were not honored. While the accreditation team was on site, the pilot requirements proved insufficient: additional data had to be summarized and reports had to be modified on the spot. More specifically, one graduate program that had initially been designated as not subject to review was tagged as needing to be reviewed. The combination of these changes forced impromptu meetings and late-night report writing. These changes caused stress not only for administrators and faculty members but for their families as well. For instance, all department heads received a call on a Sunday afternoon to provide additional data, data analyses, and summary reports by Monday morning.

At this point faculty and administrators began to question the reliability and validity of the process, not to mention its overall worth. For some, it seemed that all the preparation efforts were not valued. We were confident that everything was ready, that all the reports were completed, and that all the necessary evidence had been gathered. It was a significant and unpleasant surprise to have to develop additional documentation. Post-review, it is clear that NCATE's implementation of the pilot was not communicated explicitly to the responsible team chairs, and that multiple changes over the preceding months caused confusion and misinterpretations of expectations. Nevertheless, when such requests are made at the very last minute, the institution is in a particularly vulnerable position if it does not or cannot comply. Be prepared for the unexpected.
SPECIALTY PROFESSIONAL ASSOCIATIONS FOR SECONDARY EDUCATION

In previous accreditation reviews, content area program reviews were conducted at the state level, using state standards rather than national SPA standards. The state program review rules changed for the current accreditation when the PTSB required content areas with SPAs to demonstrate compliance with the national standards through a national SPA review. For those content areas that had a SPA, the faculty hoped that becoming "Nationally Recognized" by the content teaching organizations would demonstrate what we felt was true: that we prepared teachers with excellent content backgrounds and content pedagogy. The PTSB maintained the state review requirement for Art Education and Agriculture Education, as no national SPAs exist for those content areas. We submitted a report to a SPA for technical education that, we realize in retrospect, is not a match for
Wyoming expectations in this content area, so the state is now revising its standards for that content area as well. The PTSB's responsiveness to our needs, as well as our collaboration in helping the state meet federal requirements, would probably be unusual in state education agencies that serve multiple institutions. We take our responsibility to the state for developing excellent teachers seriously, and they take their responsibility to help us make that possible equally seriously. Ultimately, the PTSB gives final approval on all programs, even those that are nationally recognized.

The secondary education program includes the content teaching areas of Agriculture (Agricultural Communications, Agricultural Business, Rangeland Ecology and Wetlands Management, Animal and Veterinary Science), Art, English, Mathematics, Science (Biology, Geology, Physics, Chemistry, and Earth Science), Social Studies (Geography, History, and Political Science), Technical Education, and Modern Languages (French, German, and Spanish). We have one secondary education faculty member in the department for each content-specific methods area, except in science, where we have two faculty members, one of whom is assigned to our off-campus site in Casper. The state of Wyoming looks to us to provide content-area-prepared teachers for grades 6 through 12. We have successfully implemented concurrent majors for all secondary education areas to meet the "highly qualified" provisions of NCLB. All UW secondary education programs require two majors, one in a content area (e.g., mathematics) and one in the secondary education program area (e.g., mathematics education), to ensure that students develop both their content expertise and their pedagogical expertise. We developed all the concurrent majors collaboratively with our faculty counterparts in other colleges, using SPA content standards as the basis. The number of students admitted to each secondary education major varies greatly.

Each faculty member in the Secondary Education Department (SED) was responsible for writing the SPA report for his or her own content area, a significant undertaking and a much larger load for this department than for any other in the college. We worked as a group to define our assessments for methods and residency, but each person was writing for what we later realized were very different SPA standards. The SED agreed on generic assessments for unit plans, a video reflection assignment, and residency evaluations, reasoning that keeping the number of assessments to a minimum would make it possible for us to keep up with reporting data through our newly developed computer-based integrated data system. It should be noted that most of the secondary faculty had written accreditation reports for their content areas in the past, when the state reporting
requirements were far less extensive. These faculty realized that data points, while important, had to be manageable to be meaningful.

A huge tension for the secondary education faculty in developing the original assessments aligned with NCATE SPA standards was that each national organization had different assessment requirements for national recognition. Our attempt to standardize secondary education assessments resulted in many changes, as our initial SPA work had to be revised. Each SPA also had its own criteria for selecting evaluation teams, which affected interrater reliability. In our experience, if areas for improvement were cited, the same team might not review the revised version of a report. For example, areas within an initial report that did not require additional review in the revised response were later designated as needing improvement by the second review team, even though the overall program achieved national recognition.

Another tension evolved around the identification of secondary education residency sites, which are located in the two largest districts in the state plus the local school district. No single district can accommodate our student numbers. Currently, our memoranda of understanding (MOUs) require SED faculty to supervise in more than one district and in more than one school. For example, no district, much less a single small district, has capacity for twenty mathematics education residents, so multiple sites are necessary. Adjusting to all these changes and accomplishing all these tasks in the same timeframe took a toll on the SED faculty, who were required both to supervise student teachers and to be the sole author of a program report for their content area. SED faculty had to keep our programs vibrant, learn how to write a report, write the report, and get feedback on it, all as the rules in the various SPAs were changing and we were implementing the new PSD model. Even under the best of circumstances, the service load on faculty at public institutions the size of the CoEd at the UW is very high, and accreditation preparation strains faculty resources even further.

Wyoming school districts have been very supportive of our concurrent majors because our graduates now have no problem meeting "highly qualified" definitions, and since we used SPA standards to define the content majors, we anticipated few problems in demonstrating how our programs met those requirements. A new tension erupted when SPA rules changed but our PTSB licensure requirements had not yet transitioned to national standards. As we began our accreditation preparation, Wyoming did not require any exam for eligibility for a teaching credential. After NCLB took effect, the state moved to require a PRAXIS II exam for all secondary
education students to comply with federal requirements for ensuring that teachers are "highly qualified." We had shown, through the development of concurrent majors and an increased required content GPA, that our students met content requirements at a high level, so the state did not implement content exams, in an effort to streamline the process and keep the cost of teacher certification as low as possible. NCATE and the SPAs, however, required an exam as Assessment #1. We discussed our situation with NCATE administration and followed their advice as we prepared the SED program reports. Thus, our first attempt at the program reports for secondary education content areas listed an exam that we knew would not help in documenting content area expertise: with the knowledge of NCATE administrators, we listed the newly required generic pedagogy test that the PTSB had adopted to satisfy federal NCLB requirements. Within the next eighteen months, the state changed the requirement to a single PRAXIS II exam for Social Studies, leaving the other content areas with no required test. We had to change PTSB-adopted state licensure rules to help us demonstrate how our students meet the SPA content mastery standard.

For programs that do not have a state-required PRAXIS exam, demonstrating content mastery is a complex undertaking, and we are still working through this issue in some content areas. In some content areas, we have been successful in documenting content mastery aligned with SPA standards by using course grades and a matrix of course outcomes crossed with final exams. In SED majors with greater student numbers, this is difficult to document, and we are thankful for the excellent cooperation of our Arts and Sciences colleagues and department heads who help us aggregate these data. We are equally thankful for the PTSB's support: their actions regarding state testing requirements saved the day for us when they removed the generic test requirement for content area initial certification, acknowledging the strength of our concurrent majors.
UNIVERSITY ACADEMIC PLANNING: ELEMENTARY EDUCATION

As we were preparing for the NCATE review, a parallel, internal review process was underway, which grew out of the university-level strategic planning activity. An extensive review of the elementary education program was undertaken by the Department of Elementary and Early Childhood Education (EECE) that included the scope and sequence of required
courses, course content, and all field experiences. As a result, several courses and field experience requirements changed dramatically. Residency placements were limited to five PSDs, as opposed to placing students throughout the state, and residency would occur, for the large majority of students, in spring semester only. This was done to allow the faculty responsible for teaching methods to follow their students into the field for more intensive supervision and to allow for more direct interaction with the mentor faculty in the PSDs. Accordingly, faculty roles changed so that they were much more interactive with building principals and mentor teachers, primarily through increased travel to make personal contacts and visits. The PSD model called for 12–15 EECE faculty to spend the equivalent of one day per week during the residency semester out in the partner schools, which presented them with new challenges in managing their other teaching, research, and service responsibilities. These changes, too, were implemented as the program began preparing for the accreditation review, requiring additional energy on the part of the faculty and administration. The analogy that kept arising was that of changing the wheels of a bus while it was in motion. As a result, faculty energies were tapped on multiple fronts.
LESSONS LEARNED

Flexibility and Knowing Our Limits for Common Assessments

Of all the lessons learned, the constraints on and limitations of time were foremost. CoEd faculty were not given course or other workload releases to work on the development of the accreditation documents or common assessments. This work came at the expense of other research and service responsibilities. The dean was sensitive to the challenges presented by these demands, so she made funds available to provide faculty with a stipend for attending and participating in two weeklong summer accreditation workshops that emphasized hands-on work. These workshops were conducted away from the main education buildings, so faculty could extricate themselves from phones, email, and other demands on their time.

Previously we described the approach that the SED and EECE took in defining common assessments for methods and residency. The Educational Studies Department (foundations curricula), by contrast, put forward many assessments for each class they taught, which often matched course assignments, not realizing the long-term consequences of that decision. After a few reporting cycles, the department recognized that they had
committed to providing extraordinary amounts of data and that many of their assessments overlapped, so they undertook additional work to eliminate and consolidate their assessments. This revealed the tension of deciding whether to report data for all of the assignments required in a course or for just a few major assessments that deeply address the standards. This became a consistent topic in the assessment committee meetings. The KISS (keep it simple, stupid) principle won out in the end as redundancy was eliminated. The SED did have to revise all of its departmental generic evaluations for subject content areas, as well as add evaluation of specific SPA requirements to the original rubrics. All the original assessments from the SED are still used, with addenda for specific SPA requirements; the generic departmental assessments worked for NCATE but needed content revisions for the SPAs.

It is critical to keep the timeline as flexible as possible when developing and implementing the assessment plan. Recognizing that this flexibility is going to be necessary, no matter how well you have planned, can also relieve stress during the accreditation visit. Assessments need to be developed, piloted, evaluated, redesigned, and reevaluated to ensure that the data collected will be of use.
Data Collection Is a Critical Component

Accreditation preparation is labor intensive. Staff time was already constrained, so it was not going to be feasible for existing staff to coordinate data collection with faculty. A digital system was needed that would allow individual faculty to enter data and would then provide reports at the instructor, course, and program levels for purposes of analysis. We knew that time would again be a limiting factor, so the system was designed specifically for our needs. The College of Education Integrated Database (CEID) was created to keep track of student disposition data and common assessment data. Once the CEID was functional, data could be analyzed for state and national reporting and used to inform instruction as departments considered changes in their evaluation requirements. The CEID is formatted so that faculty members teaching each course can enter evaluation marks (most often D = distinguished, P = proficient, B = basic, U = unsatisfactory) based on embedded rubrics. The rubrics are mapped to the common assessments to ensure consistency in the evaluation of student knowledge and skills.
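As a rough illustration of the kind of record keeping described here, the Python sketch below models rubric marks and rolls them up at the course and program levels. The field names, courses, instructors, and sample marks are invented for the example and do not reflect the actual CEID schema.

# Illustrative sketch only: one way a CEID-style store of rubric marks
# might be represented and rolled up into course- and program-level
# reports. All names and marks are hypothetical sample data.
from dataclasses import dataclass
from collections import defaultdict, Counter

MARKS = ["D", "P", "B", "U"]  # distinguished, proficient, basic, unsatisfactory

@dataclass
class RubricMark:
    student_id: str
    program: str       # e.g., "Elementary"
    course: str        # e.g., "EDEL 4100"
    instructor: str
    assessment: str    # common assessment name
    rubric_item: str
    mark: str          # one of MARKS

def report(records, level):
    """Tally marks at the requested level: 'instructor', 'course', or 'program'."""
    tallies = defaultdict(Counter)
    for r in records:
        tallies[getattr(r, level)][r.mark] += 1
    return {key: dict(counts) for key, counts in tallies.items()}

records = [
    RubricMark("s01", "Elementary", "EDEL 4100", "Jones", "Unit Plan", "Objectives", "P"),
    RubricMark("s01", "Elementary", "EDEL 4100", "Jones", "Unit Plan", "Assessment", "D"),
    RubricMark("s02", "Elementary", "EDEL 4100", "Smith", "Unit Plan", "Objectives", "B"),
]

print(report(records, "course"))   # {'EDEL 4100': {'P': 1, 'D': 1, 'B': 1}}
print(report(records, "program"))  # {'Elementary': {'P': 1, 'D': 1, 'B': 1}}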
Although the CEID has proven useful, it has also had some major issues. Initially, inputting data was cumbersome: the system was slow, and each rubric item had to be entered one at a time for each student. This was very time-consuming for faculty and created tension around entering the data. Also, while most of the data were on the D/P/B/U system, some rubrics required entering numeric scores, which made analysis less comparable between levels and across programs. Although the system was problematic, staying the course through the accreditation review was imperative, given the NCATE data requirements. Faculty frustration has increased with each year, but now two things are occurring: (a) the CEID is being updated to make data entry quicker by allowing all of a student's data to be entered at once instead of one item at a time, and (b) faculty are recognizing that they may have been overly ambitious in developing the common assessments. Faculty are reevaluating which assessments are actually critical to course outcomes, which items within each assessment are critical, and where data collection is redundant or now irrelevant, thereby simplifying what is required. These changes are currently in process, and although there are still tensions among faculty, those tensions will only be alleviated by using the modified system and assessments and verifying the effectiveness of the changes.
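One way the comparability problem might be handled is to map both letter marks and numeric rubric scores onto a common scale before analysis. The sketch below assumes an invented four-point conversion for illustration; the CEID's actual handling, if any, may differ.

# Illustrative sketch only: mapping mixed rubric results (D/P/B/U marks on
# some rubrics, numeric scores on others) onto a common four-point scale so
# levels and programs can be compared. The conversion is hypothetical.
MARK_TO_POINTS = {"D": 4, "P": 3, "B": 2, "U": 1}

def to_common_scale(value, max_score=None):
    """Convert a letter mark, or a numeric score out of max_score, to 1-4."""
    if isinstance(value, str):
        return MARK_TO_POINTS[value.upper()]
    # Scale a numeric score into the 1-4 range (max_score must be given).
    fraction = value / max_score
    return 1 + round(fraction * 3)

print(to_common_scale("P"))               # 3
print(to_common_scale(17, max_score=20))  # 4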
ONGOING CHALLENGES

Future Changes in National Perspectives and Policy for NCLB

Even though the faculty were part of the process from the beginning, there has remained a strong sense that the major program changes and the implementation of the assessment plan were things done to them by college administrators, the state, and NCATE. Indeed, the dean, the Director of Teacher Education, and the department heads pushed strongly for the college to retain its accreditation. Each administrator recognized the importance of accredited programs for our graduates, but not all faculty shared that vision. Although small in number, these faculty members ended up causing internal strife and delays in document development. The feeling that changes were imposed has persisted well past the NCATE visit, resulting in the mechanical use of assessments to meet an external requirement rather than the use of assessment data for program improvement and curricular adjustments. At one point during the accreditation process, alternative accrediting bodies were brought up in faculty meetings as a way to work around collecting and examining the data
and to return to a faculty load more conducive to research and scholarship productivity. Some faculty wanted to use their SPA reports as scholarly publications, but the university system does not accept such effort as professional scholarship. Although this idea was rejected because accreditation work is perceived as service, the request does highlight the workload issue. The dilemma arose because of the nature of the departments: one department wrote eight different program reports along with a graduate report, while other departments wrote one or two. Although this disparity is not a new aspect of accreditation, the new reporting requirements, for the unit and for programs, made accreditation a much larger task than in past years. Time is the most valuable commodity for faculty, and some faculty's time was more compromised than others' during preparation for the accreditation review.

In some departments, faculty sentiments against external entities remain or have grown stronger, and ongoing reports that accreditation standards are in a mercurial state and will most likely look vastly different when the college prepares for its next review exacerbate the situation. To address these sentiments and to maintain our focus on program improvement, faculty are being asked to apply principles of quality assessment practice, to revise existing common assessments to better meet instructional needs, and, for now, not to worry about what will happen five years down the road. Shifting accreditation standards, expectations, and requirements are simply impossible to predict.
SPA Reporting: The Work Continues

When some of our initial SPA reviews came back with further work to be done, our first call was to the specific SPA organization for additional clarification. The expectations vary so widely across the SPAs that what is acceptable to one group is unacceptable to another, so it was critical for the administrators to hear what was needed and how the organizations varied. These calls were very beneficial to the administrators and to the report preparers, ensuring that the right changes could be made. Many of the SPA representatives we contacted were very helpful and provided critical ideas from which we benefited. A common SPA problem was the content assessments (our PRAXIS II was not considered a content assessment, although it was a state requirement at the time), and by talking with the SPA experts, we were able to create better content assessments. The secondary education faculty had participated in each individual SPA training session, but we had further work to do given the context of our state and university. This work
was more difficult across the college because of the varied SPA expectations. Two years after our accreditation review, we are still completing this cycle for a few programs.

The advantages of SPA review are that our programs are aligned with national standards and that we have out-of-state reviewers for our programs. The disadvantages are the cumbersome nature of the reporting, the frequent changes in requirements, and the disparity in the training of reviewers from one SPA to another. We will be analyzing the costs and benefits of the SPAs carefully as we consider our next accreditation.
Nebulous Requirements for Future NCATE–SPA Relationships

The change in rules, timelines, processes, and expectations for NCATE and the SPAs created significant tensions for us as we prepared for our 2008 accreditation review. The national conversations around teacher accreditation have not eased, leaving the future more uncertain than ever. For instance, fewer years of data are now required, the time period between reviews has been lengthened, and SPAs are modifying their standards. The national political scene is having a significant effect on NCATE and the SPAs, and it is not clear where the accreditation process is headed. Reauthorization of the Elementary and Secondary Education Act (formerly NCLB) provides an additional set of unknowns. For now, we are doing our best to monitor and maintain program quality based on what we do know, and we continue to collect student and program data that is useful to us as we evaluate student growth and performance.
Keeping Faculty Aware

We have had many new faculty hires during the accreditation process and since. It would be very easy to simply relax and rest on our full NCATE accreditation, but we realize that such an approach is not realistic if we are to succeed in our next review. We must continue to educate new faculty about the requirements for data collection, rubrics, data reporting and analyses, the CEID, and so on to acculturate them into the CoEd. New faculty have to be aware of our mission statement, the CF, and where the critical decision points are for our preservice students. As different parts of our system
change, new and continuing faculty must be aware of the changes and of how they affect our data collection and analysis.

Soothing the many participants through the complex facets of accreditation requires that you work within your unique context, remain flexible, and attend to the process early enough to have the time required to do it well. It will take the work of many people, and faculty buy-in to the process will be critical to a successful accreditation outcome. We have many ongoing challenges that require thoughtful attention and consideration. Even though our accreditation has been earned, accreditation is a continuous process. We are very happy to have learned so much during our last review, but we realize we need to carefully monitor what is happening internally and externally as we begin to consider our approach to the next accreditation review.
CHAPTER 14 ACCREDITATION: RESPONDING TO A CULTURE OF PROGRAM EVALUATION

Linda E. Pierce and Susan Simmerman

ABSTRACT

The School of Education at Utah Valley University has grown relatively quickly from a limited teacher education program to an elementary and secondary education licensure program with over 800 juniors and seniors. Growth on this level is accompanied by significant change, particularly with reference to programmatic requirements, assessment, and accreditation. The School of Education faculty has worked over several years to define its mission and philosophy, determine the program structure and evaluation procedures, and pursue accreditation. Developing a culture that responds to accountability requirements, while staying true to the school's philosophical beliefs and practical constraints, has been an ongoing effort. However, these "tensions of change" have resulted in a faculty who work as a team to act in response to issues revealed by observation and assessment, and to continually advance our mission: to prepare excellent teachers for our community's schools.
INSTITUTIONAL HISTORY

Utah Valley University (UVU) is situated approximately 40 miles south of Salt Lake City in Utah County, the second most populated county in Utah, in the Rocky Mountain region of the United States. UVU has experienced continual growth since its origin in 1936, resulting in campus expansion and several name changes before it reached its present designation as a state university on July 1, 2008. With its focus on students and community engagement, UVU continues to provide programs that meet the needs of students, communities, and businesses in its service area.
PROGRAM HISTORY

In response to the community's need for more teachers to staff its burgeoning public and private schools, a degree in elementary education was introduced at UVU in fall semester 1996. At that time, the program was given permission by the Utah State Board of Regents (BOR) to admit 30 students to the baccalaureate teacher education licensure program. After a designated period, the BOR lifted all admission restrictions, and the program was free to operate under university policies. Degrees in several secondary education content areas were approved by the BOR in 2001, and presently 14 content areas offer education-related degrees in partnership with the School of Education.

Over a relatively short period of time, the teacher education program has grown from a limited program that admitted one cohort of 30 students per year to a School of the University offering licensure options for both elementary and secondary education, with over 800 juniors and seniors enrolled in the two programs and approximately 1,100 students as declared education majors. In March 2008, the teacher education program at UVU was granted initial accreditation by the Teacher Education Accreditation Council (TEAC). Since both the program and the university are new, the TEAC accreditation was the first professional program accreditation for the school.
CHOICE

The teacher preparation programs in institutions of higher education in the state had long been accredited by the National Council for the Accreditation
of Teacher Education (NCATE). Several were well established in the education community, and for many years NCATE had been the only choice available for national accreditation. Consequently, it was thought that an affiliation with NCATE would be the logical direction. Our school studied NCATE principles and began the process of molding the program to meet their standards. During this same period, our dean became familiar with TEAC, which at that time was a relatively new entity in teacher accreditation. TEAC advocates a philosophy of continuous improvement that is inquiry and outcome driven and acknowledges faculties' ability to make decisions based on student and program data.

Even though the Utah State Office of Education recognized both groups as legitimate accreditors, we had to consider whether our program would be viewed as credible by other institutions in the state and whether students would decline to choose our program because we were not accredited by NCATE. We also had to consider the background of our president, who had been involved with NCATE at his previous university, was unfamiliar with the TEAC philosophy, and, in fact, questioned TEAC's credibility. Other university administrators had similar experience and doubts. Several meetings were held with the university administration in which we shared a list of pros and cons summarizing NCATE and TEAC, the understanding we had gained in meetings and workshops, and our faculty and stakeholder input. The administrators proved to be very interested in our findings and ultimately were completely supportive of our decision. TEAC's credibility was also discussed extensively in the Utah Education Deans' Council and, as with the administration, was viewed positively. Additionally, many prestigious teacher education programs nationwide were accredited by TEAC, which added another dimension to our decision.

As a result of our study of TEAC principles, we felt that TEAC was a credible alternative to NCATE. UVU was the first Utah System of Higher Education (USHE) program to gain TEAC accreditation (a private institution in the state had been the first overall). Currently, every USHE institution in the state has either chosen or is considering affiliation with TEAC, as have the majority of the state's private institutions.
Tensions of Choice

The tension associated with growth impacted our choice of accreditors. The major growth in the community was a product of favorable economic
and living conditions. Although growth has been a tension, it has been of benefit over time, as the program has been able to expand and provide much needed personnel for the area's schools. Institutional enrollment grew during this time to its current level of approximately 28,700, while the education departments grew from approximately 260 program students (juniors and seniors) to approximately 800. At the same time, other pivotal changes occurred in the teacher education program. For example, the School of Education was created as an independent school in the university (previously the Department of Elementary Education had been positioned within the School of Humanities, Arts, and Social Sciences), the secondary education program became a department partnering with seven content areas, and the school moved into a new building. Simultaneous with these major changes, a moratorium on new academic programs was imposed on the university by the State Board of Regents, and the university, then a state college, continued its work to become a university.

Since continued growth at the university and school levels was an integral part of our institution, the constant tension associated with growth was an inescapable factor in our decision to affiliate with TEAC. We felt that TEAC's vision of program evaluation and continuous improvement lent itself well to our inevitable growth. Actions regarding the program could be based on data showing how it was responding not only to the university's growth but also to the rapid growth of our service area. We felt that decisions driven by data and professional judgment were much more responsive to current district needs and community situations than the adoption of a relatively inflexible set of standards. We liked the prospect of evaluating and responding to current needs in a professional manner, since it was compatible with our philosophy.

To further the inquiry into the choice, the position of accreditation coordinator was established. Her responsibility was to look at every aspect of both accrediting groups; summarize their philosophies, requirements, decision-making procedures, and support; create a list of pros and cons for both groups; and present the information. Decision input was sought from faculty, administration, and district stakeholders, with the final choice being to affiliate with TEAC. Program evaluation and continuous improvement would be the underpinnings of our program.

There was a final tension relating to choice. Sometimes terms get in the way of accomplishment and do not effectively portray the actual task.
Juliet said, "What's in a name? That which we call a rose / By any other name would smell as sweet" (Shakespeare, 2007, Romeo and Juliet, II, ii, 1–2).
An example of the power of a simple name change happened early in our data collection process. Students were assigned to submit two teacher work samples (TWS) (Renaissance Group), one at the beginning of their program and a second completed at the end of their student teaching experience. Completing two TWS seemed like so much "busy work" to many, and their complaints were rampant. To solve the problem, we changed the name of the second TWS to "senior project," and the majority of the complaints and submission problems disappeared, attributable in part to the change in students' perception of the task.

Similarly, some faculty in the school misunderstood the process of accreditation, since the term carried a connotation of "imposed restrictions." As with our experience with students, this lack of understanding of the whole picture barred full implementation of program evaluation. Accreditation had been the driving force for many of the changes on which we embarked, but though it was a worthy and critical goal, it was not the most important result we wanted to achieve. The greater goal was program evaluation and continuous improvement; accreditation fell under the umbrella of those objectives. To bring about the paradigm shift, we would need to name the task and show the relationship of accreditation to the desired framework. By changing the referent, the perception would, and did, change. Even though this shift in understanding was not complete until later in the assessment process, it began with the choice of accrediting groups.

An extensive description of both accrediting bodies, systematic discussions, workshops and training sessions, and conversations with district stakeholders and faculty helped us make the final decision. Each of these stakeholders had a deep interest in the direction of the program, and all of them felt the tug between whole-group and representative decision-making. It took two semesters to finally reach a conclusion, but we felt that we had done a fair evaluation of both NCATE and TEAC and determined which group was the best fit for our program and our philosophy.
OWNING A CULTURE

Faculty needed to establish ownership and define how program evaluation would look at our institution. This did not happen quickly or easily. Since this was a new process for most of the faculty, they needed to learn what the culture represented and how to participate. Enculturation, the process by which an individual learns about and participates in a culture, is a constant challenge, especially in higher education, because we are dealing with some very strong personalities who have robust feelings and beliefs about their particular fields.

The tensions associated with ownership and enculturation were more pronounced than those of choice and have not yet been fully resolved. These tensions involve all School of Education faculty as well as the faculty from partner schools and colleges within the institution working with the secondary education program. Here we had to work with often strongly held paradigms regarding program evaluation and student outcomes. Faculty often had views of evaluation that stemmed from their research strengths, such as qualitative or quantitative perspectives. Some felt that the numbers portrayed an incomplete portrait of our program, even though there were multiple examples of narrative data used in our evaluation system. Still others could not see the value of using established assignments and rubrics and submitting scores in a timely manner, preferring a more holistic approach. As we worked with faculty, we found that validating their viewpoints by allowing for different representations of data was helpful. Some needed instruction on how to work with the findings from our data. Faculty discussions and face-to-face conversations explaining the need for the process and offering assistance brought about positive results. For example, increased attention to secondary content faculty by the department chair and more personal contact have led to increased overall buy-in.

The tension of enculturating new faculty, and of blending established faculty with new faculty from very diverse backgrounds, arose frequently. Most of the original faculty who were at the institution when the teacher education program was established had retired or moved on to different employment after the fourth year of the program's existence. As with all university programs, faculty are hired; some stay for a few years and then move on or retire, others take their place, and still others remain for the greater part of their careers. In our case, we have four faculty who have been with the school for 10 years or more, but the majority of faculty have been with the
university for four years or less, and even within this group more than 75% have been with the school for one or two years. The preponderance of the faculty are at the rank of assistant professor. Many do not have a great deal of university experience to draw on, which again becomes part of the enculturation challenge. The school is constantly endeavoring to bring faculty up to speed on the various aspects of our program culture. Creating a sense of ownership among the faculty has not been a smooth process. Conversations, mentoring, and other personal contact strategies have contributed to more success in producing a feeling of ownership. Some resistance to the system remains among a very few faculty, but we are confident that continuing an open dialogue will mitigate the situation over time.
CREATING AN ASSESSMENT SYSTEM

To create an assessment system, the faculty looked seriously at the program curriculum, specifically examining appropriate courses and course sequence, current student evaluation instruments, and available outcome data. Initially, faculty examined the program and began an extensive curriculum mapping project, which they felt would lay the groundwork for the overall assessment system. The process took many weeks of presenting the scope and sequence for each program course. During each presentation, faculty also examined how the course related to others in the program, the impact of when a student took the course, and the length and placement of the field experience for that semester. At the conclusion of the presentations, the faculty examined student outcome data from the current assessment system in an effort to clarify the progress of the program and identify areas where it could improve instruction. By the end of the academic year, a viable plan for a major elementary education program revision was in place.

The assessment system is an ongoing, dynamic process that we feel will constantly change to meet evaluation needs. It comes with many tensions, since evaluation, in the large sense, can mean program change. On a smaller scale, it could mean examining course content, professor availability, delivery methods, supervision, or other issues directly relating to an individual course. We would like to give some examples of the changes that took place as a result of the mapping project.
Curriculum Mapping and Available Program Data

As introduced earlier, when UVU's teacher education program was initiated on our campus, it was a shared elementary degree with another USHE institution. We adopted their entire program: courses, content, course sequence, field work, assessment instruments, etc. It was taught on our campus, but the degree was granted through that institution. After a prescribed period of time, the Utah State Board of Regents felt the program was viable, and we were allowed to offer and grant a Bachelor of Science degree in Elementary Education from our institution. With the advent of the curriculum mapping project, the faculty felt that sufficient time had elapsed since the program's beginning to warrant making the degree unique to our institution.

We felt there were four areas of the program that should be evaluated: course content and the usefulness of each course in the professional preparation of new teachers, course sequence, field work, and assessment instruments. Faculty each presented an overview of their course, indicating the objectives, topic sequence, evaluation methods, fieldwork assignments, and rationale for each part of the course. At the same time, meetings were held with area elementary and secondary school principals and a select group of cooperating teachers to gain feedback on program effectiveness as it related to new teachers in the schools.

On the basis of the information obtained from the mapping presentations, available student data, and stakeholder input, several changes were initiated. Two content courses, social studies and science for elementary teachers, were dropped from the program, since it was felt that students had covered this content in general education courses taught by experts in those fields; the credit hours from these courses could be used more effectively elsewhere in the program. The program sequence was changed so that the significant methods courses were closer to student teaching and the more preparatory courses were closer to the beginning of the program. The management courses were changed from two workshop-format courses to four one-credit-hour courses, one taught each semester, and the differentiation course was changed from a one-credit-hour course to a three-credit course. Finally, the senior out-of-state field trip was eliminated, and the field assignments and local placements were arranged to fit course content and expectations.

Faculty helped create new assessment instruments that were aligned with Interstate New Teacher Assessment and Support Consortium (INTASC) standards. They also began working on the processes for gathering,
reporting, and evaluating data. This curriculum mapping and program evaluation project laid the groundwork for what would take place over the next five years and culminate in the School's first national accreditation.
Tensions of Process

There were multiple tensions associated with this substantial project, as well as challenges that continue to influence our decisions. By no means have all the tensions been resolved, and we do not expect them to be, as they come from many different areas: some we can work on and some are beyond our control. The tensions seemed to fall into categories: program, people, data, and instruments. The following are some examples of tensions encountered and our efforts at solutions.
PROGRAM

ESL/Multicultural

This program change was one that proved to be especially successful. An ongoing challenge is the shifting demographics of the service area, as well as state and national changes. More specifically, how can the program respond to the changing demographics of the region and become responsive to limited English proficiency students? This is critical for students' success, and in turn the program must support increased understanding of the nuances of teaching English language learners (ELLs). Pamela Perlich, a University of Utah economist, suggests that by the year 2050 the minority may become the majority in the United States, and Utah population trends mirror that projection (Perlich, 2009). The Latino population in one service area school district (a more affluent district, which has the lowest percentage of Latino students) has grown from 3% to 8.12% of the district population, whereas another local school district currently has a 26.56% Latino student population. Surveys and outcome data also indicated a need for more instruction to meet these changing demographic needs.

To respond to this need, the school hired two faculty with doctorates in English as a second language (ESL) and multicultural education. We have also revamped the multicultural field experience component of the program. Historically, the program took students on a multicultural field trip to Mexico to observe schools in Colonia Juarez, a small, rural community.
The objectives of the trip were to get to know a Mexican community so students could relate to the environment where many immigrant families originated, to observe good teaching, and to participate in the classrooms at a high level to increase understanding. None of these objectives was accomplished adequately on this trip. Denver, Las Vegas, and local options were eventually added to the field placement options, and these were of much higher quality.

Careful evaluation of these trips was conducted, and the faculty arrived at several conclusions. First, students saw some outstanding teaching; however, too often the teaching in the Mexico option was not exemplary and did not assist in their training. The trips took an entire week, which included travel and only two days of observation; this was too expensive for many students, and too much time was spent away from the classroom for the associated benefit. Additionally, the trips were hazardous, since they were scheduled in November, with travel over mountain roads when weather was becoming treacherous.

The decision to overhaul the experience and make it totally local was as much a source of tension for students as for faculty. Students had begun to expect the trip and viewed it as an annual senior trip, more vacation than learning experience. A few faculty felt strong ties to their connections with the host school systems. After extensive conversations and evaluation of performance data, the faculty eliminated a long-held practice that had outlived its usefulness.

A new diversity fieldwork experience was developed in area schools. The schools chosen for the experience had to meet certain criteria, including 50% student diversity and/or 50% free and reduced lunch. There were immediate benefits: all three weeks were spent in school; the experience was in the area where students would generally stay to teach (the majority of students are place-bound); faculty could interact with more students at a much higher level; and students could complete course assignments, which the out-of-town experience had left too little time to carry out effectively.
Management

When the elementary education program was initially created, it had two one-credit-hour classroom management courses, both taught in a workshop format. The first course was taught as a two-day workshop the week before the first semester of the senior year; the second was also a two-day workshop before the beginning of student teaching. Recognizing that the
classroom management instruction was not sufficient, the management course instructor was invited to present a one-hour mini-lesson in a methods course before each of the two junior-year fieldwork experiences. The instruction focused on the grade levels in which the students would be participating (primary grades during fall semester, intermediate grades during spring). The lessons were developmentally specific and gave the students effective guidelines for working with the particular age groups, but because of the format they only touched the surface of classroom management. The mini-lessons showed some success; however, evaluations from cooperating teachers for student teaching and fieldwork still indicated that students were deficient in classroom management understanding and ability. On the basis of continuing data, a full program change was considered. The two workshop-format courses were dropped from the program and replaced with four one-credit-hour courses, one taught each semester of the program for the full term. We brought in practitioners to teach many of the sections because of their recent classroom experience. Later in the chapter, we discuss the results of the research study that was conducted at the time the program change took place.

These two examples (the diversity field trip and the classroom management courses) led to success for the program and its assessment system. However, we are currently at another crossroads in program evaluation. For several years, data have indicated that students need and request more math methods instruction at the elementary level and more instruction on working with ELLs at the secondary level. Here we have a tension that involves people and their feelings of ownership more than the courses themselves. The solutions to these challenges are neither clear nor easily accomplished, since we are working with people who are passionate about a particular course and possibly not looking at the whole picture as it relates to program effectiveness, student outcomes, and district needs. Presently we are still working on an acceptable solution to this dilemma.
Participation

As illustrated in the previous examples, participation is an important element of the program, but the people involved are also one of our biggest challenges. As we have changed, eliminated, or added courses and experiences, people have been directly impacted, and how they and the program react determines the extent of the tensions associated with the changes. People have exhibited all types of
behaviors associated with a venture as large as a comprehensive program evaluation and continuous improvement model. Participation runs the gamut from rigid to willing, from fully participatory to "just forget it." We feel that most faculty, when they truly understand the depth and breadth of the system and the importance of an outcomes assessment model, are more than willing to make sure the program works at its highest level. Whether tensions come from a lack of understanding or from outright unwillingness to work within the system, the tensions are real, and faculty are continuously working to help everyone understand and participate in a manner that makes the program work for everyone involved. One solution we have found is to work continually with faculty to help them understand what we know as a result of the data, to help them feel validated regarding their work, and to help them understand their ability as decision makers.
Instruments

The program has endeavored to create instruments that were usable and produced practical results. At the time of the curriculum mapping, all new evaluation instruments and surveys were created, along with a set of key assignments developed to evaluate understandings we felt were essential to student success. Each of these instruments had to be related to the INTASC standards, had to be economical (able to be used to evaluate more than one standard), had to have an associated rubric, and had to be sustainable over time. This took a great deal of time because not only did the instruments need to be created, but the alignment also had to be correct. Additionally, all instruments were field tested before they were ever implemented for program evaluation. Since many of our courses were taught only once a year at that time, it was a challenge to reevaluate an instrument after a change had occurred. The system included various instruments compiled to obtain data from multiple sources: admissions data, course grades (which we understand give a limited perspective on student accomplishment), PRAXIS test scores, key assignments, ratings, work samples, surveys, professional dispositions, and employment rates. We were quite pleased with many of the instruments because they purposefully asked the same question in multiple settings; hence, when we evaluated the responses, we were evaluating the same indicator across multiple instruments. For example, students were evaluated on their ability to select appropriate learning objectives and outcomes that meet the needs of all learners on a field work observation, a student teaching observation, several key assignments, work samples, and the exit (self-report), alumni (self-report), and principal surveys.
Data

Data submission and consistent evaluation have been a challenge since the beginning. Initially we handed everyone a spreadsheet and asked for scores, but we did not receive complete data from every professor. We examined an online data management system that would allow students to submit material online, faculty to score online, and the system to retain scores and allow us to manipulate data as needed. We found that this particular system would not meet our needs: it could not accommodate scoring with the INTASC matrix as we had developed it, it was expensive for our students, and it was rather confusing to use. Our next move was to put all the scoring spreadsheets on the university shared file system. This worked well for most faculty, but it was difficult for the data manager because faculty often resubmitted the whole page rather than just the individual scores. Currently we send individual spreadsheets to faculty so the data managers can track submissions more easily. In the future we plan to incorporate an ePortfolio system that should be in use by the university within a short period of time. To facilitate and encourage more participation, we created an evaluation team within the school consisting of the associate dean and three faculty members. This work is a committee assignment for faculty and is considered part of their service obligation for tenure and rank advancement. The committee approach has streamlined the submission and evaluation process, but the actual data evaluation is still in the hands of the faculty.
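For readers curious about the mechanics, the sketch below illustrates one possible way to combine individual faculty score sheets into a single data set for program-level review. It is a minimal sketch, not the program's actual system; the file names, column headings, and INTASC labels are hypothetical.

```python
# Minimal sketch (not the program's actual system) of combining per-faculty
# score spreadsheets into one long-format data set for program review.
# File names and column headings are illustrative only.
from pathlib import Path
import pandas as pd

def combine_score_sheets(folder: str) -> pd.DataFrame:
    """Stack per-faculty spreadsheets into one table keyed by student and standard."""
    frames = []
    for sheet in Path(folder).glob("*.xlsx"):
        df = pd.read_excel(sheet)          # expects columns: student_id, assignment, standard, score
        df["submitted_by"] = sheet.stem    # track which faculty member submitted the file
        frames.append(df)
    combined = pd.concat(frames, ignore_index=True)
    # Keep only the latest entry per student/assignment/standard, so a resubmitted
    # page does not overwrite the data set with duplicates.
    return combined.drop_duplicates(subset=["student_id", "assignment", "standard"], keep="last")

# Example use: mean score per INTASC-aligned standard.
# scores = combine_score_sheets("score_sheets/")
# print(scores.groupby("standard")["score"].mean())
```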
Sustainability

A system is not useful if it is not sustainable. The school has gone through several iterations of scoring methods for the key assignments, work samples, and portfolios. For example, the rating forms and the consistent scoring of the TWS were initially too time-consuming to be sustainable, as were the scoring procedures for many of the key assignments. As we used these evaluation tools over time, we realized that sustainability was critical to consistent use and reliable data. Modifications were made to the forms and assignments until we felt we had reached a point where they evaluated the standards in a way that gave us legitimate scores. In an effort to obtain reliable, valid scores on the TWS, the school employed scorers who read all of the elementary and secondary samples and periodically participated in an inter-rater reliability exercise to maintain an acceptable level of consistency. In addition, all key assignments are scored by the assigning professor at the time they are completed. Care was taken to ensure that all individuals teaching a course used the same assignment description and scoring rubric and shared a common understanding of how to apply the rubric.
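An inter-rater reliability exercise of this kind can be run with standard statistics. The following is a minimal sketch assuming two raters score the same set of work samples on a 1-3 rubric and using Cohen's kappa; the chapter does not specify which agreement statistic the school actually used, and the scores below are hypothetical.

```python
# Minimal sketch of an inter-rater reliability check for rubric scores.
# Two raters score the same work samples on a 1-3 scale; the statistic the
# school actually used is not specified in the chapter.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters over the same items."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / (n * n)
    return (observed - expected) / (1 - expected)

scorer_1 = [3, 2, 3, 1, 2, 3, 3, 2]   # hypothetical TWS rubric scores
scorer_2 = [3, 2, 2, 1, 2, 3, 3, 3]
agreement = sum(a == b for a, b in zip(scorer_1, scorer_2)) / len(scorer_1)
print(f"Percent agreement: {agreement:.2f}")
print(f"Cohen's kappa:     {cohens_kappa(scorer_1, scorer_2):.2f}")
```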
Program Evaluation Instruments

Over time, changes were made to various aspects of the instruments used for program evaluation. Any change is now made using evidence from assessment data and department research. For example, admission to the professional program (at the junior level) became a point of evaluation in refining the assessment system. An interview was kept as a means to evaluate applicants, but its role was changed from ranking to screening after department research found that high scorers on the interview were not necessarily high performers in the program (Farnsworth & Simmerman, 2007). Other changes to admissions and key assignments have also been made, but only after consideration of data from multiple sources. For example, one admission test that did not adequately discriminate among applicants was eliminated, ESL and multicultural assignments were modified to focus on more relevant understandings, and an early-program key assignment was dropped because it did not yield usable information about student outcomes.
Program Evaluation Perspectives

The faculty has endeavored to evaluate the program systematically, using accepted research methods in addition to the assessment system protocol of scores and criteria for success. We describe two projects we have used to further evaluate the program. The first is a brief description of a study in process in which the researchers examined writing instruction in partner elementary school districts and how those findings connect to a language arts methods course. The second is a complete description of a research study undertaken following the major curriculum mapping project, when a program change added a four-semester sequence of one-credit-hour classroom management courses. These projects have given us a solid perspective on the value of continuous improvement based on data evaluation.
Language Arts Methods Course

A research project examining time and practices related to writing instruction in over 170 elementary classrooms was completed in 2010. Some of the conclusions lead us to believe that our language arts methods course could be a catalyst in preparing teachers to be more effective teachers of writing (manuscript in process; presentations have been made at the International Reading Association, the National Reading Conference, the Association of Literacy Educators and Researchers, and the American Educational Research Association). On the basis of the preliminary findings, professors teaching this course have changed the course content to include more instruction on process writing and on beliefs about writing. This research was not originally intended as a program evaluation tool, but the results have been valuable to the program.
Classroom Management Courses

As we discussed earlier in the chapter, a major curriculum modification for elementary education took place in 2004; specifically, the classroom management offerings were expanded. Because the courses were changed at that time, there was an opportunity to compare the old and new programs. The role of reflection has often been cited in conjunction with the professional behaviors of effective teachers. Stronge (2002, p. 20) states that "Effective teachers continuously practice self-evaluation and self-critique as learning tools." Reflective teachers are students of learning; they are curious and constantly seek to improve their instruction. They think about student learning as the product of their instruction and strive to meet the needs of all learners. Studies have also shown that reflective teachers use many avenues to reflect on their own practice, including journals, portfolios, collaboration with colleagues, and videotapes (Burbank & Kauchak, 2003; Gordon, 2003; Griffin, 2003; Hashweh, 2003; Yerrick & Hoving, 2003).
Teacher candidates at UVU (then Utah Valley State College) are given many opportunities to reflect on their own teaching and its impact on student learning. Frequently, professors require a reflection component with field work assignments or other work. The major focus of this project is the reflection completed as part of a TWS (Renaissance Partnership for Improving Teacher Quality, 1999), particularly as it revealed students' thoughts about classroom management.

Participants

Teacher candidates from the elementary education program at UVU participated in this project. The participants were not randomly assigned but were analyzed as two groups based on their year of graduation from the professional teacher education program: the Class of 2005 and the Class of 2006.
The Class of 2005 group consisted of 99 students, 12 male (12%) and 87 female (88%). The median age fell in the 23-25 year range: 3% were 18-20 years old, 22% were 21-22, 35% were 23-25, 15% were 26-30, 5% were 31-35, 8% were 36-40, 5% were 41-45, 2% were 46-50, and 2% were 51 or older. Ethnicity was overwhelmingly white (99%), with one student of Asian lineage. All students' primary language was English; 8% spoke another language. Over half (58%) of the students entered the program with an associate's degree. The students came to the professional program with a variety of education-based experience: 1% had classroom experience as a paid substitute, 19% had experience as a paid aide, 13% had classroom experience as a paid preschool teacher or administrator, and 8% had experience as a paid Head Start teacher or administrator.
The Class of 2006 group was composed of 109 students, 9 male (8%) and 100 female (92%). The median age for this group fell in the 21-22 year range: 12% were 18-20 years old, 31% were 21-22, 18% were 23-25, 14% were 26-30, 2% were 31-35, 4% were 36-40, 2% were 41-45, 4% were 46-50, and 1% were 51 or older. As with the previous class, ethnicity was predominately white (96%), with 2% Hispanic, 1% Asian, and 1% other. Again, all candidates' primary language was English, though 10% spoke another language. A majority (66%) had earned an associate's degree. Their pre-program education-based experience included 20% with classroom experience as a paid substitute, 20% with experience as a paid aide, 12% with classroom experience as a paid preschool teacher or administrator, 2% as a paid Head Start teacher or administrator, and 7% with experience as a Title I aide. To determine whether the two groups were statistically similar, we compared their program admission scores (GPA, ACT, a writing test, and
interview scores). A series of t-tests revealed no significant differences between the Class of 2005 and the Class of 2006 on the admission criteria (p > .05 in each case).

Method

To write the TWS, candidates receive instruction, beginning in the first semester of their junior year, on the expectations for each section. The TWS is completed on two occasions during the professional program: the first, completed during the first semester of the junior year, serves as baseline data for the program; the second and final TWS, which is scored and used for program improvement, is completed during the student teaching experience. Students were required to implement their teaching plans in a school setting. Following their teaching, they were to respond to the following task:

Reflect on your performance as a teacher and link your performance to student learning results. Evaluate your performance and identify future actions for improved practice and professional growth.
Success: Select one learning goal where your students were most successful. Provide two or more possible reasons for this success. Consider your goals, instruction, and assessment along with classroom characteristics that were under your control.

Improvement: Select one learning goal where your students were least successful. Provide two or more possible reasons for this lack of success. Consider your goals, instruction, and assessment along with classroom characteristics that were under your control. Discuss what you could do differently or better in the future to improve your students' performance.

Possibilities for professional development: Describe at least two professional learning goals that emerged from your insights and experiences with this unit. Identify two specific steps you will take to improve your performance in the critical area(s) you have identified.

After the submission of these TWS, the researchers read all the reflections from both groups of students and scored each reflection independently. They then compared scores for every student, checking for scoring
uniformity. It was felt that complete agreement should be reached between the researchers in order to provide a more stable picture of the kinds of reflections on classroom management that were present. The resulting categories were: none (no mention of classroom management in the reflection), mention (a brief reference to classroom management), plan (a stated plan to improve management), and action (a plan plus a report of implementing or acting on it). A brief sketch showing how such codes can be tallied appears after the evaluation items below. To further determine the possible change in classroom management skills resulting from the program modifications, each cooperating teacher was asked to submit a summative evaluation of the student's classroom performance using the standard UVU student teaching evaluation form. University supervisors also evaluated these senior students. The scoring range was 1 to 3, with 3 indicating the most competent student. Observers responded to the following items pertaining to classroom management. The student:
Communicates behavior expectations to students
Uses time effectively
Employs effective management strategies
Maintains consistent behavior standards
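To make the coding scheme concrete, here is a minimal tallying sketch. The category labels match those defined above, but the sample codes are hypothetical and are not the study's data.

```python
# Minimal sketch: tallying reflection codes (none / mention / plan / action)
# into percentages of the kind reported in Figs. 1 and 2. The sample codes
# below are hypothetical, not the study's actual data.
from collections import Counter

codes = ["none", "none", "mention", "plan", "none", "action", "mention", "none", "plan", "none"]
counts = Counter(codes)
total = len(codes)
for category in ["none", "mention", "plan", "action"]:
    share = 100 * counts[category] / total
    print(f"{category:>7}: {share:4.0f}%")
```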
Results

When the students were analyzed by category, an overwhelming majority of the Class of 2005 did not address classroom management at all in their reflections; in fact, less than 10% spoke of a plan for improvement or reported on implementing their plan. Fig. 1 shows the percentages of students who made connections to classroom management in their reflections on success or failure in the classroom. By contrast, the Class of 2006 showed a marked increase in the number of students who had a plan to improve their management (18%) and who acted on those plans (4%), with a corresponding decrease in the number who did not mention management at all in their reflections. Fig. 2 shows these changes dramatically. When we compared the reflection scores of the two groups, a highly significant difference was found, with the Class of 2006 emerging with a higher mean (i.e., more attention to classroom management in their reflections) than the Class of 2005 (t = 2.45, df = 206, p = .007).
Fig. 1. Connections Made in Reflections of Success or Failure in the Classroom with Management (Class of 2005): None 70%, Mention 20%, Plan 8%, Action 2%.

Fig. 2. Connections Made in Reflections of Success or Failure in the Classroom with Management (Class of 2006): None 58%, Mention 20%, Plan 18%, Action 4%.
The evaluations of classroom management skills by the cooperating teachers and the university supervisors similarly showed an increase in awareness and implementation by the students. To be sure that the two sets of evaluators were in agreement with each other, a correlation statistic was calculated. The Class of 2005 cooperating teachers' and supervisors' scores on the classroom management indicators of the student teaching evaluation correlated at 0.426, whereas those for the Class of 2006 correlated at 0.766, significant in both cases (p < .05 and p < .01, respectively).
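For readers who want to reproduce this style of analysis, the following is a minimal sketch using SciPy of the two kinds of tests reported in this section; the rating and score arrays are hypothetical placeholders, not the study's data.

```python
# Minimal sketch of the analyses reported here: agreement between
# cooperating-teacher and supervisor ratings (Pearson r) and a comparison of
# two cohorts (independent-samples t-test). All numbers are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical 1-3 classroom management ratings for one cohort.
teacher_ratings = rng.integers(2, 4, size=99)       # cooperating teachers
supervisor_ratings = rng.integers(2, 4, size=99)    # university supervisors
r, r_p = stats.pearsonr(teacher_ratings, supervisor_ratings)
print(f"Rater agreement: r = {r:.3f}, p = {r_p:.3f}")

# Hypothetical reflection scores for the two cohorts.
class_2005 = rng.normal(loc=1.4, scale=0.7, size=99)
class_2006 = rng.normal(loc=1.7, scale=0.7, size=109)
t, t_p = stats.ttest_ind(class_2005, class_2006)
df = len(class_2005) + len(class_2006) - 2
print(f"Cohort comparison: t = {t:.2f}, df = {df}, p = {t_p:.3f}")
```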
On the basis of that agreement, additional analyses were performed to test the extent of the difference seen by the cooperating teachers and the supervisors of student teachers across the two years. For cooperating teachers, the mean score for the Class of 2005 was 2.84 and that for the Class of 2006 was 2.92, a significant difference between the two groups (t = 2.23, df = 206, p = .01). For university supervisors, the mean score for the Class of 2005 was 2.87, whereas that for the Class of 2006 was 2.93, a significantly higher score (t = 1.92, df = 206, p = .03).

Conclusions

We feel that the increased emphasis on classroom management has significantly increased teacher candidates' awareness of its importance and their ability to implement effective management strategies. Furthermore, this programmatic shift has significantly raised student evaluations on management criteria as seen by both college supervisors and cooperating teachers. Using the data to help improve our program, and to evaluate the effectiveness of those changes, has made a substantive difference in the quality of our teacher candidates.
WHAT HAVE WE LEARNED?

Tensions of Understanding

One of the most important lessons we have learned is that program evaluation and continuous improvement must be an ongoing process. We never arrive at an end point; we continue to ask questions and seek answers to improve the educational experience of our students. This continuing effort takes time and energy and involves consideration of many points of view. It takes willingness to change a paradigm, to consider a view, and to look at it again, always with an open mind. As we look at the process, we do not place blame but look for solutions; if a perception is real to a person, we look for ways to balance that perception with reality.
Another real challenge we are facing is the loss of a group of faculty with institutional and program memory. These individuals were with the school when it began and when the accreditation and program evaluation process was initiated. A tension lies in passing the torch to a new group of faculty who will have the responsibility to maintain and improve on what they have been given. Presently, the school is working with an evaluation team to bring its members up to speed on the process and on why the process looks the way it does. We have found that regular team-building is critical. Faculty learn that asking questions is healthy and that their questions affect how the program is intertwined with other programs on campus and with the partner school districts. The paradigm shift from total independence among faculty to a cohesive group accountable to the university, students, and the community is vital.
ACKNOWLEDGMENTS

The authors wish to acknowledge the generous support of their colleagues in the School of Education at UVU and of their families, whose patience and care throughout this project and with all our responsibilities "at school" have been unlimited and true.
REFERENCES

Burbank, M. D., & Kauchak, D. (2003). An alternative model for professional development: Investigations into effective collaboration. Teaching and Teacher Education, 19(5), 499-514.
Farnsworth, B. J., & Simmerman, S. (2007). Are high interview scores connected to quality performance by teacher candidates? Paper presented at the Hawaii International Conference on Education, Honolulu, HI.
Gordon, J. (2003). Assessing students' personal and professional development using portfolios and interviews. Medical Education, 37(4), 335-340.
Griffin, M. L. (2003). Using critical incidents to promote and assess reflective thinking in preservice teachers. Reflective Practice, 4(2), 207-220.
Hashweh, M. Z. (2003). Teacher accommodative change. Teaching and Teacher Education, 19(4), 421-434.
Perlich, P. (2009). Changing demographics in the state of Utah. Address presented at the Utah State Board of Education ESL Directors Meeting, Jordan High School, Jordan, UT.
Renaissance Partnership for Improving Teacher Quality. (1999). Renaissance teacher work samples. Available at http://www.uni.edu/itq/RTWS/index.htm
Shakespeare, W. (2007). Romeo and Juliet. In J. Bate & E. Rasmussen (Eds.), William Shakespeare: Complete works. New York: Random House.
Stronge, J. H. (2002). Qualities of effective teachers. Alexandria, VA: Association for Supervision and Curriculum Development.
Yerrick, R. K., & Hoving, T. J. (2003). One foot on the dock and one foot on the boat: Differences among preservice science teachers' interpretations of field-based science methods in culturally diverse contexts. Science Education, 87(3), 390-418.
CHAPTER 15

WESTERN GOVERNORS UNIVERSITY: A RADICAL MODEL FOR PRESERVICE TEACHER EDUCATION

Thomas W. Zane, Janet W. Schnitz and Michael H. Abel

ABSTRACT

The Western Governors University (WGU) educational model departs from most other postsecondary education models in two principal respects: it operates entirely online and it is competency based. In fewer than 10 years, WGU has built a fully accredited (including National Council for Accreditation of Teacher Education accreditation) national teachers college offering 30 different programs to an enrollment of nearly 10,000 preservice candidates residing in all 50 states and several foreign countries. This chapter describes how the WGU model differs from that of other institutions and how these differences both simplified and complicated building the teachers college, navigating the accreditation process, and obtaining licensure for our students in different states.
It is not every day that a group of educators is given the charge to build a new university from the ground up. For the past 13 years, we have participated in a rewarding journey of exploration, innovation, creation, and standardization. Western Governors University (WGU) began in 1996 with no campus, no faculty, and no students. The Western Governors Association charged us with developing an entirely new model for postsecondary education based on something other than courses and credit hours. The model had to reach the underserved and offer opportunities for postsecondary education irrespective of time and place, with degrees that were credible to employers, to other postsecondary institutions, and to regional accreditation organizations. Soon after we began development, we were tasked with building a teachers college, which added the need for specialized accreditation from the National Council for Accreditation of Teacher Education (NCATE).
Our journey from drawing board to accredited teachers college with over 10,000 students offered both opportunities and challenges: some were common to all preservice programs, some were challenges that our competency-based (assessment-based) model was well suited to meet, and some were unique to our situation. Like all preservice programs, we had to juggle issues related to governance, curriculum, and faculty. We had to address the same state licensing rules and national standards as any preservice program. Our programs had to fit into the higher education model to qualify for federal financial aid, articulation agreements, and the like. We will focus mainly on challenges and tensions stemming from areas where our experience differs from what most preservice educators encounter, because we are confident that readers will have either shared the more common experiences themselves or will find several good descriptions in other chapters.
Because we had the freedom to build something totally new, we were able to reduce some of the kinds of tensions typically found on other campuses. Our curriculum and programs are standards-based, assessment-centric, and shaped by centralized and transparent data gathering and reporting. This made it relatively easy to meet state and national standards dealing with content, assessment, and acting upon data, and we could readily give accreditation team members access to the sorts of data and reports they needed. Also, we were a data-regarding and data-using culture even before any students arrived, in that we organized and subsequently used internal metrics for monthly updates, quarterly tactical decisions, and yearly strategic decisions. The data we collect centrally apply to all students in all programs and are stored and reported similarly. Our decidedly top-down governance (structured like a business) and the fact that we set expectations
long before faculty came on board meant that it was relatively easy to gain faculty support for rules and new initiatives. Finally, our structure facilitated a collaborative, collegial environment across the entire university.
Our radical model also brought unique, and sometimes unexpected, challenges resulting in tensions that few (if any) other preservice programs might encounter before being accredited. For example, our unusual governance structure can be difficult to explain to accreditation organizations steeped in established higher education structures. Even something as simple as referring to our preservice programs as a college is something of a misnomer, because in WGU parlance we do not have colleges per se; rather, we are organized along functional business lines to break down the silos or stovepipes often found in academic organizations. This too is unfamiliar to accreditors and requires explanation. Our faculty members live in nearly every state of the union and in some other countries, so communication systems and protocols had to be established. Moreover, we have found that simply communicating our model and methods to new faculty and to other stakeholders is an ongoing challenge, so it should not be surprising that we would have similar or larger challenges communicating these issues to accreditors.
In this chapter, we briefly outline our model for postsecondary education and then discuss some of the tensions that we have faced in building the teachers college and its preservice programs and in treading the path to regional and then NCATE accreditation. We begin with a short outline of the overall model to give the reader a foundation for understanding why we built the teachers college the way we did. Then we briefly itemize the steps necessary for building the college and preservice programs. The majority of the chapter focuses on the pathway to accreditation and beyond.
A RADICAL MODEL FOR POSTSECONDARY EDUCATION

The WGU educational model departs from traditional postsecondary education in two principal respects: it operates entirely online and it is competency based. WGU has no physical classrooms; hence, students access program materials, learning resources, faculty, and administrative offices through the Internet and telephone. This is in keeping with the university mission statement, which pledges to provide "a means for individuals to
learn independent of time and place…" and enables us to serve a student population spanning all 50 states and a number of overseas locations. The Western Governors Association established the university as a nonprofit institution for the purpose of applying technology to expand access to higher education and to target fields needing greater numbers of graduates than existing institutions were producing. The governors determined that a competency-based approach would offer an effective and efficient way to achieve those goals.
For each program, we define competence in terms of both content and the necessary elements of knowledge, skill, ability, and disposition a graduate must possess to be considered competent (Fig. 1). We also incorporate, track, and measure a set of enduring traits (also known as crosscutting themes or general learning outcomes in some institutions) across the entire curriculum and throughout the duration of a student's experience at the university. This means that our programs focus on preparing students for what they will encounter in the real world upon graduation. It also enables our students, who are typically working adults with some degree of experience in their chosen field, to capitalize on what they already know and can do by proving it through assessment.
Fig. 1. The Western Governors University Competency Model. The figure depicts competency as an integrated, three-dimensional construct spanning a content domain (e.g., "Chemistry," "Sales Management," "Nursing Science"), a taxonomy of skills providing broad-based evidence of competence (e.g., knowledge, skills, abilities, dispositions), and crosscutting themes, or broad areas of emphasis (e.g., cognitive literacy, technology/information literacy, citizenship).
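As a rough illustration of this three-dimensional view of competence, the sketch below models a competency record in code. The field names and example values are hypothetical; they do not reflect WGU's actual data model.

```python
# Rough sketch (not WGU's actual data model) of a competency as an integrated
# construct: a content domain, a taxonomy of skill components, and the
# crosscutting themes threaded through the curriculum.
from dataclasses import dataclass, field

@dataclass
class Competency:
    domain: str                                                  # content area
    components: dict[str, str] = field(default_factory=dict)     # knowledge / skill / ability / disposition
    crosscutting_themes: list[str] = field(default_factory=list)

classroom_assessment = Competency(
    domain="Foundations of Teaching: Assessment",
    components={
        "knowledge": "principles of formative and summative assessment",
        "skill": "constructing aligned classroom assessments",
        "ability": "interpreting results to adjust instruction",
        "disposition": "commitment to fair, unbiased measurement",
    },
    crosscutting_themes=["appreciation of diversity", "technology/information literacy"],
)
print(classroom_assessment.domain, classroom_assessment.crosscutting_themes)
```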
Progressing through a WGU program means using self-directed learning resources and then demonstrating competence by passing many assessments. It is largely irrelevant whether the student's knowledge, skill, or ability is the result of on-the-job experience, course work, independent study, or some other means, as long as the student can pass all the required assessments. We do not measure credit hours for using the self-directed learning opportunities, and we do not award grades or calculate a grade point average (GPA), because students must successfully negotiate each assessment before moving to the next step toward graduation. Thus, we can warrant to potential employers and other stakeholders that all graduates possess the predefined competencies.
An added benefit of the WGU model is its flexibility. Students enroll for a six-month term starting in the month of their choice and may proceed at their own pace. Thus, they are often able to complete their program far more quickly than they could in other institutions. Perhaps one student's experience can illustrate how this works. A single mother of three was a teacher in a rural elementary school in a western state. Although she had some college credits, she did not yet have a degree or teaching certificate and had been teaching under various special arrangements for 12 years. She was a competent teacher: she knew the content, understood the related pedagogy and strategies, and her students were achieving. She had to get a degree and license or face losing her job, but she could not afford to quit work and attend school full time, and the nearest university was over three hours from her home. She enrolled at WGU and began working at her own pace, whenever she could make the time. Because she was already competent in many areas, she was able to perform well on her pretest, skip the associated learning opportunities for some domains, and go directly to challenge many of the assessments. Her only weak area was mathematics; hence, she had to take a full course in quantitative literacy before challenging the assessments associated with that domain. She was able to graduate in just 18 months, performed extremely well on her required external preservice licensing exams, received her state license, and held onto her job.
Although our model and structure facilitate this sort of success, and our students provide much of the self-directed drive for their own success, our mentors are the glue that holds it all together. The university assigns a mentor to each student to assist with program and work planning, to coach and encourage, to give feedback, and to facilitate access to university resources. With the mentor's help, the new student develops a personalized Academic Action Plan (AAP) based on the student's academic background, career experience, time available for academic work, goals, and so on. This, together with a detailed Course of Study, becomes the student's road map
and is the basis for marking progress through the program. There are also specialized course mentors to assist with the different content areas, and student communities to facilitate student interaction. A highly motivated student with a good plan, adequate time, and a dedicated mentor can cover a lot of ground in a term, making this an exceptionally cost-effective way to complete an education.
To make this all work, WGU operates differently from a typical academic institution. We have a Board of Trustees consisting of educators, state governors, and industry leaders. The trustees appoint a full-time university president/CEO who is ultimately responsible for decision making, strategic planning, and the day-to-day operation of the university. The president also invites a diverse group of corporate and foundation executives to serve on the National Advisory Board. This body helps the university foster a broad, even visionary perspective and provides valuable guidance on development, marketing, and national-level advocacy. A cornerstone of university governance at WGU is the Academic Program Council, actually a series of functional-area- and program-specific councils that govern various aspects of academic operations. Members of these councils are part of the WGU faculty who oversee curriculum development and help ensure that program content reflects current real-world practices and requirements. This is one aspect of the WGU "unbundled" or "disaggregated" faculty model. We have organized university operations by functional area rather than by discipline. Major functions (academic program design and development; assessment development, administration, and grading; student, course, and community mentoring; academic services; and so on) utilize experts in each area and operate uniformly across all disciplines rather than within any single discipline or program. This promotes both efficiency and a high level of consistency and quality. Now let us take a look at how the WGU Teachers College has developed and evolved.
Building the Teachers College

In late 2001 WGU began building a standards-based national Teachers College. In many ways it would be familiar to any preservice program educator because of stringent state and national preservice program requirements, but in other ways it would be unlike anything found elsewhere. The Teachers College is among the first preservice programs in the country with a truly national mission. From the outset, our goal was to produce
high-quality teachers to fill acute shortages in all 50 states. We developed programs that would appeal to working adults preparing for licensure as well as to current teachers seeking a master's degree, because the programs enable them to achieve their goals without leaving home or quitting their current jobs. Although our programs are all online, we incorporate preclinical and clinical practice programs that provide students with local placements. Just 15 months after we started development, U.S. Secretary of Education Rod Paige, joined by senators and our academic leaders, publicly announced the launch of the Teachers College. At present, we have over 10,000 preservice candidates, residing in all 50 states and several foreign countries, enrolled in one of our 30 (and counting) bachelor's and master's degree programs in education. Reaching this point has been an exciting journey consisting of four distinct development phases, a concurrent program development process, and the challenge of licensure.
Curriculum and Program Development

Foundations: Domain Definition

The first phase focused on defining exactly what a graduate had to know and be able to do and then organizing these data into a series of domains. Our initial step was to compile a database of research-based best teaching practices; job-task analysis data from independent preservice teacher testing agencies; state teacher standards; state preservice teacher education program requirements; state licensure requirements; standards promulgated by national organizations such as INTASC, ACEI, NCTM, and NAEYC; and other standards such as those forwarded by teachers unions. There were significant differences in the depth, coverage, and rigor of the standards and licensure rules across states. For example, one state's teacher standards fit on one page in a notebook, whereas another state published a 350-page text describing its standards. Because we were determined to license candidates in all states, we elected to meet the highest standard or rule pertaining to any component of our programs. A massive cross-correlation of these 50,000-plus entries revealed a surprising degree of agreement regarding what is expected of teachers and of preservice programs. However, there are some content-related differences involving local requirements for certain specialized courses or procedures. Where the differences are minor, we simply add a small piece to the program for all candidates. For example, some states require a separate course and/or exam on the U.S. Constitution. It was a simple matter to split our planned domains and exams to keep this
topic separate, thus allowing it to appear as a unique part of candidate transcripts. But where the differences are substantial (such as individual state history requirements) or apply only to a single state, we inform students of the additional requirements but do not require them for graduation from the institution. Students must complete such requirements on their own before submitting their paperwork for licensure.
The result of this effort was a set of three large "licensure" domains that fully defined what a graduate needed to know and be able to do in order to graduate. These were foundations of teaching (assessment skills, child development knowledge, etc.), effective teaching practices (classroom management skills, pedagogical strategies, etc.), and demonstration teaching [senior seminar, preclinical experiences (PCE), student teaching, etc.]. In addition, we constructed a series of discipline-based, content-specific domains, such as secondary mathematics content or elementary education content, that are analogous to majors at other institutions. With domains in hand, we began three overlapping tasks to organize the defined competencies into coherent and deliverable programs: building assessments, creating courses of study, and creating the operational rules and processes that hold each degree program together.
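The kind of cross-correlation described above, grouping requirements by topic and flagging the most stringent one, can be illustrated with a rough sketch. Nothing here reflects WGU's actual database; the state entries and hour counts are invented for illustration.

```python
# Rough sketch (illustrative only) of cross-correlating standards entries from
# many sources by topic and flagging the most stringent requirement for each.
# The entries below are invented; they are not actual state standards.
from collections import defaultdict

entries = [
    {"source": "State A", "topic": "classroom management", "required_hours": 15},
    {"source": "State B", "topic": "classroom management", "required_hours": 30},
    {"source": "INTASC", "topic": "assessment literacy", "required_hours": 20},
    {"source": "State C", "topic": "assessment literacy", "required_hours": 25},
]

by_topic = defaultdict(list)
for entry in entries:
    by_topic[entry["topic"]].append(entry)

# Meet the highest requirement pertaining to any component of the program.
for topic, group in sorted(by_topic.items()):
    strictest = max(group, key=lambda e: e["required_hours"])
    print(f"{topic}: meet {strictest['required_hours']} hours (driven by {strictest['source']})")
```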
Assessment Development

WGU uses various assessment types to ensure that we measure all aspects of a given competency. These include traditional exams, sets of performance tasks (PTs), portfolio or capstone assessments, and several types of observations. To illustrate how we test competence, raise your hands up in front of you and interlock your fingers and thumbs. That ball of flesh represents a "competency." Competencies can be deconstructed and categorized with a taxonomy of sorts, including "know," "know how," "show how," "do," and "be" components. Now pull your hands apart. The fingers on one hand are components that we test in objective exams; these are usually the "know" and "know how" components. The fingers on your other hand represent components that we test through mastery-scored performance exams; these are normally "know how" and "show how" components. The thumbs represent other sorts of measures, such as live observation, portfolios, and capstones, that measure the integrated "do" and "be" components which hold all the others together. Thus, we measure the same competency multiple times using multiple testing modalities. Students who pass all the fingers and thumbs pass the competency. In addition, a series of
individual competencies is then combined into highly integrated task-based assessments that represent larger pieces of real-world work. Each degree program comprises several domains and therefore requires many assessments. For example, an elementary education preservice candidate must pass each of 42 different assessments before graduating. To ensure consistency and alignment among the silos of the content disciplines, we thread across and through the domain structures a series of enduring traits (known as crosscutting themes here at WGU and as general learning objectives at many other institutions). Because all students in a program are responsible for demonstrating the same competencies through successful completion of the same exams, we are able to measure the acquisition of competencies aligned to the crosscutting themes in each of the content disciplines. For instance, "appreciation of diversity" is not a course of study at WGU, but it is measured, and students are held accountable for demonstrating it in domains that cover content-based skills such as assessment and testing, classroom management, and specific teaching practices (pedagogy and strategies), as well as during preclinical and clinical experiences. This ability to point to specific, measurable traits across our curriculum was first built into Teachers College programs but has since spread across the university as a way to further guard against reductionism, the fragmentation of learning that has sometimes been a criticism of previous attempts at competency-based education.
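The "fingers and thumbs" idea amounts to a simple rule: a competency is earned only when every associated assessment, in every modality, has been passed. A minimal sketch follows; the modality names and results are hypothetical, not WGU's actual assessment catalog.

```python
# Minimal sketch of the pass rule described above: a competency is earned only
# when every required assessment, across all modalities, has been passed.
# Modality names and results below are hypothetical.
REQUIRED = {
    "objective_exam": ["know", "know how"],             # modality -> components it evidences
    "performance_tasks": ["know how", "show how"],
    "observation_or_portfolio": ["do", "be"],
}

def competency_earned(results: dict[str, bool]) -> bool:
    """True only if every required modality has a passing result."""
    return all(results.get(modality, False) for modality in REQUIRED)

candidate_results = {
    "objective_exam": True,
    "performance_tasks": True,
    "observation_or_portfolio": False,   # one failed "thumb" holds up the whole competency
}
print(competency_earned(candidate_results))   # False until every assessment is passed
```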
Courses of Study Development and Learning Resources Selection

The second leg of curriculum development also flows directly from the domain definitions mentioned earlier. Although the domains define exactly what a graduate must know and be able to do, our courses of study define pathways for gaining the defined competencies. This was an area where we found just how differently we had to think to adhere to our model: we needed to shift our perspective from syllabus- and course-driven learning to a self-directed, anytime/anywhere model. Our initial attempts at building courses of study consisted of simple study guides, similar to a course syllabus, that often directed students to online courses. Early student survey data told us that these early courses of study were not very useful and that online courses revolving around a typical academic calendar did not fit our anytime/anywhere learning model. Subsequently, we have rewritten our courses of study to be more engaging, and we now select learning opportunities that better fit our 24/7/365 model. Given this more
integrated system of in-context links to learning resources, and drawing upon post-assessment coaching reports, students are able to access focused and aligned support materials more easily.
Preclinical and Clinical Experiences Development

Our Teachers College had to design an online, distance-administered PCE program that would prepare preservice candidates, across 50 states, to succeed in student teaching and to become reflective teachers. The result was a competency-based program that strives to enhance the practice and retention of reflective teaching skills by embedding PCE throughout each preservice program. We created a pedagogy that links the body of published research and theory in the area of reflection to the practice of teaching and employs practicable methods to include reflection as a gradable part of the curriculum. Videos provide material for reflective experiences in the program's early stages, followed by observations of live teaching situations and finally, after teaching mini-lessons, in-depth reflection on personal teaching strategies, philosophies, and performance.
Whether it is called fieldwork, preclinical and clinical experience, student teaching, or practicum, each state sets requirements for some sort of clinical experience. There are considerable differences in the length of time (from 0 to 14 weeks) and types of activities required, such as observations, reflections, teaching mini-lessons, and full-day student teaching. In addition, states differ widely in how they expect fieldwork to be documented: some request that we document completion of a certain number of weeks of student teaching, whereas others have elaborate requirements for itemizing the hours, types of placements, and so on. If that were not challenging enough, the requirements constitute a moving target that is continuously changing and evolving. As we developed these clinical program components, we quickly realized the magnitude of the placement and licensure challenge. We added placement office staff charged with working with school districts in or near each student's home community to obtain the support of principals and other officials to identify and host classrooms for our candidates. We also created methods for recruiting and training supervisors (local master teachers) to serve as on-site student evaluators. With over 10,000 students today, this department keeps 12 full-time staff members quite busy year-round.
Concurrent Overall Program Scope and Sequence Development

Concurrent with the aforementioned activities, academic leaders organized each program into a coherent scope and sequence. Although domains are the foundation of our programs, the structure and sequence of the domains must facilitate a coherent preservice program, and this often requires give and take. For example, early versions of our elementary education domains combined content with pedagogical strategies. However, mindful of the needs of our adult student population, we altered our initial plans. During development, we continuously compared our plans to the needs of "sailors on a submarine" who wanted to begin their preservice programs but who could not access a live elementary classroom to practice pedagogical strategies until they separated from the military. This required us to separate content from pedagogy domains and also to reorganize the PCE program into phases that started with online video-based observations of classrooms.
Once the content domains were established for a program and assessments were under development, the leadership turned its attention to the technology systems required to deliver, assess, and post student results. An integrated technology support system provides us with a mostly seamless environment for student demonstration of learning across all programs. Our disaggregated faculty model calls for the separation of evaluation from assessment development and the separation of mentor progress support from learning resources. Understanding the logistical challenges of hiring and managing a cadre of highly skilled and credentialed contract labor (our evaluators) from around the country was a difficult task in a quickly growing institution such as WGU, and it continues to be a challenge.
During this period of the university's evolution, we also focused on the need to communicate about the progress of our students to outside institutions of higher learning and to employers. We had to devise a way to communicate that was understandable to institutions familiar only with grades and credit hours. We developed a series of consistent, replicable algorithms that allow us to better understand the cognitive rigor required to master the competencies aligned to a specific course of study. We then worked to equate that student effort with something we call competency units (CUs), which are analogous to semester credit hours. Also, we had to find a way to communicate with stakeholders accustomed to grades; therefore, we calibrated our cut scores so that passing was analogous to a "B" in a typical grading system.
As a data-regarding institution, WGU has formalized a continuous improvement process supported by a centralized data and analysis
group that supports our internal performance goals and our academic program and progress reviews. Whereas a periodic review cycle brings a program into compliance and sharpness only as often as the review intervals allow, WGU makes continuous program review an ongoing responsibility for our academic managers. Personnel, resources, institutional research and evaluation, meetings, and monies are allocated and planned toward the goal of improving the student experience and maintaining the currency and efficacy of the degree programs.
Licensure

Perhaps the most challenging aspect of being a national teachers college is getting graduates from so many different jurisdictions licensed as teachers. There are myriad requirements, ranging from specified courses, background investigations, and exams to clinical experience and various state-specific mandates. WGU has developed and continues to refine its system for tracking these requirements and helping graduates meet them. The following examples illustrate the scope of the challenge. Some states, by law, mandate a specific number of credit hours, certain types or combinations of courses, and (in a few cases) actual course titles. This became a problem for us because our transcripts were originally organized around domains of knowledge and skill rather than courses or credits. Thus, we were obliged to create what we call a CU and to show how these relate to credit hours. CUs are now central to how we report transcripts, handle federal financial aid, report to employers for tuition reimbursement, and so on. Each state also has its own set of evidentiary requirements. Some, for instance, require a full review of each candidate's learning artifacts and/or academic records, whereas others are willing to accept our recommendation for licensure. States also rely on various examinations for assessing the qualifications of teacher candidates, and even where they utilize the same examination for a given program, they may set different passing scores for that exam. WGU's approach is to hold all candidates to the highest standard required by any state that uses a given exam. And then there is the critical and complex issue of the actual field experience described earlier in this chapter.
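The "highest standard required by any state" rule lends itself to a one-line computation over whatever requirement database is maintained. The sketch below is illustrative only; the exam names and cut scores are invented, not actual state requirements.

```python
# Illustrative only: applying the "highest standard required by any state"
# rule to exam cut scores. Exam names and scores are invented.
state_cut_scores = {
    "Elementary Content Exam": {"State A": 150, "State B": 160, "State C": 155},
    "Pedagogy Exam": {"State A": 162, "State D": 158},
}

# Each exam's institutional bar is the strictest participating state's bar.
wgu_required_score = {
    exam: max(scores.values()) for exam, scores in state_cut_scores.items()
}
print(wgu_required_score)
```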
Meeting this bewildering array of requirements has led to a number of innovations at WGU. We have developed, and now maintain, continuously updated, database-tracked licensure guidelines for each state. We have created a sequence of PCE and demonstration teaching that meets the highest standards across all states and is capable of rapid adjustment as things change. This has enabled WGU's programs to gain acceptance in the majority of states because the programs meet or exceed requirements. We have, as previously indicated, built an administrative team that focuses solely on tracking state requirements for fieldwork and licensing and on checking these regularly updated lists against each candidate's program requirements and completion record.
But, as most teacher education programs can attest, approval to license is as much about trust as it is about fulfilling the letter of the law. Each state has different requirements and different routes to licensure. Some states automatically accept our graduates for application because we are regionally and/or NCATE accredited. Other states require that our students receive a Utah license first and then apply for a license through reciprocity with their respective home states. For a few states (that are already licensing our graduates), we have gone through an approval process similar to an NCATE accreditation to gain continuing status. Our radical model for education is still disconcerting to some state legislators and administrators. Authorities in one state, which accepts NCATE designation as evidence to allow a university to recommend graduates for licensure and is willing to grant licenses via reciprocity, were for some time unwilling to approve WGU or license our graduates under any route to licensure because they were concerned that competency education means simply giving credit for life experience. Thankfully, these rare situations have been largely resolved.
Regional and NCATE Accreditation

As previously stated, one of the Teachers College's first tasks was to create a database of carefully selected state and national standards to serve as the foundation of the curriculum. This effort, together with the establishment of a competency-based model for building valid, reliable, and legally defensible assessments, became the foundation for gaining acceptance by the states for licensure purposes and for seeking accreditation. As early as 2002, the university hosted its first site visit by an accrediting organization. Because the scope of the university was national, four of the regional accrediting bodies formed a special committee, dubbed the Inter-Regional Accrediting Committee (IRAC), composed of representatives from the four commissions. Because regional accreditation standards were then only beginning to evolve toward what we see today, the IRAC committee had
some initial difficulty melding the various recognition standards from the four regions and applying them to an online, competency-based model. Major stumbling blocks at the time, for example, were the lack of a physical library and the early stage of development (number of students, graduates, etc.) that the university had reached by that point. The committee recommended that the university take more time to develop the model and reapply for accreditation at a later date. In the meantime, WGU pursued national recognition from the Distance Education and Training Council (DETC), which accredits a wide variety of educational and training institutions. Because of the DETC focus on distance education, its reviewers better understood our model and processes, and WGU received initial accreditation in 2002.
It was also during this time that WGU entered discussions with the Utah State Office of Education and the Utah Board of Regents to obtain recognition for our preservice programs and cooperation in licensing our teacher candidates. This became a very difficult process. WGU had few students at the time and our preservice programs had no graduates; hence, it was difficult to prove results and outcomes. After much discussion, our radical model proved to be too much for regulators to accept, and there was still no path to licensure for WGU students. The state of Texas, however, reached an agreement with the university to allow one of its regions to recommend graduates for licensure and subsequently allowed WGU to make the licensure recommendations. In the meantime, Utah legislators in 2003 modified Utah state law to facilitate recognition of teacher licensure programs that were NCATE or Teacher Education Accreditation Council (TEAC) approved, or that were competency based. This would soon open the door to licensure in Utah for our students.
WGU's growth continued to be slow, and the prospect of meeting our aggressive enrollment targets without regional accreditation seemed low. Having completed the task of blending their review criteria, the four regional accrediting commissions represented by IRAC again came together, by now with a better understanding of what WGU was, to reconsider WGU's application. They decided to proceed, shortly thereafter conducting site visits and ultimately granting full initial accreditation in February 2003. By mid-2003, with regional accreditation and state recognition, WGU was on equal footing with many other private universities and well on its way to achieving the status of a national teachers college. But it was still difficult to gain licensure for our teacher candidates, even in some of the western states that had participated in the creation of the university. In late 2003, WGU finally received approval to offer a clear credential to students
in California working as interns in California districts. This opened the door to funded opportunities for our candidates to work while they were finishing their degrees. For adult students, this was a very attractive option. After the California approval, it became easier for our students to gain licensure in some other states as well. However, we had still not reached our goal of national recognition of our preservice programs.
During this part of our journey, the Teachers College started working on a program brief to submit to TEAC. We initially chose TEAC because of its streamlined process and its seeming willingness to work with alternative models for preparing teachers. Although we met many TEAC standards, our student population was still too small to draw the necessary conclusions from the data. We began investigating requirements for accreditation under NCATE. By mid-2004, the university initiated serious discussions with the NCATE leadership and decided to work on a fast-track accreditation effort with NCATE. We set an aggressive 18-month plan to schedule a site visit from an NCATE Board of Examiners. We submitted our Conceptual Framework and pre-conditions in early 2005, we were admitted to candidacy, and our site visit was scheduled for April 2006.
The prospect of NCATE accreditation inspired an intense period of self-examination, writing, and program alignment for the Teachers College and the university as a whole. That we were already based on standards from all 50 states and many organizations, that we used multi-faceted assessments to provide ample evidence of competence, and that we were already a data-regarding culture were important factors in our success at meeting the NCATE requirements. These prior decisions somewhat reduced the work (and tensions) associated with the preparation. We had the data, we had the processes, and we were able to better align our programs and articulate the results and outcomes NCATE sought.
As we approached the site visit, one of the first logistical challenges we met was to provide the examiners access to our candidates in their classrooms as they completed their clinical experience. Because our students live all around the country, the Board of Examiners would not be able to travel locally from our Salt Lake City offices to visit the desired number of students in their classrooms. We worked out an agreement with NCATE that covered the expense of sending Board members to Texas, California, and Georgia to visit with our students in their classrooms before the site visit. During the site visit, Board members were able to visit local students as well. NCATE Board of Examiner panels usually walk across a campus to talk randomly with students and faculty in addition to specifically scheduled
focus groups. But because our students and faculty are widely dispersed, we arranged for our program coordinators (roughly equivalent to department chairs) to be in Salt Lake during the visit. Similarly, we worked out a series of conference calls and randomly selected individual calls with our mentors and students.
A third logistical challenge was the technology. All our student systems, data, reports, and the majority of our learning resources are online within proprietary systems. WGU was among the first to submit all materials electronically. But NCATE Board of Examiner members were unfamiliar with the technology systems we employ on a daily basis to interact with our students and monitor their progress. To provide Board of Examiner members with the range of access and unfettered movement they desired, we worked with NCATE to offer a series of online training sessions to prepare Board members for the visit. In addition, we provided written guides, as well as multiple help-desk and other IT personnel, to help Board members with their tasks and data review. The full library of our submissions and data was made available both electronically and as hard copies in our exhibit room. During the visit, we also responded to a number of additional data requests. The Board of Examiner members were pleased with our ability to generate the reports they requested in minutes.
One area where we had fewer challenges than some institutions might encounter had to do with organization, sequencing, and overall coherence of programs. Our business-style structure, with its absence of a common college architecture, allowed us to readily link together Liberal Arts, content majors, and pedagogical instruction as opposed to sending our candidates out to other colleges. This methodology helped us during the writing and review stages because we were able to work with content experts from all these domain areas to create a coherent scope and sequence for each program. So from a planning perspective, it was relatively easy to outline each degree program. From the accountability end, we had already documented the competencies each graduate possessed before licensure. Furthermore, each teacher candidate is supported through the entire program by a single mentor focused on the student’s progress to graduation, which further solidifies the coherence of the program for each student. Other mentors with specific content-area expertise support the student during the period when the student is focused on that content area.
Following a relatively calm and successful site visit, NCATE accredited WGU in October 2006. Since that time, all our existing individual programs have gained recognition from the appropriate Specialized Professional Associations (SPAs). We continue to work with NCATE to
gain recognition for new WGU programs before our next NCATE reaccreditation visit in 2012. In 2007, we underwent reaccreditation visits from both DETC and NWCCU. And in 2008, we worked with the State of California on a unique ‘‘site’’ visit in Sacramento. State law in California does not allow state personnel to travel to another state except under extreme conditions. Because we are a ‘‘virtual’’ online university, we brought the Teachers College to California, resulting in re-approval for licensure recommendation. With regional accreditation, home state approval, and NCATE accreditation of our unit and our programs, we gained access to National Association of State Directors of Teacher Education and Certification (NASDTEC) reciprocity agreements with other states. NCATE accreditation also made it easier to recommend our graduates for licensure in other states. We can now offer states a combined site visit with NCATE, which will save time and resources moving forward and allow us to better serve the licensure requirements of the states and the expectations of our students.
SUMMARY AND CONCLUSIONS
For the past 13 years, we have had the privilege of building an entirely new university and teachers college. To explain the type and magnitude of tensions we face each day, we use the metaphor of building cars while they are already rolling down the highway. Each car – each program – must be prepared to smoothly and successfully carry our students along the road to a degree and teaching license in any state in the union. Much of our time is spent developing and refining these programs. This requires a delicate balance between the need to provide consistent, reliable, and predictable systems and services, and the need to remain open to further change to improve the student experience.
The ever-present need to effectively communicate our model and outcomes to stakeholders remains a significant challenge and source of tension for us. We have experienced open hostility to our model because it can be seen as a challenge to the status quo. The tensions related to all this are especially noticeable during our frequent accreditation and state approval cycles. Luckily, this added tension is offset by, among other things, the ability of our assessment-based model and centralized data systems to show direct evidence of student learning. Perhaps as important, we may be getting better at telling our story in ways that visitors can understand more readily.
NCATE was the first specialized accreditation we sought. We recently completed Commission on Collegiate Nursing Education (CCNE) accreditation for one of our nursing programs, and we are currently a candidate for Commission on Accreditation for Health Informatics and Information Management Education (CAHIIM) accreditation. The future may well bring other accreditations, such as those for our new nursing and business programs. The larger and more visible WGU becomes, the more demanding these regulatory agencies are likely to become. Some states are already demanding that we be accredited by their own education departments or that we meet certain criteria for operation within their state borders, like permanent office space, a resident faculty, and a physical library. Such requirements may seem a bit unreasonable for 21st-century education, but they will likely be realities for years to come. Whether we comply, obtain exemptions, see state and national organizations continue to evolve their requirements, or succeed in changing the regulations piecemeal, we will have no alternative but to devote time and money to addressing their requirements in exchange for the right to recommend candidates for licensure.
CHAPTER 16 TRANSFORMATION FROM TENSION TO TRIUMPH: THREE PERSPECTIVES ON THE NCATE PROCESS
James M. Shiveley, Teresa McGowan and Ellen Hill
INTRODUCTION
Miami University is a mid-sized public institution in southwest Ohio. Regarded as a ‘‘public ivy,’’ Miami has always prided itself on its high-quality, liberal arts-focused undergraduate programs. Teacher Education has been an important part of that focus for over 100 years. Accredited by the National Council for Accreditation of Teacher Education (NCATE) since 1954, Miami graduates approximately 600 educators each year across 35 programs at both the undergraduate and the graduate levels. This chapter represents the combined stories of three individuals who were heavily engaged in Miami’s 2009 NCATE accreditation process: Teresa McGowan, the unit’s NCATE coordinator; Ellen Hill, the unit’s Director of Clinical Experiences; and James Shiveley, the chair of the Department of Teacher Education. We each provide a brief contextual backdrop for our NCATE experience, explain the primary challenges we faced as we prepared for the NCATE accreditation review and how we worked to overcome them, and describe our perspective of the weeks leading up to and including the final
Board of Examiners (BOE) visit. Many more people were, of course, essential in the preparation for Miami’s NCATE visit, and we do not imply that our views or contributions were in any way more critical than those of others. This chapter is simply our story.
EARLY TENSIONS
James’ Story
I became chair of the Department of Teacher Education during the summer of 2001. I had just obtained tenure the year before and was settling in as an associate professor when the department’s search for a new chair failed, and I was asked to assume the role. As is the case with many academic chairs, I came into the job with little or no experience or training. This was particularly true regarding NCATE. During our previous NCATE visit, I was a new faculty member and my contributions were superficial at best. In 2001, the division had also just hired its fifth new dean in eight years. One result of this lack of steady leadership was a degree of curriculum drift (Wiggins & McTighe, 2007) and an increasing measure of faculty independence. There was not a sense of divisional urgency associated with the upcoming NCATE visit in the spring of 2002.
I recall my first semester as chair preparing for this NCATE visit as one of missed deadlines, frustrating efforts to locate or re-create needed documentation, and an emerging sense of panic once I realized that what had not been done to date could not now be done. I soon learned that all of our Specialized Professional Association (SPA) reports, most of which had not been started, were due early that September. Furthermore, because we had already been granted an extension, this deadline was final. Immediately following the SPA submissions, the next four months were spent compiling the NCATE Institutional Report, a report that should have taken four years to write. It became increasingly clear that what we were doing as teacher educators was not in sync with the required NCATE documentation process. Not surprisingly, our final report was reactive and externally driven, and not, we believed, an accurate reflection of our work. While it did point to successes in certain areas, it did not adequately represent areas where we believed we thrived – areas in which we had failed to set up a system that could provide evidence to an outside party of our efforts and progress. And, after honest assessment, there were other areas in which we knew we needed to significantly improve.
Throughout the process, our frustrated faculty maintained that we were good at what we did. We always had been. Everyone from alumni to the principals who hired our graduates to our current students said so and believed it. However, if asked to prove this assertion, to provide actual data as evidence, we would hesitate before becoming a bit defensive and then indignant. We did have some data, yes, but it was not necessarily of the type being asked for by NCATE and, no, it had not been analyzed and reflected on. Yet, just look at our graduates. Was it not the proof in the pudding?
The BOE Team arrived in the spring and there were three days of midnight meetings, frantic searches for data that we thought we had generated somewhere, and pep talks with faculty to convince them that this did indeed matter and that now was not the time to publicly complain about the process. The BOE report, in retrospect, was kind, passing us on standards I believed to be our weakest, while finding us failing to meet Standard II on the Unit Assessment System – a system we had in place, but had not yet begun collecting data for. When it was all over, we were exhausted, frustrated, and confused. In a word, it was ugly. After licking our wounds and successfully submitting a rejoinder, we vowed that we would never go through a similar ordeal again. We realized that changes had to be made in what we did and how we accounted for what we did. The good news was that we had time to do this right for the next visit. Further good news arrived the following year in the form of the hiring of an NCATE coordinator for the unit (Teresa) and the hiring of a new Director of Clinical Experiences (Ellen). The final piece of the new team fell into place with the hiring of our current dean in the fall of 2006.
Teresa’s Story
I was hired by Miami University and began working in the School of Education, Health and Society (SEHS) in the fall of 2003, approximately 18 months after the previous visit. I came to Miami with 30 years of experience as a K-12 educator. I had absolutely no knowledge of NCATE. What I did have was an understanding of standards and an understanding of the culture of accountability. As I learned about NCATE, I came to understand that at the heart of NCATE is an assurance that educator preparation units are meeting standards and are therefore accountable to those that they serve. Given my K-12 experience, I understood and felt comfortable with this notion. What I did not understand was the culture of the university. Every working culture provides incentives and disincentives to which the people in
that culture respond. At the university, promotion and tenure are paramount to any professor’s career. Tenure, yearly raises, and, for already tenured professors, promotion to full professor are all closely linked to an individual’s performance in the areas of teaching, research, and service. And of these three, service is by far the least favorably regarded. Furthermore, individual work is often rewarded more highly than collaborative work. The end result is that any work on the part of a faculty member that does not directly lead to some measurable outcome in either teaching or research and publications is seen as a negative – something that takes them away from the work on which their careers depend. NCATE work did not fit neatly into any of these areas. This culture is widely understood by all those who work in it. To an ‘‘outsider’’ like me, it not only made my job all the more difficult, but it also seemed at odds with what we needed to do as professionals.
I felt a sense of urgency to get things done. The faculty did not have this same sense. Deadlines were ignored. Discussions went on forever without any real decisions being made. This culture of work was so different from what I had left that I had a hard time not showing my utter frustration. I was fortunate to have key leaders in the faculty community who were willing to take time to help me understand what this phenomenon was all about. While understanding was helpful, this dichotomy created hurdles for me as the NCATE Coordinator. I was essentially hired to lead people through a process that in many instances they did not embrace and did not agree with. Faculty were in the early stages of understanding accountability in a performance-based system. No longer could they simply say, ‘‘I taught it so students must have learned it.’’ K-12 teachers had struggled with this in the same way about 10 years earlier. Compounding the difficulty of leading this charge was the pall left by the previous NCATE visit. It had not gone well for various reasons. Everyone knew the details of this except me. Over time, I learned more and more about that visit, and this knowledge helped me understand some of the reluctance on the part of faculty. But, not knowing was also a good thing in some ways. I had no prejudices about the process. I was open to the changes that were taking place. I had a chance to frame how we would meet the requirements of NCATE accreditation.
Because I was new to the NCATE process, I spent my first years at the university learning as much about it as I could. I attended every NCATE conference. With each conference, I learned more and more and began to develop a deeper understanding of what would be required. I focused on establishing relationships with people – people at NCATE, people at the state level, and people here at the university. These people became my resources and my guides. And, I continued to learn
more about Miami, its programs in educator preparation, and the people who served in those programs. Bringing my past experiences together with what I learned about the NCATE process resulted in a deeper understanding of what would be needed for a successful accreditation review.
Because of my background and inexperience in higher education, my credibility with faculty was tenuous. My first years were a continual process of validating myself. I learned all I could about the NCATE process and this helped me immensely. I was able to serve as a facilitator and a resource in the work done by faculty to meet the requirements of NCATE accreditation. This was especially true in the NCATE SPA process. Because we were part of the pilot for the revised SPA process, which we had to complete on Ohio’s timeline (four years before our visit), faculty had an early introduction to my abilities as a facilitator. My credibility with them increased and with that I gained a new sense of self-confidence. Part of this self-confidence came from the knowledge base I had developed for myself. With this as a base, I was able to meet other challenges.
As I developed a greater understanding of what would be necessary for a successful accreditation review, I became more and more aware of the challenges in making this happen. There were times when it felt as if a ton of bricks were sitting on my chest, and I could not figure out how to remove them. The attitude of faculty toward the NCATE visit was that we had had our visit in 2002 and the next one would not be until 2009, so there was no need to worry about it yet. But NCATE is a process – an on-going process that impacts what is done on a day-to-day basis. My challenge was to make sure that everyone understood this.
Ellen’s Story
I was fortunate to have been hired at Miami University shortly after the last NCATE review, so I did not have a memory of what was done before to prepare for the visit. In this case, I believe ignorance was bliss because I did not feel the tension that I sensed from others in the unit. I had heard stories of how time-consuming and difficult it was to collect the data that was necessary, but I could not imagine that it would be as challenging as I was told. As a former K-12 public school teacher, I was familiar with data-driven decision-making, record-keeping procedures, and using assessment to guide instruction. What we were asked to do for NCATE accreditation seemed to follow along those lines and did not seem ‘‘over the top’’ from my perspective. When I arrived at Miami University, I quickly discovered that
the past practices of data collection in my office were more informal. Up to that point, little or nothing had been stored electronically and most records were handwritten and filed by semester in old cardboard boxes. My assistants and I began immediately to organize, design forms and templates, and build searchable databases. This allowed us to keep complete files on student placements for field and clinical experiences, including school demographics, evaluation procedures, and supervision. We rewrote all handbooks and documents to bring them up to current state and program requirements. Once these things were in place, we were able to construct viable ways to collect data, ensure the credibility of our supervisors and cooperating teachers, and keep track of the varied placements for our candidates as they progressed through their field experiences and clinical experience.
It was my understanding that many seasoned faculty members were not at all pleased to be going through the NCATE review process again. I sensed that the previous visit had soured them on the process. As a result of this attitude, it was sometimes difficult to get vital information quickly. Just because it was my ‘‘emergency’’ did not mean it was theirs. Yet, because the clinical experiences are entwined in almost every teacher preparation program, I needed to check and double-check each program to ensure that I had the correct hours and requirements for each field experience, faculty credentials, clinical faculty credentials, and a massive amount of other information. We collected this information on a regular basis, but often the faculty taught other courses or changed the syllabus within a course, not realizing the trickle-down effect each change can have on the overall program. Collecting these bits and pieces of data was like putting together a jigsaw puzzle with a strong fan blowing. I sometimes felt like I was annoying people by asking them time and again for the information, but it all came together in the end.
OVERCOMING THE CHALLENGES
Teresa’s Story
Shepherding an educator preparation unit through an NCATE review takes many different forms. From writing reports to creating DVDs, the work is essential for the success of the visit. Early in our preparation for the visit, we began sending teams of faculty to NCATE conferences to learn more about the NCATE process. Attending these conferences en masse gave us an
opportunity away from campus to begin dealing with issues related to our upcoming visit. This chance to learn and plan was invaluable and gave us a foundation for making decisions related to our accreditation review.
Change is the one constant in education, and the NCATE process is not immune to this. Since our previous NCATE visit, changes in the state of Ohio and changes in NCATE’s accreditation review shaped our path to reaccreditation. The state of Ohio implemented a number of initiatives, including standards for K-12 education, reading, and value-added assessment requirements that Institutions of Higher Education had to embrace and include in their programming. This necessitated some change in curriculum and program completion requirements. Ohio also legislated that approval as a teacher preparation unit would be determined by either NCATE or Teacher Education Accreditation Council (TEAC) accreditation. Before this, Ohio’s Department of Education was entirely responsible for program approval, which meant that Ohio trained reviewers for SPA-like program approval and as BOE members. Visits were conducted by state teams in schools that did not want to seek national accreditation. Regardless of whether a school wanted to be nationally accredited or not, the school still had to comply with Ohio’s requirements on Ohio’s timeline. Because of this, changes that were happening at NCATE were not quickly embraced by the state department. This impacted Miami in the program approval process. When we were in the middle of trying to put together SPA program reports according to Ohio’s timeline, NCATE revised the program approval process. We petitioned the state to be one of the pilot institutions for NCATE’s new program approval process. That decision became a catalyst for many changes at the institution that had a huge impact on the eventual BOE visit. It shaped our benchmarking, assessments, and data collection, and eventually our ability to write good SPA reports. All of this meant a re-education of faculty as we began the NCATE process.
While the NCATE process is on-going, if there is a first step leading to the visit, it would be the submission of SPA reports for national program recognition. Facilitating this process involved learning about each of the different SPAs and what they required. While there were certain commonalities, there were also differences. Because of the commonalities, I was able to supply some of the text and some documents that were needed. I relied on program coordinators to understand the uniqueness of each of their SPAs, but I was there to review and support what they were doing. I read their reports as a reviewer would read them. Once they received my feedback, they were able to revise. In addition to support from me, our dean
supported the faculty who had responsibility for writing the reports by offering a small honorarium for the work. Providing financial support sent the message that the work faculty were engaged in was important. The dean also supported faculty by funding attendance at NCATE conferences so that SPA report writers could attend sessions specific to their SPAs. We were very successful in our SPA submissions; all of our programs were nationally recognized and only a few were recognized with conditions. The SPA review process also simplified the writing of the Institutional Report. Much of what was needed for Standard 1 was covered through the SPA reports. All in all, going through the SPA process proved to be very beneficial to the success of our visit.
Another key to the success of our visit was the NCATE Steering Committee. We have always had this committee. Generally, it meets twice a year. In the two years leading up to the visit, our meetings were held more frequently, and during the semester of the visit, meetings were held weekly. Offshoots of this committee worked on particular aspects of NCATE preparation, including the revision of the Conceptual Framework (CF) and the Assessment Document. This group became the conduit for communication as we prepared for the visit. It included a representative from every program involved in the NCATE process as well as K-12 representatives. It was through this group that challenges and issues were discussed and solutions developed. This group ensured that everyone was on the same page: that everyone understood what was required and how we would meet those requirements. As a communication conduit, the members of this group were able to share information with the wider faculty base that would be involved in the visit. Communication is essential for a successful visit.
Communication with students and key stakeholders was also important. To this end, a series of promotional documents was prepared. A brochure outlining the CF was distributed to all students and K-12 personnel working with our students. In programs, this brochure was used as a teaching tool and continues to be used today. It is presented to students early in their programs to introduce them to the key concepts they will be learning about throughout their programs. We also developed a DVD that shares faculty perspectives on how we live our CF. Finally, posters were created and framed to illustrate key concepts related to the CF. Although these posters were created for the BOE visit, they still hang in various locations where educator preparation takes place. All of these tools greatly added to our ability to get the message out to all those who would be involved in the visit.
Probably the most daunting challenge was the inclusion of the graduate programs for the continuing preparation of teachers in the NCATE process.
There was a statewide misunderstanding about the inclusion of these programs. In the previous NCATE visit, we had been told that these programs would not be reviewed by NCATE. That assumption carried through until about two years before the visit. When it became apparent that these programs needed to be included, we did not have much time to pull things together. These programs are housed in the College of Arts and Science (CAS) and the SEHS. To facilitate the process of bringing the programs in CAS into alignment, a meeting was held with the deans and associate deans from both divisions, the Provost, and me. The purpose of this meeting was to inform and to elicit administrative support for what would need to be done. It was a successful meeting. The next step was to meet with the coordinators of the four programs to begin the process of educating them on what they would need to do to be ready for the NCATE review. This was much harder. It basically came down to informing those program coordinators that if they did not play ball, they would put the entire NCATE review in jeopardy, and that if we were not accredited, their ability to offer their programs would be at risk. One of the programs decided not to participate in the NCATE review and was suspended. The other three programs then began the difficult task of revising/tweaking their programs to meet NCATE requirements. While the journey to that point was fraught with struggles and some resistance, in the end, all three programs agreed that doing this work actually had strengthened their programs. In our own division, the struggle was a bit easier because there was a basic understanding of what NCATE was all about. However, there was resistance on the part of the graduate faculty as they worked to create/revise aspects of the programs to meet requirements. Luckily, the faculty member who led this charge was dogged in her determination to bring these programs into compliance. In the end, they, too, were successful in meeting the NCATE requirements.
Another challenge in meeting NCATE requirements has been the technical aspect of the assessment system. This challenge began with the creation of our own data collection system. We decided early on not to invest in one of the commercial systems. Our system is based on entering key assessment scores into Blackboard (Bb) and then using a Bb/Banner interface to retrieve the data. It took hours and hours of work on the part of information technology (IT) services and weekly meetings between a representative from IT services, the associate dean, and me to develop the technical system. Faculty were trained in how to use the system. Once the system was implemented, problems became apparent and had to be dealt with. As greater understanding of the nature of key assessments and the information they needed to
provide us grew, so did the data collection. Faculty saw this process as separate from their teaching/grading and many times did not enter the scores into Bb. There were some semesters when data entry was not resolved until weeks after the semester ended. As Data Loop Reports were implemented to bring the data collection process to completion, faculty responsible for conducting meetings about the data and writing the reports felt that this was an additional layer of work added to their already full workloads. While they understood the necessity of it, they still saw it as an imposition.
Finally, as the NCATE visit grew closer, it became necessary to ask faculty to include certain information in their syllabi: standards alignment, key assessment statement, CF statement, and dispositions statement. Faculty did not embrace this idea. They considered their syllabi a reflection of themselves as teachers, and these ‘‘NCATE components’’ were not a part of that reflection. Developing an understanding that these ‘‘NCATE components’’ do indeed represent what we as an educator preparation unit are trying to do to develop educators was a necessary step in establishing a culture of accountability. A compromise was reached. Because the CF is a reflection of who we are, faculty agreed to put it on the first page of their syllabi. The other components became an addendum to the syllabi. The issue of the syllabi pointed to a key concept for success in preparing for NCATE accreditation. Whoever is in charge of leading the process must continually communicate how all of the aspects of NCATE are there to strengthen our programs and should not be viewed as an ‘‘add-on’’ to what we do. Communication and compromise are essential, and facing all of these challenges is very much a part of the NCATE process.
James’ Story
In undertaking any change at the university level, faculty buy-in may be the single most important factor to consider. With it, almost anything can be done. Without it, the process is arduous and the results artificial. We ended our last NCATE visit with little faculty buy-in and that is where we picked up. Knowing the long road ahead and the many changes we hoped to incorporate if we wished to approach the subsequent NCATE experience differently, we realized that a sense of faculty ownership in the process was critical. Faculty needed to revisit the CF honestly; rewrite and add to many of the key assessments; collect and analyze data for data loop reports; communicate more precisely and regularly across programs, departments, divisions, and communities; and gain a better understanding of what they
were doing as part of a bigger whole. One clear tension was that we would be asking faculty to increase their level of commitment to NCATE work while simultaneously recognizing that the requirements for promotion and tenure (which seemed to value none of our NCATE work) were increasing. I wish I could say that by the time our next NCATE visit occurred, we had total faculty buy-in and that I was responsible for that turnaround. Neither would be true. However, what would be fair to say is that we saw a significant increase in faculty participation and commitment and that this made all the difference. While total buy-in is not necessary (or in some institutions even possible or ideal), it is absolutely vital to have a critical mass of faculty who are on board, dedicated, and supportive.
Looking back, I have concluded that there were three primary factors that led to this positive change: a team concept, simple perseverance, and a sense of humor. The idea of a ‘‘team concept’’ sounds good, of course – perhaps even trite. Are we not all part of the team? However, what I refer to in this instance is a small core of highly dedicated faculty members with differing skill sets who have a successful NCATE review as one of their top professional priorities for a period of several years. This group is smaller than the NCATE Steering Committee, consisting of perhaps four or five members. These members may or may not be officially designated as part of any core leadership team, but everyone knows that they are. The members of this team drive each other, lean on each other, cover for each other, and, generally, make sure that everyone keeps an eye on the ball. Our team was an informal one and Teresa was at its head. She was the one member who breathed, ate, and slept NCATE for years on end. She was the member whose approach down the hall sent faculty ducking into the first available open office for fear of having another key assessment or data loop conversation. She carried the load and bore the brunt of responsibility. But she did not do it alone. Several others were beside her (or, more often, a half step behind). These other team members acted as liaisons and interpreters for her. They critiqued and consoled her. They sometimes blamed her (with her knowledge) and other times took the blame for her. They were sometimes in the foreground of the work being done and often not seen at all.
The two key points here are that, first, no one person can be responsible for ensuring a successful NCATE visit at an institution of any size. It is not fair or possible. A dedicated team is needed, consisting of a collection of professional friends with a like focus and differing strengths. Second, the team needs a clear leader: a person whose primary (or better yet, sole) job is to organize the NCATE effort. The team concept does not mean ‘‘NCATE by committee.’’ Without a clearly established leader, too many details will just fall through the cracks.
Throughout, a large measure of perseverance is also needed. This perseverance is steady and relentless without coming across as fanatical or desperate. There cannot be gaps in the work or inconsistency in the message. It is so easy to take a break when preparing for NCATE. The semester is winding down, people are gone for the summer, the semester is just getting started, there are other more immediate and pressing deadlines and agendas besides NCATE, and besides, is that not so-and-so’s job? But the continual focus is what brings about a culture of accountability. The message is that this is not a one-time shot or something we are doing for someone else. This is work that we need to do, for us – to get better. When a culture of accountability does not exist (as was the case for us), perseverance makes the difference. The continuous discussions about our work eventually became routine – meeting after meeting after meeting. This is not going away. Get on board.
Finally, a sense of humor was an essential ingredient in the process. This NCATE review meant a great deal to our unit. It was very serious work and, for some of us, personal. In such a climate, it was easy to take ourselves too seriously. Long hours, intense discussions, and the not uncommon setback had to be broken up with laughter and celebrations. As a result, the leadership team found that we sincerely liked each other and, as the process continued, we discovered many new and good friends who had been working near us all along. When humor was interspersed in such a climate, serious discourse and intense disagreement on important topics often proved productive.
Ellen’s Story
From my perspective, I saw three challenges: collecting and organizing data from the years before my arrival, encouraging faculty to embrace our key assessment for student teaching and to utilize the data from this assessment to enhance instruction, and, lastly, gathering data from graduate programs and other departments that used alternative methods of data collection. As we prepared for the NCATE visit, we were able to pull the records from the past seven years and reconstruct them in the format that was required by NCATE. Because of the changes in technology in the past decade, some files were saved in formats that were not usable or were difficult to decipher, but the fact that we had the information made it relatively easy to reconfigure the charts and databases and produce the necessary records. In a university the size of ours, there are several programs that utilize field experiences but do
not necessarily channel those through the Office of Student Teaching and Field Experiences. Those courses required the most labor-intensive data collection because the data had to be obtained from other sources, some of them completely unknown to my office. Once the records were obtained, they had to be reorganized to fit with the templates provided by NCATE. Even though the process took time, we felt that we were prepared to present our data rather early on. With each passing semester, adding additional documentation was simple because of the templates and procedures we had already established.
In clinical practice, or student teaching, we have a key assessment. It was developed in-house in 2003 and has evolved from that early stage into what we now call Project Learning Curve (PLC). This assessment was published in NCATE’s It’s All About Student Learning: Assessing Teacher Candidates’ Ability to Impact P-12 Students (Wise, Ehrenberg, & Leibbrand, 2008). PLC was designed to measure our candidates’ effectiveness in Pre-K-12 classrooms. The project came about as a response to an ‘‘area for improvement’’ from our previous NCATE visit and was written by the then Associate Dean and me. The feeling was that there was not time to get faculty buy-in because we needed to implement this assessment immediately. As a result, faculty did not understand what this assessment was all about. This proved to be a real negative because faculty were not preparing candidates for what this assessment required of them. It took some work on the part of the Dean’s Office to correct this situation.
PLC follows the Assess-Plan-Teach cycle, a data-driven instructional model. Our candidates plan a general lesson, then they pre-assess the students with two different types of instruments, and use the information they take from those to design a lesson that is specific to their learners’ needs. They use research to determine the best teaching strategies and assessment models. After careful planning, the candidates teach the lesson and then post-assess. Candidates collect, disaggregate, and analyze the data to show how their students responded. They provide data charts and use these to compare and contrast knowledge gained from the pre-assessment to the post-assessment. These projects are then scored by a trained and calibrated group of student teaching supervisors, and then, of course, the data is collected and organized for review by appropriate faculty. We believe the results give us a very accurate picture of the effectiveness of our young candidates. Because this assessment has been in place for seven years and the data and samples have been collected for each cohort of students, it was relatively simple to gather that information for the NCATE review.
The bigger challenge came when I realized that I needed similar data for the graduate programs and some programs in the College of Arts and Science. There were many more layers than I realized and, as we identified each one, another one would emerge. Even though these programs operated independently when they arranged their field placements and clinical experiences, we still needed to coordinate the data for Standard 3 of the report. Because of the nature of these programs, I came to realize that they did not collect data in the same way that my office does, and often, because the programs were new to NCATE accreditation, they had little or no data to report at the time of the review. The graduate programs are often populated by teachers or administrators who have already completed the undergraduate teacher education program. In their graduate programs, the field and clinical experiences are determined by the positions in which they already teach or are employed. Most teachers and administrative candidates complete these experiences in their home schools. Because of this, they do not go through the Office of Student Teaching and Field Experiences, leaving us without a ‘‘paper trail’’ of their experiences, evaluations, and other information. I had to collect this information from several sources and then make sure it was represented accurately and succinctly. Clear and concise communication was the key to success.
FINAL PREPARATIONS
Teresa’s Story
Two weeks before the BOE visit, faculty, K-12 representatives, and students participated in an abbreviated mock visit. During this mock visit, interviews with faculty and a poster session were held. The purpose was to give participants an idea of what to expect when the BOE team came to campus. As I planned this mock visit, I knew that I wanted a ‘‘BOE Team’’ to conduct the interviews. In Ohio, the people responsible for leading NCATE work have established strong collegial bonds. We meet at least twice a year at state conferences to share ideas and to seek advice. We know that we can call on each other whenever necessary. I asked four of my colleagues and our state consultant if they would be willing to come to our campus as our ‘‘BOE Team’’ for the mock visit. I told them I could not pay them, but I would feed them and I would do the same for them in the future. They agreed to come without hesitation. Each was assigned to a specific standard and they conducted interviews with faculty,
K-12 personnel, and candidates. They had 30 minutes for each interview. During this time, they posed questions that the BOE might ask and then spent the last few minutes sharing ideas about how it would be best to respond. After the interviews were over, the poster session was conducted. This session served a dual purpose: a chance to rehearse for the real event and a thank you to all for their hard work in preparing for the visit. Beverages and hors d’oeuvres were served. As the ‘‘BOE Team’’ moved through the exhibits, they continued interviewing and giving advice. As a result of this, many of the posters were revised and some additional posters were added. At the end of the poster session, I hosted a dinner for the ‘‘BOE Team.’’ During this time, they shared possible concerns and offered many suggestions. These proved to be invaluable.
As this mock visit began, I watched as faculty arrived for their ‘‘interviews.’’ Body language and facial expressions can reveal much about how people are feeling. At the start, what I saw was nervousness, resentment, resistance, and, in general, a kind of do-I-really-have-to-do-this attitude. As the afternoon and evening progressed, however, a marked change in body language and facial expressions occurred. People were smiling and were enthusiastic. Many stopped me to say they were glad that we had done this. By going through this experience, they realized that we as an educator preparation unit have a lot to celebrate, and by the end of the evening, the experience had become a celebration. Having the mock visit probably did more than anything else to reassure and bolster everyone as we headed into the visit; it went a long way toward erasing the bad memories of the previous visit.
Ellen’s Story
Preparing for the actual visit was stressful and exciting at the same time. Our NCATE Coordinator, Teresa, was very thorough. She communicated with us on a regular basis and gave us a timeline for when certain pieces needed to be in place. We had weekly Steering Committee meetings that were purposeful and well planned. Each week, Teresa went over a different aspect of the NCATE review so that we each knew what to prepare for and what to expect at the visit. The Steering Committee was a hardworking group of individuals from all departments and programs. It was a good way to meet the leaders from other departments. Often I had spoken to them on the phone but never in person. It was nice to put a face with a name. Communication was the key to success for the entire project. It was especially helpful for me because I like to plan ahead. It was also stressful
because, as I pointed out to Teresa, this process was her job; however, for the rest of us, NCATE was in addition to our regularly assigned job responsibilities. Obviously, collecting data and following the standards are part of our regular jobs, but now we were adding meetings, poster-making and poster presentation sessions, interviews, meetings, web sites to maintain, meetings, video production, oh, and did I mention meetings? Everything was coming together but it was also difficult to keep all of the plates spinning at once. After the final written report was uploaded to the web site, things seemed to be a little easier to manage. It would have been infinitely more difficult to put it all together if we had not had frequent and clear communication from Teresa and each other. It kept us in check and all working toward the same goals.
The mock poster session was a wonderful chance for many of our faculty and staff to get together and see what we were each doing in our programs. The Dean gave her approval for the event to be catered with finger foods and soft drinks, and it became something of a social event where everyone could go table to table and learn about programs that they might not have known about otherwise. It also gave each person a sense of pride when they were presenting their programs to the others. We do not often have a chance to talk about what we do in the big scheme of things, and for many, it was an opportunity to really show off the good work of their programs. Even though our mission and vision statements were clarified and put in place years ago, preparing for the NCATE process helped each person refocus on the goals of the CF and develop a deeper understanding of what these mean in the larger context of the university. Several faculty members from many different programs were excited to participate in the production of a video that highlighted the heart of our CF, ‘‘Preparing Caring, Competent, and Transformative Educators.’’ This video was shown in many of the teacher education classes and also at meetings and other gatherings. It was a creative and fun way to tie our programs together and say ‘‘This is who we are!’’
James’ Story
In the final couple of weeks leading up to the official visit, my role shifted to that of cheerleader. The message was simple. Complete the race. Do not stop now. Be here, be honest, be positive. I sensed that the faculty were almost ‘‘NCATE’d out.’’ I needed them to hang on for one more week. As an incentive, I told everyone that they could take the entire week off after
the BOE visit. Sly smiles emerged as they realized that this was our spring break week. I stressed how much work had been done by so many people, naming many of them within and outside the department. Now it was our turn to take over and finish the job. Tell your story. Let your passion show.
THE VISIT
Ellen’s Story
Getting prepared for the visit took a lot of coordination and scheduling. Those of us involved in the visit basically blocked out our calendars for that week so that we could be prepared at a moment’s notice to meet, gather data, or leave town if we needed to. We held the poster session the first night of the visit. Each department was well represented, and because of our frequent Steering Committee meetings and other gatherings, it was a festive event and everyone seemed to exude pride in their programs. The evening passed quickly, and in the end, the poster session was a big hit.
The next few days of the NCATE visit also moved at a rapid pace. We had to schedule interviews and meetings for the different departments and programs. Many types of interviews were required of the participants involved in clinical experiences (Standard 3). I spent a great deal of time coordinating interview schedules. The biggest challenge in this was asking the PK-12 school teachers and administrators to commit to coming to campus for interviews, many of which were scheduled during the school day. For clinical experiences, we had to invite student teachers and have them excused from their teaching duties; my advisory council, which includes superintendents, human resource directors, principals, cooperating teachers, and student teachers; cooperating teachers from several schools; student teaching supervisors; and my office staff. For one of the interviews, we had a small group collected in the interview room and also a superintendent on speakerphone. In the future, it may be advantageous to use online video communications such as Skype for these interviews. It would save time and gasoline. Because parking is such an issue on this campus and we could not get cooperation from parking services, one of my assistants ran a shuttle van from a grocery store parking lot to make sure everyone got here on time. Before the interviews, we had the participants gather in what we called the ‘‘green room.’’ We had coffee and snacks for them. By doing this,
we could have student workers take each group to their interview room at the designated time without people getting lost or wandering around the building. It kept the interviews on schedule and took away the anxiety that often comes with unfamiliar situations. I give a great deal of the credit for our success to the people who participated in the interviews. All were professionally dressed and arrived with positive attitudes. The interviewees were candid and did not just say that everything was perfect. They gave a realistic view of our programs and offered suggestions for continued growth. I think we were all braced for a demanding visit, and it was nothing like that. The NCATE team was professional and each person was very easy to work with. Getting ready for the visit was a challenge, but once it started, it ran as smooth as silk and was really the most pleasant part of the whole process. Waiting for the final report – now that was a little stressful!
James’ Story
The day that I had anticipated for several years finally arrived – the actual BOE visit. I had looked forward to it primarily because it was something I wanted to put behind me. Unlike the last time I had gone through this as chair, I felt strangely calm. I knew we had done all that we could do. But more importantly, I believed that whatever came out of this visit would be representative of what we did as a unit. I knew we were not perfect and did not feel the need to attempt to present ourselves as such. I did have confidence that we would accurately demonstrate our work and would no doubt learn ways in which we could improve. I actually felt myself looking forward to the next three days.
The visit began with the poster session the evening the team arrived and it could not have gone better. It was so enjoyable to see faculty from across campus, candidates, partner school teachers, and university administration all together telling their part of our story. The poster session set the stage for the rest of the visit. As the days progressed, I found that my mind began to focus on other things – my usual daily work. After a while, I almost forgot that the BOE team was here. I had only three scheduled meetings that required me to be somewhere at a particular time. As I would get engrossed in my work, I would almost miss these meetings. Additionally, word of very positive meetings started filtering back from all quarters. There were no frantic meetings, no last-minute searches for documents, and no defensive posturing. What emerged was a feeling of validation.
Teresa’s Story
As the visit approached, I knew that all that could be done had been done in preparation. As I drove to the airport to pick up team members, it was with a kind of fatalism. What was going to happen was going to happen and I would not have any more control over it. I had ceded control to the BOE Team and the people on campus who would play a role in the visit. I had been told to expect to spend the actual days of the visit on the run to find documents that the team needed. This did not really happen. Each morning, meetings were held with the BOE Chair and then I went back to my office. Some days I did have to come up with more materials, but not often. I attended the interviews I was supposed to attend. Occasionally, faculty would stop by to share their experiences in their interviews. But, in general, it almost seemed as though everything was just going along as normal. I continued doing other work that needed to be done and everyone just went about their daily routine. In many ways, I felt like I was outside looking in. As the days progressed, everyone became more and more lighthearted. They were telling our story and they knew it was a good one.
On the final morning, as the Dean, Associate Dean, and I walked over to the President’s office to hear what the team had decided, we were more curious than nervous. When we were told that all standards had been met with no areas for improvement, all of us, including the Provost and the President, cheered. It was an exhilarating moment. At lunchtime, all of the faculty were invited to a luncheon to hear the result officially – of course, word had spread quickly throughout the unit. It was a day of celebration.
TRIUMPH AND MOVING FORWARD
Through the poster session, the written report, and improved communication between departments and divisions, we all became more aware of and proud of the CF of the SEHS. The idea of becoming transformative educators really hit home with our faculty and our candidates. There were many unexpected results of this process. Perhaps the most meaningful was that many of us met faculty and staff from other departments and divisions of the university and began important dialogues with these individuals, which continue to this day. Through these meetings, we all became focused on the endgame, which is preparing our candidates to become effective educators as they step out into the world. Going through the NCATE process together
gave us a renewed respect for the part that each of us plays in teacher preparation. For some faculty members who had not previously gathered or disaggregated data, the success and growth of our candidates affirmed their beliefs about themselves as educators, and many experienced that ‘‘Eureka!’’ feeling as the data unfolded. Over the years, we have gathered quite a bit of data and artifacts. The NCATE process helped us narrow our focus to the information that gives us the most accurate feedback on our students and the education programs we have at Miami University. We can make better administrative decisions when we target the types of data we need to collect and eliminate superfluous paperwork. As we move to be more environmentally aware, we will also discover more efficient forms of collecting data using technology, thus reducing our dependency on paper and also reducing our physical storage needs.
Now, almost 10 months later, the celebrating is over, but the work that led to NCATE accreditation continues. Going through the NCATE process and, especially, writing the Institutional Report, really brings home what it is all about. Working together was crucial to our success. We used to joke that when the NCATE Coordinator came down the hall, it was time to run in the other direction. The reality became that we all needed to come together to do this work. It was indeed important work that validated who we are as an educator preparation unit. Our Institutional Report was the representation of that reality. Through writing the report, all of the nuances of the accreditation process came together. Having someone who sees the whole picture and brings all of the key stakeholders together is essential. Unfortunately, many times the person who leads the process only does it once and then the torch is passed on to someone else. Our goal in the years between this last visit and the next visit is to have everything in order so that whoever leads the process next time does not have to start from scratch.
We will continue to use data to reflect on and improve our programs. The NCATE process has made all of us aware of the benefits of being reflective practitioners and making data-driven decisions instead of relying on gut instincts or institutional traditions. We will know where we want to go and how to get there. The doors between departments have been opened for communication and we will continue to check with each other about curriculum decisions, clinical experiences, and the big picture of preparing caring, competent, and transformative educators who will lead us into the
future of education. The process of NCATE accreditation is not the enemy; rather, it is a tool of continuous improvement with great benefits for all involved in the process.
CHAPTER 17 REFLECTIONS ON THE SHARED ORDEAL OF ACCREDITATION ACROSS INSTITUTIONAL NARRATIVES Lynnette B. Erickson and Nancy Wentworth The chapters in this book present narratives from a variety of teacher education programs as they engaged in the accreditation process. These compelling accounts are characteristic of the shared ordeal of accreditation common among many teacher preparation programs. The authors document the way personal and programmatic frustrations emerge throughout the processes of accreditation, revealing not only the tensions related to the accreditation process itself but also the concerns about the foundational concepts that undergirded the process. As we analyzed each chapter, distinct similarities as well as differences in the patterns and depths of tensions revealed in each of the narratives became apparent. (Throughout this chapter, we will reference each institution using the abbreviations in Table 1 found in Chapter 1.) Within the chapters, authors described the tensions associated with their accreditation experiences using words such as conflict, friction, hostility, opposition, strain, stress, dread, worry, nervousness, anxiety, pressure, and apprehension. Such descriptions led us to believe that even though there have been benefits realized, reflections on the process reveal more about the
contradictions and complexities of the emotions experienced than the benefits themselves. Differences in personal and programmatic tensions were influenced by institutional histories of accreditation, decisions to challenge existing accreditation and align with new or different accreditation bodies, sizes of programs, and levels of support extended toward accreditation. Although these differences may have contributed to what made the experiences of each of the programs and the accounts of them unique, it was the similarities in the tensions across programs that collectively informed our understanding of the accreditation process. In preparing to write this chapter, we analyzed the narratives to explore whether the tensions associated with accreditation served only as sources of personal anxiety and institutional disequilibrium, or if those inadvertent tensions actually contributed to the improvement of teacher education programs. We have organized the concerns that were uncovered in our analysis of the chapters into the following sections: (a) Top-Down or Bottom-Up Process, (b) Leadership, (c) Culture of Evidence, (d) Collaboration, (e) Cost, and (f) Lessons Learned. We have organized our discussion beginning with tensions we think are the most theoretical, moving to those that are more programmatic, and finally, we end with those that are more personal. The first section considers the process as a whole, arguing that accreditation is a top-down structure due to federal mandates that empower accreditation bodies over teacher preparation programs. From there, the model becomes bottom-up as program faculty engage in planning and decision-making as the process unfolds. Next, we discuss the tensions of leadership, specifically the importance of competent and consistent leadership in the accreditation process. The ‘‘Culture of Evidence’’ section is more pragmatic, presenting the tensions associated with developing and implementing assessment systems to satisfy requirements put forth by accrediting bodies and yet meet the specific needs of a program. The ‘‘Collaboration’’ section addresses the tensions experienced by stakeholders from teacher education, the arts and sciences, and the public schools as they cooperatively developed various components of the accreditation goals. Following this section, we highlight tensions that emerge as personal and institutional costs that result from the accreditation process. Finally, we discuss the lessons learned from our analysis of these narratives. These sections represent the most common concerns that emerged in individual narratives provided in this book. We acknowledge that these tensions are fluid and are highly interconnected. The tensions revealed in our analysis were not necessarily resolved, but our discussion of them communicates the uncertainties about the process that remain, the new
questions generated, and the impetus these provide for continued introspection for anyone undertaking accreditation.
TOP-DOWN OR BOTTOM-UP PROCESS Accountability requirements established by state and national mandates have positioned accreditation bodies as overseers of institutional compliance and quality control of teacher preparation programs. These bodies then dictate the procedures and criteria for how preparation programs will prove their competence in the preparation of teachers who are deemed highly qualified. This process of mandated accreditation, by its very nature, is imposed as a top-down structure even when it is couched in bottom-up processes. Nearly all of the institutions indicated that they had some type of bottom-up procedures for meeting the top-down requirements of accreditation. Strategic involvement of faculty from the beginning of the process made ‘‘it personal, create[d] faculty ‘buy in’, produce[d] commitment, and thus more investment’’ (Ackerman and Hoover, St. Cloud State University). As Pierce and Simmerman (Utah Valley University) pointed out, both requiring and allowing faculty participation in the decision-making process and the development of common goals helped establish joint ownership of the process among their faculty. Hutchison, Buss, Ellsworth, and Persichitte (University of Wyoming) also indicated that successful accreditation processes require faculty support and input on both the process and the decisions that are made. Indeed, they acknowledged that their decision to include all college faculty involved with teacher preparation was stressful, but central in yielding positive dividends in the process. Utilizing a bottom-up task within a top-down structure positions stakeholders as worker bees to accomplish a project that may or may not be seen by them as having personal or professional benefit – thus tensions are fostered. Western Governors University was the only institution that did not describe a bottom-up system for accreditation. Zane, Schnitz, and Abel (Western Governors University) made it clear that their organizational structure was a top-down model. They explained that the online university system was created to align with the business model upon which their university functions. The teacher preparation program at Western Governors University was developed with the specific purpose of aligning with state and national requirements to meet the market needs and requirements for teachers across the United States. ‘‘Academic leaders organized each program into a coherent scope and sequence,’’ and each program was essentially created in
isolation from faculty input or buy-in. This approach to meeting accreditation requirements is the complete opposite of the bottom-up organizations reported by the other authors in the book. Western Governors University’s faculty were not included in the decisions about curriculum, program, or assessment, because those decisions had been made by the academic leaders when the program was originally created and developed. This uniquely top-down structure appears to have the potential to predestine a program to meet the external accountability requirements for accreditation and eliminate many of the tensions associated with working with multiple individuals or groups to establish common norms and outcomes. In contrast, to meet accreditation requirements, traditional university settings have little choice but to establish ‘‘faculty ownership through participation in the decision making process and common goals’’ (Pierce and Simmerman, Utah Valley University) that are negotiated across the program. Whereas top-down structures like that of Western Governors University may negate many of the tensions associated with structure and organization, bottom-up processes may appear to allow for faculty influence and may indeed provide a modicum of control over the procedures and outcomes. Top-down institutional expectations paired with bottom-up delegation of responsibility for achieving accreditation created tensions regarding who was ultimately responsible and accountable for accomplishing the task. Marina, Chance, and Repman (Georgia Southern University) noted the conflicts that existed between state, federal, and university policies and how these inconsistencies led to confusion and lack of direction on the part of faculty members, whether in the United States or internationally. Fallona and Jones (University of Southern Maine) expressed similar tensions as they questioned, ‘‘Do accreditation agencies empower or disempower faculties as they endeavor to cultivate informed, responsible, and empowered teachers? Standards are important, but who gets to decide what they are and what evidence counts to demonstrate that those standards are being met?’’ Tensions of not knowing who establishes accountability and who is responsible may influence the levels of buy-in, mentioned earlier, by stakeholders in the teacher preparation program.
LEADERSHIP ‘‘Administrators in higher education may be accountable for that over which they have little responsibility or control’’ (Daigle & Cuocco, 2002, p. 4), which
therefore increases the responsibility of those who do have control of programs and assessments. Some authors in this book implied that accreditation of the teacher preparation program was a responsibility to be shared by all members of the university in some fashion. They acknowledged that the structure of unit governance is complex; yet, it is also the key to successful accreditation (Osguthorpe and Snow-Gerono, Boise State University). In the spirit of joint ownership, Neufeld (Lander University) identified the role of the university at large in the accreditation process to be one of support to the unit or program. Osguthorpe and Snow-Gerono (Boise State University) maintained that programs must be able to depend on their institutions to provide the assistance necessary to successfully accomplish the accreditation task. This interdependence of programs and institutions creates tension around the shared responsibility for accreditation because each has its own vision of what the other should be doing. Ackerman and Hoover (St. Cloud State University) went further to mention the importance of breaking down administrative boundaries. To meet accreditation requirements that might go beyond the scope of an administrator’s typical stewardship for a program, these boundaries need to be expanded to the unit level of teacher preparation programs. Accountability for achieving accreditation often falls to colleges of education, and thus to their deans. Tensions arise when college of education deans realize their limited power to influence deans of other colleges within the unit or program. The tension of who is accountable then becomes an issue of rank and power. Erickson, Wentworth, and Black (Brigham Young University) explained that their institution appointed a university vice president to assume the position of administrator of the unit. This administrative position superseded all other deans and department heads in directing the accreditation process and thus addressed the question of who was in charge and resolved some of the tensions. While administrators may be accountable for accreditation, programs and the faculty within them are often charged with the responsibility for interpreting, creating, developing, testing, and implementing all that is necessary to meet accreditation requirements. In addition, they must then provide evidentiary proof that the decisions they made were instrumental in preparing high-quality teachers. Accreditation expectations are often presented by administrators touting the benefits of institutional betterment and teamwork. They talk of ‘‘all for one and one for all.’’ Craig (University of Houston), however, realized that assurances of shared campus-wide ownership for the process were hollow and that ‘‘When push came to shove, I came to understand that responsibility for teacher education solely resided
with the Department of Curriculum and Instruction.’’ Tensions of being required to accept responsibility yet having little power over the process frequently leave individual faculty members with a strong sense that the accreditation process was something done to them, rather than something that they had authority over. Ackerman and Hoover (St. Cloud State University) described the accreditation process as ‘‘precarious,’’ and because of that, as Shiveley, McGowan, and Hill (Miami University) observed, the leadership team ‘‘drives each other, leans on each other, covers for each other, and generally, makes sure that everyone keeps their eye on the ball.’’ All the authors demonstrated clear understandings that accreditation requirements and processes are flexible and subject to change, which was unsettling to them and a source of frustration. Hutchison, Buss, Ellsworth, and Persichitte (University of Wyoming) expressed that ‘‘the change in rules, timelines, process, and expectations for NCATE and the SPAs created significant tensions for us as we prepared for our 2008 accreditation review.’’ As a result of the precariousness of the accreditation process, leaders play an important role in stabilizing and successfully guiding the stakeholders to accreditation. The consistency, knowledge, and experience of leadership during the accreditation endeavor influence the tensions felt by those engaged in the process. Consistency in Leadership The accreditation effort requires consistent leadership to provide stability in the process. Several authors shared the tensions they experienced of having leaders appointed who had only limited time and experience in the program or institution. Shiveley, McGowan, and Hill (Miami University) reported that during the preparation for their 2001 NCATE visit, the fifth new dean in eight years was hired. The NCATE coordinator had been hired directly from the public schools and had not been at the institution during the previous NCATE visit. She reported that her ‘‘credibility with the faculty was tenuous’’; until the faculty came to trust the coordinator, the accreditation process moved slowly. Knowledgeable Leadership Too often the task of accreditation is assigned to those who have little knowledge of, or experience with, accreditation requirements and processes.
There is some comfort to those charged with sorting out the complexities of accreditation in knowing that they can be guided through the process. However, tensions build when faculty members learn that their leaders are under-informed or are just learning along with them. Osguthorpe and Snow-Gerono (Boise State University) revealed tensions this program experienced as leaders retired, left, or were replaced during the preparation process ‘‘creating a void in experience.’’ Their tensions were compounded when they realized that ‘‘it is helpful to have a BOE [Board of Examiners – NCATE] trained institutional representative directing the accreditation effort’’ – a realization that came after the retirement of their BOE trained leader just a year before the accreditation visit. On the basis of their experience, Osguthorpe and Snow-Gerono (Boise State University) suggested ‘‘forward thinking would include potentially having multiple BOE trained institutional representatives on faculty or staff, while still honoring flexibility in retirement and job responsibilities with accommodating workload policies and appreciation for long-term employees.’’ Monroe-Baillargeon (Alfred University) recounted her experience as a newly hired department chair with instructions to have a TEAC brief ready in just over two years. As a leader, she revealed her understanding of the accreditation process when she observed, ‘‘In many ways, we were unprepared as a division to embark on such a challenging task, but then again, who is?’’. The learn-as-you-go approach to accreditation is one born of necessity and is a learning experience for leaders as much as it is for those whom they lead. Knowledgeable and experienced leaders lessen tensions experienced by all participants because they can better attend to the process of satisfying the mandates of accreditation bodies.
CULTURE OF EVIDENCE In June 2006, Educational Testing Service issued a paper titled A Culture of Evidence: Postsecondary Assessment and Learning Outcomes, which outlined accountability models for institutions of higher education. In that paper, Dwyer, Millett, and Payne (2006) made the following claim that supports the call for greater accountability in higher education, including teacher preparation programs: Postsecondary education today is not driven by hard evidence of its effectiveness…. The lack of a culture oriented toward evidence of specific student outcomes hampers informed decision-making by institutions, by students and their families, and by the future employers of college graduates. What is needed is a systemic, data-driven,
comprehensive approach to understanding the quality of two-year and four-year postsecondary education, with direct, valid and reliable measures of student learning (p. 1).
Key words within this statement help define culture of evidence: hard evidence, outcomes, informed decision-making, systemic, data-driven, comprehensive, valid, and reliable. These same key words and others like them are reflected in the language used by accrediting bodies in their requirements for teacher preparation program accreditation. Challenges associated with establishing a culture of evidence were apparent as the authors shared their personal and program-wide experiences and tensions associated with assessment systems, data management, data-based decision-making, and balancing data and professional judgment.
Assessment and Data Management Systems Marina, Chance, and Repman (Georgia Southern University) observed that in teacher preparation, quality assessment is important, but not clearly defined. Developing assessment systems is a tension-filled enterprise. As observed by Pierce and Simmerman (Utah Valley University), construction and implementation of these accountability systems can force immediate change in programs, content, delivery methods, supervision, or other issues directly related to a given course or program element. Shiveley, McGowan, and Hill (Miami University) commented that much of the data they collected for accreditation was externally driven and not an accurate reflection of their work. Responding to the need to create effective assessment systems that would also meet accrediting agency demands, each teacher preparation program indicated that they had to focus both on their particular program values and goals and on the components required for their accreditation. Monroe-Baillargeon (Alfred University) reflected, One of the greatest tensions with achieving accreditation was not to create a system of assessment and data analysis that ultimately controls your program, rather, the goal is to create a system that remains true to your beliefs about teacher development while validating the success of your practices.
In most of these accounts, programs were in the midst of creating and developing assessment systems while they simultaneously struggled to meet additional accreditation demands. This tension is most poignantly described in the account of White, Tutela,
Lipuma, and Vassallo (Rutgers University). They created program documents and assessments, artifacts representing multiple levels and sources of data, while they also created a mission statement, conceptual framework, and program objectives. Several of the teacher preparation programs mentioned the frustrations they had creating systems to meet the different assessment requirements for state, national, and Specialized Professional Association (SPA) recognition. Erickson, Wentworth, and Black (Brigham Young University) provided an example of how the broad scope of their ‘‘unit’’ made developing a system for assessment a significant source of tension across 27 licensure programs and 7 colleges. Their challenge to create new assessment tools that met the varied needs of all members of the unit was a source of pressure and anxiety before and after accreditation. Unit governance responded to the concerns of the individual licensure programs by allowing minor modifications of commonly used instruments, as long as the integrity of the instruments and the data collected still provided the information required of all programs within the unit. This modification of common assessments required for national accreditation allowed Brigham Young University to adapt to SPA-specific and state requirements. The process of setting up assessment and data management systems is often one of trial and error, even when there is no time to waste and errors are costly. Like many teacher preparation programs, Boise State University (Osguthorpe and Snow-Gerono) was on a tight timeline in preparation for their accreditation visit. After purchasing a commercial data management system, they found that it did not meet their needs, causing them to reconsider and design their own system, which put them behind in the process. Although they learned ‘‘it was better to start over than to move ahead with a flawed system,’’ experiences like these add to the already existing tensions associated with the process. The design, construction, implementation, and population of assessment and data management systems with artifacts also proved to be sources of frustration for other authors, who reported that, once adopted or created, their systems turned out to be cumbersome, slow, and time-consuming. While attention to the assessment system as a whole caused difficulty, individual components of it also contributed to tensions within the accreditation process. For example, Hausfather and Williams (Maryville University of St. Louis) found that their newly created clinical assessment had many items to rate, was unwieldy, time-consuming, and difficult to analyze for trend data. White, Tutela, Lipuma, and Vassallo (Rutgers University) found, after implementing some of their assessments, that the language used
was interpreted differently by teacher educators and teachers, yielding inconsistent ratings of candidate competency and therefore questionable data. After investments of time, money, and more time, challenges with both whole systems and their individual components were sources of frustration. Managing data collected through the various assessment systems was a huge issue. Traditionally, data used to make accreditation judgments were in paper form and were sorted, filed, and stored in ‘‘hard copy’’ to provide a paper trail of evidence. The recent move to technology as the sole repository of evidence became part of new accreditation requirements demanding that institutions manage data electronically from collection through retrieval. Tensions around digital data management were articulated in all narratives except that of Western Governors University. These were tensions related to the often time-consuming and costly decisions about a suitable data management system. Commercially produced data management systems were created to meet broad needs and outcomes typical of many teacher preparation programs, but usually were not specifically aligned with any one program. As mentioned earlier, Osguthorpe and Snow-Gerono (Boise State University) reported that their institution initially invested in a commercial data management program only to find it unhelpful and inappropriate. In the midst of the process and with limited time, they determined that to meet NCATE demands within their institutional setting, they would need to start over and create their own system that would allow them to use the data collected to make the data-based decisions required. As data management systems were aligned with program needs, benefits were realized and tensions quieted. Neufeld (Lander University) noted that after creating a useful data management system, accuracy, efficiency, maintenance, and dissemination of data improved, which made data reports more accessible to their various partners and available for use in program improvement. In their narratives, authors related tensions that consistently surfaced regarding these systems as programs needed to decide what and how much data needed to be archived. For example, Hutchison, Buss, Ellsworth, and Persichitte (University of Wyoming) wondered whether to report data for all of the assignments required in a course or just a few major assessments that deeply addressed program objectives and accreditation standards. Zane, Schnitz, and Abel (Western Governors University) had far fewer tensions associated with assessment and data management systems due to the nature of their program and institution even before they entered into the accreditation process. They already were ‘‘a data regarding and data using
culture.’’ Their ‘‘curriculum and programs are standards-based, assessment centric, and shaped by centralized and transparent data gathering and reporting.’’ Having had these systems in place made the issues associated with assessment and data management less demanding as they embarked on meeting the requirements for multiple state and national accreditations.
Data-Based Decisions The difficult tensions inherent in designing and implementing assessment and data management systems were heightened when faculty were called on to shift their orientations from nonevidence-based program ‘‘inputs’’ to data ‘‘outputs’’ in aligning with accreditation requirements. In the accreditation process, the purpose for assessment and data management systems is to collect, archive, analyze, and disseminate data to provide a foundation for modifying or changing programs or practices to better meet established goals of a program or unit. This expectation for data-based decisions was new to teacher educators, causing some to resist both the expectation and the accreditation requirements. Ackerman and Hoover’s (St. Cloud State University) insistence that data be employed in program change could be viewed by some as impinging upon faculty rights. In the accreditation process, agencies required that teacher preparation programs and their faculty not only provide and analyze the data they collected but also formulate it as actionable information, make appropriate programmatic changes and recommendations, and then demonstrate through appropriate evidence that they had done so. As difficult as this might have been for faculty, Osguthorpe and Snow-Gerono (Boise State University) realized after engagement in the process that ‘‘program faculty are not as resistant to change when the preponderance of evidence show that change is needed’’ and that ‘‘joining forces around data and quality is a foundational step in program improvement.’’ Tensions were often a result of programs having little direction or understanding of the process of data-based programmatic evaluation (White, Tutela, Lipuma, and Vassallo, Rutgers University). They found themselves trying to decipher what to do with the data they had or had not collected and were continually concerned with how they could demonstrate acumen in this process. Osguthorpe and Snow-Gerono (Boise State University) found that they ‘‘had more conclusions than evidence,’’ while Hausfather and Williams (Maryville University of St. Louis) found that their clinical assessments were more helpful to individual teacher candidates than they were as program data. They also found that their portfolio
assessments were ‘‘not helpful in making programmatic decisions’’ and that the results of the analysis of clinical assessments exceeded the usefulness of their course assessments. In some cases, these tensions diminished over time. For example, Hausfather and Williams (Maryville University of St. Louis) initially felt that the ‘‘data illustrated what they already knew’’ and they ‘‘didn’t need the assessment to know.’’ Yet, over time, they ‘‘moved from reviewing and modifying assessment instruments to more review of the data from those instruments.’’ Ackerman and Hoover (St. Cloud State University) acknowledged that their data ‘‘provides evidence and documentation for decision making and program change.’’ Although data-based decision-making was clearly a tension among the programs in this book, nearly all of them acknowledged that they were becoming more comfortable with using their data to monitor their progress toward their programmatic goals. Interestingly, Zane, Schnitz, and Abel (Western Governors University) mentioned no tension regarding data-based decisions for their programs. Their program maintains a process of ‘‘continuous program review.’’ This review is an ongoing responsibility for the university’s academic managers and, because of their organizational structure, is not part of the stewardship of the program faculty, thus decreasing many tensions that would be common in a typical university structure. Data and Professional Judgment Several of the authors pointed out that quantitative data by itself might not be sufficient for all decisions that need to be made and expressed their support for wise use of evidentiary data tempered by professional judgment. Tensions were realized as the significance of quantifiable data in the decision-making process was exalted, and the value of teacher educators’ professional judgment was marginalized. One program observed: The focus on students and the importance of helping prepare each child for healthy adulthood shifts to the documentation needed to prove that teachers (both K-12 and college) are well prepared and that institutions of learning (K-16) are of high quality. The ease with which the focus of education shifts from people to proof has exacerbated several frustrations related to the accreditation process. (Neufeld, Lander University)
Performance measures and data representations of candidate performance may not necessarily be indicative of the quality of the program and ‘‘data may cloud concern for individuals’’ (Neufeld, Lander University) and be
replaced with holistic, overarching understandings. In the example provided by Neufeld (Lander University), not all of their teacher candidates are able to demonstrate proficiency through a standardized test format, and they are unwilling to seek appropriate accommodations for taking the test. Student decisions to forgo needed accommodations resulted in the institution not meeting cut scores established by the state. The institution then had to bear penalties based on the performance data of its teacher education graduates. Use of data in this fashion, without any qualitative evidence or any opportunity for professional input, creates often irresolvable tensions for programs, their students, and their institutions. White, Tutela, Lipuma, and Vassallo (Rutgers University) pointed out that qualitative contextualized data is often needed to provide more complete evidence upon which to make decisions for changing or improving programs. Hausfather and Williams (Maryville University of St. Louis) argued, ‘‘When you remove the personal from the data it becomes more abstract than real – just an accountability requirement.’’ In frustration, many ask the same question posed by Powell, Fickel, Chesbro, and Boxler (University of Alaska Anchorage), ‘‘Is evidence always something that can be measured?’’
COLLABORATION Collaborative efforts of individuals, departments, programs, colleges, and public school partners who had shared interests in the processes and outcomes were a central feature of most of the chapters and were generally recognized as a positive aspect of the accreditation process, even when they began because of accreditation mandates. However, collaboration was also a source of tension as stakeholders held differing views of teacher preparation. ‘‘We have a great program but all stakeholders do not have the same perception, and we need to meet [together] to bring that vision to everyone’’ (Osguthorpe and Snow-Gerono, Boise State University). The need to collaboratively engage in accreditation brought individuals and entities together to establish common visions, objectives, language, and assessments. These collaborations served as catalysts for dialogue and discussion that resulted in alliances of understanding and respect as stakeholders worked toward mutual purpose (Shiveley, McGowan, and Hill, Miami University). Pierce and Simmerman (Utah Valley University) reported increased understandings across their program as ‘‘faculty learned that asking questions is healthy, it effects how the program is intertwined with other programs on campus and the partner school districts.’’
Powell, Fickel, Chesbro, and Boxler (University of Alaska Anchorage) believed the benefits of collaboration went beyond the purposes of accreditation as they observed, ‘‘as these conversations continue they can evolve into professional learning communities working together and learning together for mutual improvement of education and schooling for both teacher candidates and the students they teach.’’ Tensions related to collaboration occurred within and between teacher education programs, colleges of arts and sciences across the university, and public school partnerships.
Collaboration within Departments of Teacher Education Craig (University of Houston) made an interesting observation in her chapter when she noted, ‘‘Challenges faced by teacher educators emanate as much from within the buildings where they work as they do outside of them in the broader university context and community at large.’’ This statement, made in reference to accreditation, reminded us that tensions develop among teacher educators themselves as they participate in the process. Departments of teacher education typically bear considerable responsibility for collaboratively constructing conceptual frameworks in short periods, creating new or revised preparation programs, creating assessment systems aligned with national or state standards, and mutually approving all that is done. However, given appropriate conditions for success, collaboration within some departments reveals that the tensions can in fact be constructive. Monroe-Baillargeon (Alfred University) was a new department chair, half of her faculty were new to the department, and the accreditation visit was just two years away. Her decision to establish common beliefs and commitments provided a common ground for collaboration within the department. The department faculty were then able to ‘‘use these conversations as springboards to agreement on what is being taught in our program.’’ The author recognized that despite tensions associated with working together with departmental colleagues, ‘‘a culture of professional learning and collaboration was an outcome of their accreditation process.’’
Collaboration between Teacher Education and Departments of Arts and Sciences While teacher education and arts and sciences faculties worked toward identifying and agreeing on common purposes, tensions associated with
ownership and enculturation were common. Neufeld related that at Lander University, ‘‘collaboration is interdisciplinary and university wide.’’ Pierce and Simmerman (Utah Valley University) stated, ‘‘Here we had to work with often strongly held paradigms regarding program evaluation and student outcomes.’’ Yet, as Osguthorpe and Snow-Gerono (Boise State University) pointed out, the collaborative ‘‘focus on accreditation offers a wonderful opportunity to work with colleagues across campus and produces a unity of purpose that stems from developing stronger ties with and understandings of programs across the unit.’’ The mutual attempts to work together toward commonly established goals for teacher preparation, regardless of personal differences, created opportunities for sharing and growing that had often been absent from the teacher preparation process. Faculty from teacher education and the arts and sciences at Lander University worked together to meet the requirements of accreditation, and the result was a more unified university community and program improvement for their teacher candidates. Monroe-Baillargeon (Alfred University) observed that collaborative teams working toward accreditation became ‘‘partnerships … across the university.’’
Collaboration between Teacher Education and the Public Schools Collaborative relationships between teacher preparation programs and public schools are critical to meeting accreditation requirements (Marina, Chance, and Repman, Georgia Southern University). Benefits realized from these collaborations could also be accompanied by tensions as university faculties and public school partners worked together toward establishing common outcomes of teacher preparation. As in collaborations between teacher education and arts and sciences faculties, teacher educators and public school teachers were each committed to their own beliefs and approaches to teaching and teacher preparation. When teacher educators and public school teachers acknowledged their interdependence and united their efforts to support the preparation of high-quality teachers, they mutually learned and benefited from the experience. Hausfather and Williams (Maryville University of St. Louis) reported that faculty members in their public school partnership worked closely with teacher educators to construct quality clinical experiences for teacher candidates. Public school partners provided support to their university partners in constructing and evaluating assessments and programs. They were then also helpful in accreditation reviews. Osguthorpe and Snow-Gerono (Boise State
University) recognized that the relationships between the university and the local school partners were a ‘‘dramatic benefit’’ in the accreditation review, as their school ‘‘partners rose to the occasion and highlighted our various program strengths.’’ Collaboration is an investment for teacher educators as well as for their public school partners. Yet, collaborative relationships may be as tenuous as the accreditation process itself. Powell, Fickel, Chesbro, and Boxler (University of Alaska Anchorage) had established a university/public school partnership before beginning the accreditation process. They had made significant investments over time to nurture relationships of trust and mutual respect that resulted in a mutually beneficial ‘‘vision of community working together and learning together.’’ They described their professional learning communities (PLCs) as a ‘‘radical collegiality between university facilitators and public school partners.’’ Tensions arose, however, for University of Alaska Anchorage faculty as they found that their participation in the accreditation process required time, energy, and resources, thus marginalizing their ability to maintain the level of partnership relationships that they had once valued and enjoyed. The time, energy, and resources demanded by accreditation then created potential rifts in the very collaborations needed for universities to successfully negotiate accreditation.
COSTS Perhaps all tensions associated with accreditation could ultimately be subsumed under this aspect of the process. Indeed, accreditation is a perpetual process (Hutchison, Buss, Ellsworth, and Persichitte, University of Wyoming) requiring the support and dedication of institutional and programmatic funds, personnel, time, and facilities. The personal and professional resources of individuals and institutions are often spread thin, stretched beyond originally defined stewardships and purposes to encompass the tasks, expenses, and processes of accreditation. The tensions surrounding cost are best grasped through discussions of faculty, autonomy, workload and time, and resources (Shiveley, McGowan, and Hill, Miami University).
Faculty Faculty shoulder the burdens of making accreditation happen; therefore, they are also positioned to bear the greatest opportunity costs. As demands
for meeting accreditation requirements are addressed, teacher educators are called on to go beyond their typical job profiles. Accreditation calls on faculty to develop new program goals and outcomes, assessments, materials, syllabi, and course descriptions; to collect and analyze data for program improvement; to collaborate within departments and across the university; and to establish public school partnerships. Faculty tensions mount as increased responsibilities and expanded workloads, however inadvertently, draw time and energy away from their teaching, scholarship, and professional service for which the university holds them accountable. The accreditation process had a ‘‘personal impact on faculty members’’ (Hutchison, Buss, Ellsworth, and Persichitte, University of Wyoming) and on their relationships with others that created tensions not realized previously. Personal and collective losses of ‘‘recognition, compensation, position, space, time, and notoriety’’ (Craig, University of Houston) were noted as teacher educators began to question their practices and doubt their professional competence. Craig (University of Houston) observed, ‘‘how pervasive the fear of perceived incompetence was’’ and that ‘‘accountability procedures affect educators’ images of themselves as teacher educators and their relationships with others….’’ In studying herself and her responses to the process, she noted: Doubt in my competence as a professional was creeping in although I did not recognize and name it as such until much later…. I was quickly losing sight of my ‘best-loved self’, which had previously found expression in my course syllabi and my subsequent teaching practices. In fact, I found my preferred self rapidly being replaced by the ‘automaton’ that the abstract institutional directives dictated I should be.
Autonomy One hallmark of higher education has traditionally been professional autonomy. This allows and encourages professors to establish individual agendas, pursue scholarship, and develop their teaching in such a way as to define themselves within the profession, and to pursue individually satisfying and unique programs of study that advance the boundaries of current knowledge. Accreditation opens up the private teaching practices of individuals and programs for public display and evaluation (Craig, University of Houston), a process that does not necessarily align with what is valued by the private or the public entities obliged to participate. These very processes of, and requirements for, accreditation embody
personal and institutional costs as they threaten traditionally valued autonomy, often the most important virtue of faculty experience in academe. Decisions to enter the accreditation arena were typically made by administrators rather than the rank-and-file members of programs or units who became responsible for achieving and living with the outcomes of accreditation. For most, ‘‘the need to change was not an individual decision but an administrative one’’ (Erickson, Wentworth, and Black, Brigham Young University), and a good number of faculty pushed back (Ackerman and Hoover, St. Cloud State University) and reportedly felt that while accreditation might have been important, they were forced to sacrifice some elements of their programs that made them unique and were representative of what faculty individually or collectively valued. As discussed earlier, University of Alaska Anchorage had invested considerable time and energy to establish PLCs, but the conflicting demands of accreditation forced them to largely abandon their PLC relationships and practices. In aligning courses and content, Craig (University of Houston) noted her personal loss of autonomy, ‘‘as my course outlines became increasingly standardized in response to the different sets of external criteria, I began to lose ownership of them and my ability to identify with them.’’ Faculties were sometimes disinclined to buy in to programs not only because of external demands to align and standardize but also because of the lack of faculty voices in the processes and requirements for change (Marina, Chance, and Repman, Georgia Southern University; Ackerman and Hoover, St. Cloud State University). However, threats of not being accredited and the penalties associated with that outcome (whether unspoken or otherwise) promoted at least superficial faculty compliance in the process (Fallona and Jones, University of Southern Maine). Craig’s (University of Houston) experience led her to the realization that ‘‘job satisfaction and sense of agency wanes, as autonomy is lost in the accountability review process.’’ Tensions were apparent as the boundaries of autonomy were reined in and universal alignment was demanded.
Workload and Time Workload and time were significant tensions expressed by all authors, except Zane, Schnitz, and Abel (Western Governors University). Osguthorpe and Snow-Gerono (Boise State University) expressed that time is an important and primary issue in the process, no matter what other support may be in place. White, Tutela, Lipuma, and Vassallo
(Rutgers University) talked about having limited time to prepare for their accreditation visit, which increased demands on faculty and leaders and created workload issues. Oftentimes, accreditation required that institutions extract more from faculty than was fair or realistic (Ackerman and Hoover, St. Cloud State University; Fallona and Jones, University of Southern Maine). Hutchison, Buss, Ellsworth, and Persichitte (University of Wyoming) furthered these thoughts and questioned the significance of the time spent in the process by noting, ‘‘some faculty felt this accreditation work was merely an exercise requiring significant amounts of time that would result in meaningless documents.’’ For nearly all of the institutions, faculty expectations for teaching, scholarship, and service remained essentially the same in spite of the additional workloads associated with accreditation. Faculty perceptions that leaders and administrators lacked awareness or sensitivity concerning the time required to accomplish accreditation work were reinforced as faculty were frequently not given course or other workload releases to work on the development of the accreditation documents or common assessments. With time being a limited resource, faculty efforts came not only at the expense of their teaching, research, and service responsibilities but also at personal and family costs. Craig observed, ‘‘The more time I spent appeasing the different agencies by tinkering toward utopia, the less time I had left to prepare for my classes and respond to student work. Time that might have been dedicated to a scholarship agenda was reallocated to accreditation concerns.’’ For some, the accreditation agenda replaced that of scholarly productivity, leaving teacher educators falling short in terms of research and publication requirements. It should be noted that participation in the accreditation process may produce tensions and even a degree of paralysis, but in time, it can be an impetus for deep examination that, with planning and foresight, can be the subject of research and publications (Pinnegar & Erickson, 2009). Time also was a tension when it was necessary to bring all stakeholders together for professional development on new program goals and outcomes, to establish inter-rater reliability on assessments, and to discuss program improvement based on data. Each narrative account documents both the need to bring all stakeholders up to speed on the language, content, outcomes, assessments, and so on and the absence of appropriated time or funding to accomplish such goals. Of all the tensions of the accreditation process, the authors in this text believe that ‘‘the constraint on and limitations of time were foremost’’ (Hutchison, Buss, Ellsworth, and Persichitte, University of Wyoming).
Resources Financial support, facilities, technology, and the community were also sources of tension associated with accreditation. For most institutions, the ‘‘budget is slim’’ for accreditation and faculty and resources are limited (White, Tutela, Lipuma, and Vassallo, Rutgers University). As a result, resources to support accreditation often come from program and project budgets that, in these accounts, had been either downsized or cut completely. While some institutions mentioned that they were able to hire qualified individuals whose designated job was to lead the accreditation undertaking, more often these accounts indicate that the work of accreditation was an add-on, taken up in addition to the demands of the positions individuals had been hired to fill as faculty members or department leaders. Facilities became a source of tension as space was subsumed and reallocated to store evidence, conduct collaborative work meetings, and house institutional accountability offices. Technology was a significant source of tension for all programs in this book. Earlier requirements for paper trails had literally been met with paper. Now there was a demand to use electronic systems that allow programs to store, manage, retrieve, analyze, and disseminate teacher candidate data to assess both student and program achievement. Several institutions found themselves experimenting with a number of data management systems before they found one that met their needs. This process was not only costly, but also time-consuming. Yet, this is a resource that meets the current needs and requirements for accreditation. Another resource in the accreditation process that was mentioned by all the authors was that of the public schools and their contributions to program accreditation. In the spirit of collaboration, public schools regularly contribute to teacher preparation programs by providing settings in which teacher candidates engage in practical teaching experiences. During the accreditation process, public school personnel engaged even more deeply. Teachers co-constructed clinical experiences with teacher educators, jointly developed assessment instruments, and assisted in evaluating teacher candidate clinical performances. These collaborations fostered mutual insight and appreciation between university faculty and public school teachers. This understanding led Fallona and Jones (University of Southern Maine) to observe, ‘‘we [teacher preparation programs] are faced with the exact dilemma faced by our K-12 colleagues: higher expectations and control coupled with fewer resources. A recipe for failure.’’
LESSONS LEARNED Hindsight in any endeavor can be a great teacher. The accreditation processes described by the various teacher preparation programs have both frustrated and enhanced their programs and faculty, sometimes simultaneously. Some authors describe how they fought the process, whereas others explain how they embraced it. Some reportedly felt that their teacher education programs emerged stronger because of the collaborative outcomes and assessments created with colleagues and partners in the course of working toward accreditation, whereas others revealed that the accreditation process threatened their autonomy to teach and evaluate their students based on their individual experience and professional judgment. Regardless of the similarities and differences between the experiences represented in this book, lessons were learned, with the upshot being that successful accountability and accreditation is complex, multidimensional, and messy. Change requires commitment, work, and the expectation that the transformations resulting from the effort will be better than what existed before. Authors’ narratives revealed that there were benefits associated with accreditation experiences; at the same time, they were cognizant of the persistent concerns and tensions that accompanied the process.
Benefits Benefits of accreditation realized by the various authors revolved around faculty contributions and growth, program improvement, and new understandings. Many credited their colleagues for their selfless contributions and for their commitment to accomplishing the undertaking of accreditation. Collaborative efforts of university and public school faculties were noted as benefits in the process. Neufeld (Lander University) noted that their university faculty were supportive, working together to extend themselves beyond their comfort zones to meet requirements set forth for accreditation. In doing so, they established collegial relationships that they had not previously enjoyed. Many of the institutions indicated that cross-campus faculty efforts contributed to a unified, consistent, campus-wide commitment to take responsibility for accreditation. Monroe-Baillargeon (Alfred University) declared that the accreditation process had ‘‘been invaluable’’ to them and had been accomplished ‘‘through the commitment of the outstanding faculty who willingly contributed to a study of themselves, their work and our collective work as a division.’’
Program improvement was mentioned by many of the authors as a benefit of their participation in the accreditation process. Woven in and through program improvement were specific benefits of ‘‘recognition, validation, greater articulation and consistency for candidate assessment, alignment with program standards, rubric development, inter-rater reliability’’ (Fallona and Jones, University of Southern Maine), among other benefits associated with the process. White, Tutela, Lipuma, and Vassallo (Rutgers University) noted that by the end of the process, they began to see the accreditation body, along with the process, as their ally rather than their enemy: ‘‘TEAC is our friend … They are telling us how we can improve.’’ Erickson, Wentworth, and Black (Brigham Young University) reflected, ‘‘The shift to a data-based decision making paradigm has gone beyond mere compliance with accreditation requirements to meaningful application and function for stakeholders in the teacher preparation process.’’ Program improvement was valued as a significant benefit associated with accreditation. Along with program improvement came new understandings. Zane, Schnitz, and Abel (Western Governors University) learned that ‘‘developing and refining a program requires a delicate balance between the need to provide consistent, reliable, and predictable systems and services, and the need to remain open to further change to improve the student experience.’’ For Monroe-Baillargeon (Alfred University), the ‘‘challenging, frustrating, invigorating, and exhausting’’ aspects of accreditation were trumped by ‘‘the most valuable outcome – our new understanding.’’ New understandings shared by Craig (University of Houston) were very personal in nature: I came to know – once again, and in no uncertain terms – that liberation, not captivation, is the purpose of education – and that I am its agent (Schwab, 1954/1978). As a teacher educator, I cannot emphasize strongly enough the importance of this reinforced understanding, which brought with it renewed dedication to my profession.
Persistent Concerns

Accreditation brings with it a variety of concerns that permeate the process. For some, the accreditation process constrains the ability of a preparation program to respond in a timely fashion and to adjust rapidly to current educational needs (Marina, Chance, and Repman, Georgia Southern University). Other authors expressed concern that accreditation is associated with a compliance mentality and that many faculty resist accreditation not necessarily because of the concept, but more likely because they have limited or no voice in establishing their own requirements for
the process. The limited freedom faculty are allowed when required to standardize their outcomes, syllabi, and assessments remains a source of tension as they try to reconcile issues of intellectual freedom with requirements for programmatic uniformity. As teacher preparation programs encounter competing requirements from multiple accreditation bodies, tensions again become significant. Osguthorpe and Snow-Gerono (Boise State University); Neufeld (Lander University); White, Tutela, Lipuma, and Vassallo (Rutgers University); Craig (University of Houston); Ackerman and Hoover (St. Cloud State University); and Powell, Fickel, Chesbro, and Boxler (University of Alaska Anchorage) spoke of the confusion and frustration they experienced as they tried to meet the competing demands of multiple agencies. State departments of education, national accreditation agencies, and specialized professional associations (SPAs) all become competing masters with contending requirements, formats, and deadlines that put individual preparation programs in the position of trying to appease all of them. Above all the concerns presented by the authors in this book, the personal costs of accreditation to teacher educators were the most poignant and should not be ignored. Craig (University of Houston) was the most candid in her narrative account of her personal struggle and the cost of the accreditation process to her and to her colleagues. Issues of autonomy, relationships, and teacher educators' visions of their personal and professional selves are interconnected and are challenged in the process of accreditation. Teacher educators, once empowered to influence their own teaching agendas, are now expected to relinquish their professional autonomy for the cause of accreditation. They cannot help but experience ongoing tensions. It is also important to realize, as Craig (University of Houston) and others pointed out, that "accountability procedures affect educators' images of themselves as teacher educators and their relationships with others." The process itself tends to encourage collaboration on one hand while positioning individuals against each other on the other. Faculty members often feel trapped into conforming, believing they have no choice but to set aside practices that are personally and professionally fulfilling in order to align with the processes and demands imposed by accreditation requirements.
CONCLUSIONS

As our nation has moved toward greater accountability for student learning, accountability, assessment, and accreditation have moved from mere
terminology to the lived realities of institutions of learning and educators at every level. The current model of education reform, No Child Left Behind, equates accountability with teacher quality and further assumes that teacher quality is directly linked to teacher preparation programs. Teacher preparation institutions have been directed to align their programs with accreditation standards and individual state licensing requirements to create data-based, data-driven systems for teacher preparation and development. Teacher educators are now held accountable not only for the traditional preparation of teacher candidates but also for providing acceptable evidence that these candidates are well prepared to meet the diverse needs of the unknown learners whom they may one day teach. As teacher educators acquiesce before the gods of accreditation, we do so assuming, rather than hoping, that accountability, assessment, and accreditation will continue to be foremost on the education landscape. We are left to wonder if "through it all, the best defense is indeed a good offense, and a willingness to embrace accountability is perhaps the best means for tempering its effects and even benefiting from it" (Daigle & Cuocco, 2002, p. 12). The more we come to realize the benefits to be found in the processes and products of accreditation, the more we find ourselves justifying its costs and demands. From the shared ordeals of accreditation revealed by the authors, we are again reminded of the obvious – institutions are different from each other, and no program or unit remains the same over time. Accountability requirements change, structures and organizations are modified, leadership comes and goes, partnerships are established and dissolved, and faculty members are entitled to and will continue to hold varying degrees of understanding, acceptance, and commitment to any endeavor. If we have learned anything about accreditation, it is that the process is tenuous and fluid. In spite of our desire for constancy, requirements and processes for accrediting teacher preparation programs will always be responding to changing professional and public expectations for preparing teachers. Tension will always be present in the accreditation process as we attempt to balance our desires for autonomy and professional judgment with public calls for standardization and guarantees. Our challenge is not just to resolve these tensions, but also to draw on them to instruct us, teacher educators and accrediting bodies alike, in the noble pursuit of preparing excellent teachers for all children and youth.
REFERENCES

Daigle, S. L., & Cuocco, P. (2002). Public accountability and higher education: Soul mates or strange bedfellows? EDUCAUSE Center for Applied Research: Research Bulletin, Issue 9. Boulder, CO: ECAR.

Dwyer, C. A., Millett, C. M., & Payne, D. G. (2006). A culture of evidence: Postsecondary assessment and learning outcomes. Princeton, NJ: ETS.

Pinnegar, S., & Erickson, L. (2009). Uncovering self-studies in teacher education accreditation reviews. In: C. Lassonde, S. Galman & C. Kosnik (Eds), Self-study research methodologies for teacher educators (pp. 151–168). Rotterdam, Netherlands: Sense Publishers.
ABOUT THE AUTHORS

Michael H. Abel is the manager for Domain Quality and Development at Western Governors University (WGU) in the United States and assists faculty in developing detailed descriptions of the domains of knowledge, skill, and ability that serve as the basis for academic program and assessment development. As a co-developer of the WGU Teachers College assessment programs, Michael designed specialized databases for standards alignment and domain development and created and administered training for test item writers and editors. He also served as senior assessment developer and editor when the WGU Teachers College assessment program went university wide. Michael received an MA in International Relations from the University of Southern California and a BA in German from Brigham Young University. He is co-author of a test item development guide, The Art of Item Development.

Elaine Ackerman is currently the director of assessment at Concordia College, Moorhead, Minnesota. She joined the Concordia team on October 1, 2009. Before joining Concordia, Dr. Ackerman was the Assessment Director in the College of Education at St. Cloud State University in Minnesota. She earned her bachelor's degree from Eastern Washington University in 1988 and completed her master's degree in 1993, also from Eastern Washington University. Her doctorate was completed in 2004 at Gonzaga University in Spokane, Washington. Although she has many research interests, her current area of focus is assessment.

Sharon Black is an advanced writing instructor and college editor in the David O. McKay School of Education at Brigham Young University. She has been the editor for the Center for the Improvement of Teacher Education and Schooling (CITES) and the BYU A.R.T.S. Partnership (Arts Reaching and Teaching in Schools). She has been an assistant to the editor for the Journal of College Counseling (international) and a principal writing editor and reviewer for Contemporary Issues in Reading, a joint publication of the Utah Chapter of the International Reading Association and the BYU College of Education. Her primary research interests include gifted/talented education, early literacy, and arts in the elementary classroom, and she has researched and published in all of these areas.
Nancy Boxler has 19 years of experience working at all levels of K-20 schools. She has been very fortunate to have the opportunity to work with English language learners, teach Spanish, and help design and implement a district-wide induction and mentoring program. She is currently the Assistant Network Director of the Alaska Educational Innovations Network at the University of Alaska Anchorage. She is passionate about facilitating professional networks that create conditions for educators to examine their practice, as well as give back to their profession by being resources to others.

Alan Buss is the department head for Elementary and Early Childhood Education at the University of Wyoming. His primary area of instruction is elementary mathematics and science methods, with particular emphasis on meaningful uses of technology to extend and enhance understanding. For nine years he served as the director of the education outreach arm of the NASA-funded Upper Midwest Aerospace Consortium (nine institutions of higher education in North Dakota, South Dakota, Montana, Idaho, and Wyoming), overseeing professional development activities that help K-12 mathematics, science, social studies, and art teachers incorporate powerful digital mapping software, GPS technologies, and satellite imagery and data into their instruction. Before completing his PhD in Curriculum and Instruction, he taught second and sixth grade in New Mexico and Wyoming.

Cindi Chance, professor of educational leadership, served as dean at Georgia Southern University and at the University of Louisiana at Lafayette and as assistant dean at the University of Memphis. Dr. Chance spent ten years as a classroom teacher and nine years as a principal in Milan, Tennessee. She has written extensively and has given numerous presentations at national and international conferences on school improvement and teacher preparation, school reform, and professional development. In 2008, she was selected as a Fulbright Specialist and was honored to serve as a Specialist at Central China Normal University in Wuhan, China, during the fall semester of 2009. She presently serves on the Board of Directors of the American Association of Colleges for Teacher Education (AACTE) and the Governing Council of the National Network for Educational Renewal (NNER).

Patricia Chesbro is director of the Alaska Educational Innovations Network at the University of Alaska Anchorage College of Education. She comes to the college having served as a high school teacher, principal, and district superintendent, roles that have allowed her the opportunity to learn much about Alaska and Alaskans as she has engaged with educators from nine diverse districts from around the state.
Cheryl J. Craig is a professor in the Department of Curriculum and Instruction, College of Education, University of Houston, where she coordinates the Teaching and Teacher Education program area and is the Director of Elementary Education. Her research centers on the influence of school reform on teachers' knowledge developments and their communities of knowing. Craig has authored several handbook chapters and is a regular contributor to such journals as Teaching and Teacher Education, Teachers and Teaching: Theory and Practice, Teachers College Record, and American Educational Research Journal. Her book, Narrative Inquiries of School Reform, was published in 2003 (Information Age Publishing). Craig is the current co-editor of the Association of Teacher Educators' Yearbook, whose most recent issues were titled Imagining a Renaissance in Teacher Education and Teacher Learning in Small Group Settings.

Judith Ellsworth is undergraduate associate dean in the College of Education at the University of Wyoming (UW). She is a faculty member in the Elementary/Early Childhood Department, where her main areas of instruction are science and mathematics methods and formative classroom assessment. She has also served as the Director of the Science and Mathematics Teaching Center at UW. Before her work at the University of Wyoming, she was a classroom teacher in elementary and middle school grades in Colorado and Wyoming.

Lynnette B. Erickson is an associate professor in the McKay School of Education at Brigham Young University (BYU) in Provo, Utah. Before teaching in higher education, she taught elementary school in Payson, Utah, and Gilbert, Arizona. She holds a PhD in Curriculum and Instruction from Arizona State University. Her collegiate teaching includes undergraduate and graduate courses at Arizona State University and Brigham Young University. Dr. Erickson's university and professional service includes serving as associate department chair, serving on national and state accreditation program review committees, directing BYU's National and International Teacher Education Program for teacher development and student teaching, and serving on the National Council for the Social Studies Awards Committee. The focus of her scholarship is on studying teacher preparation, elementary social studies education, the moral dimensions of teaching, and university-public school partnerships.

Catherine Fallona is associate dean and director of Teacher Education in the College of Education and Human Development at the University of Southern Maine. She also coordinates and teaches courses in the undergraduate teacher
education pathway, Teachers for Elementary and Middle Schools. The focus of her scholarship is on studying novice and expert teachers' actions, intentions, and ways of thinking about teaching and learning related to the moral missions of schooling, particularly with regard to better understanding teachers' manner in terms of their expression of moral and intellectual virtue and the ways the classroom environment serves as a moral curriculum.

Letitia Hochstrasser Fickel is a professor of secondary education at the University of Alaska Anchorage. She is the principal investigator for the Alaska Educational Innovations Network, a school–university collaborative partnership to enhance pre-service teacher education and support high quality, job-embedded professional learning for K-20 educators. She is an experienced public school teacher, having taught social studies and Spanish in an ethnically, linguistically, and economically diverse urban middle school. Her current research interests include issues related to school–university collaboration and networked learning, especially as it serves culturally responsive practice, educational renewal, and learning.

Sam Hausfather is currently the dean of the School of Education at Maryville University of St. Louis, Missouri. He came to college teaching after eighteen years of teaching in elementary schools in northern California. He has a master's in science education from California State University, Chico, and a doctorate in Teacher Education from the University of Wisconsin-Madison. Dr. Hausfather taught for ten years in teacher education at Berry College (Georgia) while serving as director of field experiences, and he then served as Assistant Dean for Graduate Studies. During five years as Dean of the School of Professional Studies at East Stroudsburg University of Pennsylvania, he established school partnerships, strengthened diversity components, and led a successful NCATE reaccreditation. He came to Maryville University in 2006 and led NCATE reaccreditation efforts in 2008. He has been an NCATE examiner since 2003. His interests include school partnerships, teacher education program redesign, and conceptual change in students and teachers.

Ellen Hill is the director of Clinical Experiences for the School of Education, Health and Society. She oversees the placements of teacher candidates for clinical field experiences, student teaching, and international student teaching programs. She earned her master's degree in Supervision from Miami University and her undergraduate degree from Ohio University. Before coming to Miami University, Ms. Hill spent 20 years in the elementary classroom, earned National Board Certification, and served as a
regional entry-year coordinator and curriculum specialist. Ms. Hill's service efforts are focused on university/school partnerships, urban teaching cohort development, and international teaching opportunities.

John H. Hoover is associate dean of the College of Education at St. Cloud State University in Minnesota. He earned his bachelor's degree from St. Cloud State University in 1978, followed by a Master of Science degree (University of Illinois) in 1980 and a PhD (Southern Illinois University) in 1988. Dr. Hoover's primary research interests are in bullying and child-on-child aggression, transitional services for students with disabilities, and assessment methods.

Linda Hutchison is Department Head for Secondary Education at the University of Wyoming. Her main area of instruction is mathematics education at both the graduate and the undergraduate levels. A former grades 4–12 mathematics teacher in California for 10 years, she centers her research on the K-12 teaching of mathematics and mathematics teacher education. National policy trends, including accreditation, are relevant to these interests, as they have affected teacher education.

Ken Jones is an associate professor in teacher education at the University of Southern Maine. He works with both pre-service and in-service teachers in courses that include mathematics methods, classroom assessment, action research, and foundational issues in education. His scholarly interests include school accountability, equity and democratic values, and the effects of neoliberalism in education. He recently chaired a program self-study that led to TEAC national accreditation.

James M. Lipuma is a senior university lecturer in the Humanities Department at the New Jersey Institute of Technology (NJIT) and has been the Teacher Education Programs Coordinator for NJIT since 2004. He holds a BS in Chemical Engineering from Stanford University, an MS in Environmental Policy Studies and a PhD in Environmental Science from NJIT, and is completing a Master of Education in Curriculum and Teaching focused in Science Education at Teachers College, Columbia University. He supports all NJIT teacher education initiatives, including the NSF Noyce Scholars program and the C2Prism project, and has overseen several curriculum redesign projects for secondary and university-level programs. Since 2008 he has contributed to the Rutgers-Newark Urban Teacher Education Program curricular redesign project to meet the requirements of TEAC accreditation and State of New Jersey Department of Education certification as well as
improve the educational experience for students enrolled in the pre-service Urban Teacher Education Program.

Brenda L. H. Marina is an assistant professor in Educational Leadership at Georgia Southern University in Statesboro, Georgia. She also serves as an advisor to and coordinator of the Higher Education Administration programs. She holds a PhD in Secondary Education. Dr. Marina has worked for the past 15 years in higher education administration. Her career as an educator includes teaching undergraduate and graduate students and service as an internship mentor for students pursuing higher education administration degrees. Dr. Marina's research interests include leadership through mentoring, diversity in mentoring, women in leadership, multicultural competence in higher education, the first-year experience, and global education issues. She holds professional affiliations at both state and national levels. She also serves as a speaker for state, national, and international conferences on issues related to her research.

Teresa McGowan is the director of Accreditation and Assessment for the School of Education, Health and Society at Miami University. She also teaches the classroom management course for the middle childhood program. She earned both her undergraduate and her master's degrees, as well as many hours beyond the master's, at Miami University. Before coming to Miami to coordinate efforts for NCATE accreditation, she worked in K-12 education for 30 years as a high school English teacher and as a district instructional specialist in language arts.

Ann Monroe-Baillargeon, PhD, is the chair and associate professor of Education at Alfred University in Alfred, NY. Dr. Monroe-Baillargeon received her BA degree in Special Education from the University of Wisconsin-Milwaukee, her MS degree in Educational Leadership from the University of Southern Maine, and her PhD in Teaching and Curriculum from Syracuse University. Ann has been on the faculty at Nazareth College and the University of Rochester in addition to teaching internationally in West Africa, South Africa, and Bangladesh, as well as in Bangkok, Thailand; Cairo, Egypt; and Mallorca, Spain. She teaches courses in Inclusive Education, Literacy Practices, Research Methodologies, and Advanced Trends in Education. Her research focuses on inclusive education, teacher education, and literacy. Her work in teacher education is deeply informed by her 15 years of K-12 teaching and leadership.

Judith A. Neufeld has served eight years on the faculty of Lander University in Greenwood, South Carolina, where she is currently the interim dean of
the College of Education. She holds a bachelor of arts degree in Elementary Education and Music Education from Tabor College, a master of education degree in Elementary Education from Texas Christian University, and a doctor of philosophy degree in Curriculum and Instruction from Arizona State University. She has taught second grade, sixth grade, and elementary music in public schools in Fowler, Kansas, and Arlington, Texas. Her collegiate teaching includes undergraduate and graduate courses at Arizona State University, the University of Idaho, and Lander University. Her research has focused on teacher preparation and the ways in which people learn. She also serves as a program reviewer for the Association for Childhood Education International.

Richard D. Osguthorpe, PhD, is an associate professor in the Department of Curriculum, Instruction, and Foundational Studies at Boise State University in Idaho. His research examines the moral dimensions of teaching and teacher education, and he has recently published articles in Teachers College Record, Journal of Teacher Education, and Teacher Education Quarterly.

Kay Persichitte began her career in education as a high school mathematics teacher in Colorado. She taught high school for 19 years while completing a master's degree in Curriculum and Instruction. Upon completion of a PhD in Educational Technology with emphases in instructional design and distance education, she took a faculty position at a Colorado university working with graduate students and focusing on research interests in technology integration and online learning environments. Nine years later she became Director of Teacher Education at the University of Wyoming and served in that administrative position for five years before becoming dean of the College of Education. Her interests currently center on accountability and accreditation in teacher preparation and P-20 educational policy. She is passionate about teaching and the potential for every teacher and school administrator to improve the lives of students.

Linda E. Pierce received an EdD in Reading from Brigham Young University, Provo, Utah. Dr. Pierce is the associate dean of the School of Education at Utah Valley University (UVU) and an associate professor of literacy instruction methods in UVU's Department of Elementary Education. She is the accreditation (Teacher Education Accreditation Council – TEAC) coordinator for the School of Education and the accreditation site visit coordinator for the UVU accreditation through the Northwest Council of Colleges and Universities. Her current research interests include literacy
instruction in the primary grades with special emphasis on writing instruction, early literacy instruction, and teacher education programs that focus on the unique needs of Latino students. Dr. Pierce has extensive experience as a teacher in early childhood classrooms.

James H. Powell is chair of the Department of Teaching and Learning and teaches ESL methods, language and culture, and curriculum theory courses in the College of Education at the University of Alaska Anchorage. His research focus for the past 14 years has been on the professional development issues faced by experienced teachers. He has spent the last two years participating in the Language Acquisition Network.

Judi Repman is a professor and coordinator of the Instructional Technology Program at Georgia Southern University in Statesboro, Georgia. She holds a PhD in Educational Media and a master's degree in Library Science. Dr. Repman has been a faculty member for 21 years and currently teaches in an online graduate program in instructional technology. She also serves as director of the Center for International Schooling in the College of Education and co-chaired the college's most recent NCATE visit. Dr. Repman's research interests focus on online teaching and learning, the use of Web 2.0 tools in educational settings, information literacy in the 21st century, and global issues in education. Dr. Repman is active in various professional organizations and serves as a speaker and author on these topics.

Janet W. Schnitz is associate provost for Assessment at Western Governors University (WGU). Before holding that position, she served WGU as executive director of the Teachers College, the first (and so far, only) nontraditional education unit to be accredited by NCATE. During Dr. Schnitz's tenure, the WGU Teachers College grew from 198 students to over 7,000, enrolled in over 30 graduate and undergraduate programs. In 2008, the United States Distance Learning Association (USDLA) honored Janet for outstanding individual leadership in the field of Distance Learning in recognition of her work with the Teachers College. Dr. Schnitz has published articles on technology implementation, professional development, and the impact of change on teachers and teaching. Of special interest to Dr. Schnitz are applications of current theory related to change leadership and change management to effect systemic transformation in public/private schools as well as post-secondary education.

James Shiveley is the department chair and Condit Endowed Professor in the Department of Teacher Education at Miami University, where he teaches courses in social studies methods and supervises student teachers. He earned
both his undergraduate and his master's degrees from Miami University in social studies education before teaching high school social studies in Beavercreek and Wilmington, Ohio. He received his doctorate from The Ohio State University in the area of Global and Social Studies Education. His teaching, research, and service activities are concentrated on citizenship education for a democratic society, the development of school/university partnerships, and teacher education in a global society.

Susan Simmerman, originally from upstate New York, received a PhD in Educational Psychology from the University of California, Riverside, and also holds two master's degrees, in Education (from Nazareth College of Rochester) and in Clinical Psychology (from United States International University). She has served as the department chair of the Elementary Education Department in the School of Education at Utah Valley University (UVU) and currently teaches in the elementary and master's degree programs. She is heavily involved with the accreditation efforts at UVU. Dr. Simmerman has taught in special education in New York, California, and Utah public schools, working with students with learning disabilities, mental retardation, emotional disabilities, and behavioral disabilities. She has presented and published her research on learning disabilities, mental retardation, emotional and behavioral disabilities, and the selection and training of teacher education candidates.

Jennifer L. Snow-Gerono, PhD, is an associate professor and chair of Curriculum, Instruction, and Foundational Studies in Boise State University's College of Education in Idaho. She teaches in the elementary education program and graduate program in curriculum and instruction. Her areas of research emphasis include practitioner inquiry, school–university partnerships, and professional development for educators. She has recently published articles in Teaching and Teacher Education, Teacher Education Quarterly, and The Teacher Educator.

Joelle J. Tutela, PhD, is the director of the Rutgers-Newark Urban Teacher Education Program. She has been an educator for the past 13 years. From the front lines of teaching social studies at a SURR public high school in Manhattan in the 1990s to designing, lobbying for, and implementing a small learning community, The Center for Social Justice at Montclair High School (Montclair, NJ), her breadth of expertise spans a wide range of disciplines. In addition, she has consulted for numerous high schools – magnet, single-gender, comprehensive, and early college – in Brooklyn, the Bronx, Harlem, and Queens and for educational agencies in Manhattan; she developed and
executed creative strategies that aid teachers in developing lessons that engage their racially, ethnically, economically, and linguistically diverse students. The core of her professional development work with in-service teachers is the use of their students' work to improve lesson and unit planning and curriculum mapping.

Jessica Vassallo is the program coordinator for the Rutgers-Newark Urban Teacher Education Program (UTEP). Her primary responsibilities include advising and recruiting future urban educators. Jessica is also a contributor to the ongoing TEAC accreditation for the UTEP, which includes monitoring the collection of data and adherence to current federal and state standards. A magna cum laude graduate of the program, she is committed to the empowerment of urban teachers who can make a positive difference in the lives of children in Newark. Jessica is a member of the Phi Beta Kappa and Golden Key National Honor Societies, and she presents and mentors in the Rutgers Future Scholars Program, which introduces first-generation, low-income, and academically talented middle school students from Newark to the promise and opportunities of a college education.

Nancy Wentworth is chair of the Department of Teacher Education in the McKay School of Education at Brigham Young University (BYU) in Utah. She has taught exploration of teaching mathematics and adolescent development at BYU. She has been the principal investigator on a federal technology grant. Her research focuses on technology integration into mathematics and teacher education and on accreditation in teacher education. Dr. Wentworth has served on several national, regional, and university committees. She served as the president of the Northern Rocky Mountain Educational Research Association, secretary/treasurer of the Utah Association of Teacher Education, co-chair of the Faculty Advisory Committee at BYU, and associate dean of the McKay School of Education.

Carolyne J. White is a professor in the Rutgers-Newark Department of Urban Education. A social foundations scholar committed to decolonizing approaches to research, pedagogy, and service, she has previously directed Upward Bound and Special Services projects to foster college matriculation among first-generation college students; nurtured community-based urban school reform projects in Cleveland, Ohio, including the creation of a Professional Development School in the Hough Neighborhood; and collaborated with the Navajo and Hopi Nations to create culturally honoring grow-your-own teacher education programs.
Her current work includes a collective life story community research project with residents of the Historic James Street Commons Neighborhood in Newark, NJ.

Nancy Williams, associate dean of the School of Education at Maryville University in St. Louis, Missouri, has been an educator for almost 40 years. Her career as a teacher educator followed service in middle and high schools and work with at-risk adolescents. Bachelor's and master's degrees from West Virginia University, an educational specialist degree from the University of Florida, and a doctorate from St. Louis University serve as the foundation for her work. Dr. Williams has been at Maryville University since 1988, where she has worked actively with both suburban and urban professional development schools and the National Network for Educational Renewal. She has been an NCATE examiner since 2001. Her interests include issues of social justice, particularly creating respectful places for GLBT students.

Thomas W. Zane is director of Assessment Quality and Validity at Western Governors University (WGU). He has been a principal architect of the WGU Teachers College and their assessment systems. Dr. Zane teaches test development and psychometrics and conducts reliability and validity studies. He is the current chair of the Utah Teacher Education Accreditation Advisory Council and is a contributor to the WGU Assessment Council. His credentials include a PhD in Measurement from Brigham Young University, as well as an MLIS and an MS in Research and Statistics. Dr. Zane speaks regularly at national conferences on issues related to assessment and accreditation in higher education, including discussions of how to conceptualize integrated assessment models that measure broad educational outcomes and workshops on how to build assessment systems that address current accreditation standards. He has published articles on constructivist foundations for performance assessment and pedagogical practices for teaching professional reflection.