“This new edition is a timely gift for higher education. Every person at a higher education organization can locate some way to contribute to achieving enhanced student success through the clear, comprehensive guidance it sets forth.” —TERREL L. RHODES; Vice President; Office of Quality, Curriculum, and Assessment; Association of American Colleges & Universities

“This book is useful for anyone within higher education interested in sustainable outcomes-based approaches with clear examples and tools drawn from the field.” —NATASHA JANKOWSKI, Director, National Institute for Learning Outcomes Assessment (NILOA)

“Marilee J. Bresciani Ludvik stresses the practical application of what for many is a daunting challenge: linking the worlds of outcomes assessment, program review, predictive analytics, and competency-based education. Her work is an absolute necessity for institutions like mine, which will soon undergo its 10-year SACSCOC accreditation.” —JAMES A. ANDERSON, Chancellor and Professor of Psychology, Fayetteville State University

“Bresciani Ludvik’s second edition has now become the most comprehensive resource for effectively developing institutionally useful program reviews at all levels across a college, university, or system.” —PEGGY MAKI, Writer and Higher Education Consultant specializing in assessing student learning

“This book lays out a very effective case for using assessment in response to the cry for accountability from all constituents of higher education. It can help all parties better understand how institutions use assessment to ensure a quality education is being provided to their students.” —BELLE S. WHEELAN, President, Southern Association of Colleges and Schools Commission on Colleges (SACSCOC)

Among the new topics Marilee J. Bresciani Ludvik introduces in this edition is how to appropriately connect outcomes-based program review (OBPR) to performance indicators and predictive analytics and develop meaningful new performance metrics to inform our understanding of the student experience. She also addresses the intersection of OBPR with competency-based assessment.

MARILEE J. BRESCIANI LUDVIK is professor of postsecondary education and works within the newly created Office of Educational Effectiveness at San Diego State University.

HIGHER EDUCATION | ASSESSMENT
978-1-62036-230-3
22883 Quicksilver Drive, Sterling, VA 20166-2019
www.Styluspub.com

OUTCOMES-BASED PROGRAM REVIEW
Closing Achievement Gaps In- and Outside the Classroom With Alignment to Predictive Analytics and Performance Metrics
MARILEE J. BRESCIANI LUDVIK
Foreword by Ralph Wolff
Second Edition of Outcomes-Based Academic and Co-Curricular Program Review
Praise for Outcomes-Based Program Review

“As a former colleague of Marilee J. Bresciani Ludvik at both North Carolina State University and Texas A&M University, I can speak with authority about her expertise and commitment to the exploration of outcomes-based assessment in the context of program review. In her new book she has stressed the practical application of what for many is a daunting challenge: linking the worlds of outcomes assessment, program review, predictive analytics, and competency-based education. Moreover, she extends her analysis to a broader institutional context, suggesting that an institution cannot assert that it is a learning-centered institution if it does not link critical aspects of academic affairs, student services, faculty development, and professional staff development. As in her previous written work, Bresciani Ludvik asks the reader to formulate and consider key questions and to review the best practices and best models. Her work is an absolute necessity for institutions like mine, which will soon undergo its 10-year Southern Association of Colleges and Schools Commission on Colleges accreditation.” —James A. Anderson, Chancellor and Professor of Psychology, Fayetteville State University

“Colleges and universities in the United States expanded rapidly in the twentieth century in response to the need to provide access to higher education and training to a greater percentage of an increasing population. While the most elite universities focused on scholarship and the production of knowledge, giving students an opportunity to learn was the most significant mission for most colleges and universities. However, they took no responsibility for the learning and success of their students. A common philosophy in the 1960s was ‘students have a right to fail,’ meaning that colleges should give students a chance, but institutions have no responsibility for their success or failure—that was up to the student. Over the past 30 years, there has been a significant shift in the mission of higher education. Today’s colleges and universities are being asked to be accountable for not only student access but also outcomes including student learning, degree completion, and equity. Measuring success outcomes is much more difficult and controversial than measuring student enrollment, and the efforts have sometimes been contentious. In this second edition of Outcomes-Based Program Review, Marilee J. Bresciani Ludvik provides roadmaps and lessons learned that can guide today’s educators in the use of data and predictive analytics to improve student success and to close achievement gaps. Bresciani Ludvik reminds us that as discussions about learning outcomes continue, educators should be careful not to fall into the trap of making their purpose just about a report to satisfy an accrediting commission
or a state agency. Institutional improvement is much more important than mandated reports.” —George R. Boggs; Superintendent/President Emeritus, Palomar College; President and CEO Emeritus, American Association of Community Colleges; and Chair, Phi Theta Kappa Board of Directors

“In the second edition of Marilee J. Bresciani Ludvik’s book on outcomes-based assessment program review, she brings a renewed focus to achievement for all students, as well as connections between outcomes conversations and predictive analytics along with other potentially performance-enhancing technologies. While there is much debate about the role of technology in higher education, outcomes in general, the apparently decreasing value of a degree, and the criteria upon which higher education should be judged and held accountable, Bresciani Ludvik’s book offers a breath of fresh air and a meaningful way forward. Viewing outcomes conversations as part of a learning organization that focuses upon asking the ‘right’ questions about our practice, she presents a nonprescriptive process focused on reflection, collaboration, and adaptability. She reminds us that it is through conversations, space, and time for reflection that we can foster processes not based in compliance, but in collecting, analyzing, and interpreting data in ways that matter to individual institutions. This book is useful for anyone within higher education interested in sustainable outcomes-based approaches with clear examples and tools drawn from the field. Presenting peer-reviewed criteria for good practice, useful appendices, and an honest discussion regarding ongoing questioning of processes along with barriers to good practice, this book is a welcome addition to the conversation.”—Natasha Jankowski; Director, National Institute for Learning Outcomes Assessment (NILOA); and Research Assistant Professor, Education Policy, Organization, and Leadership (EPOL), University of Illinois Urbana-Champaign, College of Education

“If our institutions of higher education are truly institutions of learning, we must dedicate ourselves to learning from our primary practice. We must seek to improve continuously the extent to which our programs generate evidence of learning. This splendid book is a great place to start.”—Joe Johnson; Executive Director of the National Center for Urban School Transformation; and the QUALCOMM Professor of Urban Education, San Diego State University

“Outcomes assessment in higher education is a magnet for criticism, in large part because too often the effort does not yield meaningful, actionable
information. This refreshed volume is a timely antidote of sorts, distilling lessons learned about best practices from scores of schools doing assessment right to persuasively explain why and expertly illustrate how the essential work of collecting and using evidence of student learning can benefit students, faculty and staff, institutions, and the public.”—George D. Kuh; Chancellor’s Professor Emeritus of Higher Education, Indiana University; and Senior Scholar, National Institute for Learning Outcomes Assessment

“For Bresciani Ludvik the systematic and reflective inquiry processes involved in developing a program review and the contents of that review are essential to an institution that has become, or seeks to become, a learning organization. Useful reviews developed according to Bresciani Ludvik’s guidance and a plethora of examples and resources provide lenses through which institutional leaders can continuously explore how well the departments, programs, and services of their institution are achieving two major goals: (a) fulfillment of institutional mission and purposes at demonstrably high quality levels and (b) preparation of all students to achieve the high quality learning outcomes necessary to contribute to the demands of the twenty-first century. Bresciani Ludvik’s second edition has now become the most comprehensive resource for effectively developing institutionally useful program reviews at all levels across a college, university, or a system.”—Peggy Maki, Writer and Higher Education Consultant specializing in assessing student learning

“If you are looking to make your program review or accreditation processes more meaningful and useful, look no farther. Outcomes-Based Program Review, 2E provides plenty of practical advice on how to do exactly that. It will help you create an intentional process that makes sure the right questions are asked. It will help you ensure that your processes generate reflective, collaborative, evidence-informed dialogue. And it will help your institution enhance its impact.”—Linda Suskie, Assessment and Accreditation Consultant

“The two ‘As’ by which higher education is evaluated today are accountability and assessment. This book has laid out a very effective case for using assessment in response to the cry for accountability from all constituents of higher education. From Congress to accreditors, employers and the general public, this book can be easily used to help all parties better understand how institutions use assessment to ensure a quality education is being provided to their students.”—Belle S. Wheelan, President, Southern Association of Colleges and Schools Commission on Colleges
OUTCOMES-BASED PROGRAM REVIEW
OUTCOMES-BASED PROGRAM REVIEW
Closing Achievement Gaps In- and Outside the Classroom With Alignment to Predictive Analytics and Performance Metrics

Marilee J. Bresciani Ludvik
Foreword by Ralph Wolff

Second Edition of Outcomes-Based Academic and Co-Curricular Program Review
STERLING, VIRGINIA
COPYRIGHT © 2019 BY STYLUS PUBLISHING, LLC.

Published by Stylus Publishing, LLC.
22883 Quicksilver Drive
Sterling, Virginia 20166-2019

All rights reserved. No part of this book may be reprinted or reproduced in any form or by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying, recording, and information storage and retrieval, without permission in writing from the publisher.

Library of Congress Cataloging-in-Publication Data
Names: Bresciani, Marilee J., author.
Title: Outcomes-based program review : closing achievement gaps in and outside the classroom with alignment to predictive analytics and performance metrics / Marilee Bresciani Ludvik ; foreword by Ralph Wolff.
Other titles: Outcomes-based academic and co-curricular program review
Description: Second Edition. | Sterling, Virginia : Stylus Publishing, LLC, [2018]
Identifiers: LCCN 2018013858 (print) | LCCN 2018032971 (ebook) | ISBN 9781620362310 (Library networkable e-edition) | ISBN 9781620362327 (Consumer e-edition) | ISBN 9781620362303 (Paper : acid free paper) | ISBN 9781620362297 (Cloth : acid free paper)
Subjects: LCSH: Educational tests and measurements--United States. | Competency-based education--United States. | College teaching--United States. | Education, Higher--United States.
Classification: LCC LB3051 (ebook) | LCC LB3051 .B693 2018 (print) | DDC 371.26--dc23
LC record available at https://lccn.loc.gov/2018013858

13-digit ISBN: 978-1-62036-229-7 (cloth)
13-digit ISBN: 978-1-62036-230-3 (paperback)
13-digit ISBN: 978-1-62036-231-0 (library networkable e-edition)
13-digit ISBN: 978-1-62036-232-7 (consumer e-edition)

Printed in the United States of America

All first editions printed on acid-free paper that meets the American National Standards Institute Z39-48 Standard.
Bulk Purchases
Quantity discounts are available for use in workshops and for staff development.
Call 1-800-232-0223

Second Edition, 2019
CONTENTS
TABLES AND FIGURES  vii
FOREWORD  ix
  Ralph Wolff
PREFACE: How to Use This Book  xv
ACKNOWLEDGMENTS  xxiii
1 WHY: It’s About Becoming a Learning Organization  1
2 WHAT: Defining Outcomes-Based Assessment Program Review  19
3 MORE WHY AND WHAT OF OUTCOMES-BASED ASSESSMENT PROGRAM REVIEW  41
4 CRITERIA FOR GOOD PRACTICE OUTCOMES-BASED ASSESSMENT PROGRAM REVIEW  75
5 HOW AND WHEN: Key Questions to Consider When Implementing Good Practice Outcomes-Based Assessment Program Review  91
6 OVERCOMING BARRIERS TO IMPLEMENTING GOOD PRACTICE OUTCOMES-BASED ASSESSMENT PROGRAM REVIEW  125
APPENDIX A: Documents Used to Determine Good Practice Criteria for Outcomes-Based Assessment Program Review  161
APPENDIX B: List of Good Practice Institutions That Were Nominated by Scholars and Practitioners  167
APPENDIX C: National Institute for Learning Outcomes Assessment High Performance for All Students: Comparable Learning and Development Outcome Measures and Performance Indicators  171
APPENDIX D: Oregon State University Student Affairs Learning Goals and Outcomes Alignment Grid  179
APPENDIX E: Hagerstown Community College Map Template  181
APPENDIX F: California State University, Sacramento, Example of Annual Outcomes-Based Data Collection  183
APPENDIX G: James Madison University Example of an Annual Assessment Report Template  187
APPENDIX H: University of Hawai‘i-Mānoa Assessment Results and Improvement Plan Template  203
APPENDIX I: Azusa Pacific University Five-Year Action Plan Template  205
APPENDIX J: Outcomes-Based Assessment Plan and Report for Program Review Purposes Checklist  207
REFERENCES  217
ABOUT THE AUTHOR  223
INDEX  225
TABLES AND FIGURES

Tables
Table 3.1  Alignment Table  53
Table 3.2  University of St. Thomas Student Learning Outcomes Curriculum Map Psychology BA Program  56
Table 3.3  Curriculum Map Example: Excerpt From a Sample Biology Program at the University of Hawai‘i-Mānoa  57

Figures
Figure 1.1.  Learning and development as neurocognitive skills.  11
Figure 1.2.  Semantic map of executive functions and related terms.  12
Figure 1.3.  Map of executive functions and related terms to intra- and interpersonal skills.  13
Figure 2.1.  Fitzpatrick and colleagues’ (2011) steps in program evaluation.  23
Figure 2.2.  Assessment process used at Miami Dade College.  24
Figure 2.3.  Assessment process used at James Madison University.  25
Figure 2.4.  The iterative systematic OBPR/Gather Data cycle.  27
Figure 2.5.  OBPR can be implemented at multiple levels.  30
FOREWORD
In 2006, when the first edition of this book was published, I wrote that a book like this was much needed. Since then, an outcomes-based approach to program review has become more needed than ever. That is not to say that program reviews have become more popular; indeed, too often program reviews do not lead to significant improvements, change, or new resources and are therefore viewed (and painfully undertaken) as a useless and time-consuming ritual. As well described in this updated edition, however, when program reviews are well designed and highly intentional, they are value-adding and, in some cases, even transformational. I am grateful to the author, Marilee Bresciani Ludvik, and the colleagues who participated in the development of this updated version for making such a detailed and comprehensive framework for program review available.

Many of the challenges facing higher education have been around for years, and it would be easy to say that they will pass or be endured and business can be carried on as usual. But this is no longer the case. New challenges go to the very core of higher education and the belief that it is the key to the future social and economic development of the nation. In this context, program review has a critical role to play in the ability of higher education to respond to new challenges and demands, and an outcomes-based orientation to program review is more important than ever—can the outcomes of programs be adapted to the changing character of the discipline, the workplace, and the broader society in which students will function? This is the challenge that all programs face today and for which they are increasingly accountable.

While accountability for higher education has been with us for more than a decade, it has recently taken on a much sharper and more problematic character. In the past few years, for example, higher education has become increasingly polarized and politicized. Recent surveys have reflected that a substantial portion of the populace does not believe higher education is heading in the right direction, and, within this group, a significant number believes higher education is damaging to the future of the country. In a similar vein, free speech challenges have been promoted to highlight the perceived bias of the academy, raising questions about how open colleges and universities and programs are to differing political, ideological, and religious viewpoints. At
the ground level, the teaching of controversial subjects is all the more difficult in an era of classroom recordings and social media. This has led to an increasingly strident and well-orchestrated campaign asserting that higher education is more about indoctrination than a balanced education. As such, the role and value of higher education have become hotly contested.

At the same time, there is a chorus of voices questioning the sustainability of higher education due to technological advances; the growth and acceptance of new providers, such as coding academies; and the need for continuous learning rather than an outmoded concept of “one and done” upon the completion of a degree. Multiple short-term certificates are argued to be more appropriate and financially sustainable in terms of accommodating the changing character of work, the gig economy, and the multiple jobs and often careers today’s graduates are likely to experience.

Financial sustainability has become a key issue as well, at both the institutional and programmatic level. Despite the recent strong economy, funding for public institutions is typically well below pre-2008 recession levels even as demand and costs have risen significantly. The business model of higher education, with an increasing reliance on student tuition at all but the most well-endowed institutions, can no longer be continued. Predictions abound that many small tuition-driven colleges will not survive the coming decade due to flat high school graduation rates and increased expenses and competition. In this context, many institutions have engaged in a program prioritization process to determine whether the existing configuration of programs can be sustained, leading to program closures and consolidations (or at least the recommendations for such actions). If combined with program review, or undertaken as a substitute for it, such prioritization exercises can create an environment of fear and threaten low-enrollment or less economically valuable programs. This fundamentally changes the nature of the review from open-ended inquiry to defensive protectionist behavior, even if no dramatic changes ultimately take place.

Yet another challenge lies in the need for institutions and programs to demonstrate the career preparedness and employability of graduates. While the mantra of higher education is the need to provide a broadening education along with preparation for a meaningful life and career, surveys repeatedly show that over 85% of students now attend college in order to gain a job. A recent Gallup survey found that there is a significant gap in the perceptions of the preparedness of today’s graduates (Busteed, 2014). In this extensive survey, 96% of provosts and academic leaders expressed confidence that their graduates were well prepared for the workforce, yet only 11% of employers so reported. Repeated surveys show that basic skills needed for success in the workplace are often lacking in new hires. Employers in turn
feel excluded from curriculum development or opportunities to help faculty and students understand the dramatic changes in the workplace, where content preparation alone is no longer sufficient for success. “Soft skills” are now essential for employability, in such areas as teamwork, oral and written communication, problem-solving, critical thinking, creativity, innovation, and entrepreneurship. The Quality Assurance Commons of Higher and Postsecondary Education created a pilot with 27 programs across multiple disciplines, modalities of instruction, and institutional types, and we found that few programs could articulate, in work settings related to the discipline, what the outcomes should be in these areas for all graduates (The QACommons, 2018). Most programs were also failing to assess proficiency in these areas. For example, even though all programs stated that written communication and problem-solving were key elements of their curriculum, there is a substantial difference between an extensive, well-developed research paper with footnotes and references and a one-page problem-solving and decision memo addressing an important business issue. We have characterized these skills as essential employability qualities and developed a certification process to ensure their achievement (theqacommons.org). We similarly found that many programs did not publicly identify career pathways for students early in their studies, and surveys show that too few students avail themselves of career services located somewhere outside the department, and too often at the end rather than at the beginning of their studies.

Additionally, transparency has become an increasingly important dimension of program and institutional quality. For all institutions participating in Title IV, the Department of Education Scorecard (collegescorecard.ed.gov) identifies the graduation rates and median salaries of graduates with financial aid from the institution and compares these results to the national median. Such a focus reifies the importance of earnings as a central measure of quality, and for many it is the primary basis for determining the worth and value of a discipline or program. While the scorecard data are at the institutional level, making no distinctions between different majors or programs, the Georgetown University Center on Education and the Workforce periodically publishes salary data for each disciplinary area. Their reports have significant influence among many governors and policymakers, leading to questioning the continuation of majors that do not lead to salaries sufficient for graduates to repay student loans. Several governors have used these data to question the value of humanities and some social science programs in favor of science and technology programs that are seen as consistently more in demand and higher paying, giving their states a perceived greater return on investment.
Another dimension of the focus on transparency relates to completion—who graduates, how long does it take, at what cost, and what can be done to improve completion? Currently the Integrated Postsecondary Education Data System (IPEDS) collects and reports completion data at the institutional level for all full-time fall-enrolling first-year students (IPEDS, 2018). These data are reported and compared to the national median on the scorecard website for all institutions. New IPEDS outcomes data reporting will now include part-time and transfer students at the institutional level, giving a more comprehensive picture of completion.

Increasing attention, however, is being placed on the need for program-specific data and reporting, and many states have initiated data collection at the program level to track completion and job placement, salaries, and so on. Efforts are under way for the National Center for Education Statistics to develop program-specific completion data in a national database, though this is likely several years off. The upshot of all this is increasing attention on the productivity of programs, not just institutions, in terms of completion for different categories of students (e.g., first-generation, Pell-eligible). An early example of a robust system of reporting of salary data by discipline at each University of Texas campus, based on U.S. Census Bureau data, is found at SeekUT (seekut.utsystem.edu). We are likely to see many more of these transparent program-centered databases at the state level beyond a single system, and eventually they will include private institutions as well.

The cumulative impact of these trends is the need for all programs, whether at public or private institutions, to demonstrate their value and connection to workplace and societal needs, in addition to those of each discipline. The signal importance of Outcomes-Based Program Review: Closing Achievement Gaps In- and Outside the Classroom With Alignment to Predictive Analytics and Performance Metrics, second edition, is that it not only highlights these issues but also provides a comprehensive guidebook to organize a program review process tailored to each institution’s and discipline’s context. It is neither a one-size-fits-all approach nor a cookbook with a single recipe. Its strength lies in its emphasis on thoughtful inquiry well before the program review process is even begun. Its focus on outcomes goes beyond the typical program learning outcomes related to course learning outcomes, and it addresses institutional culture and the intended outcomes of a program review process in light of increasing transparency, accountability, and comparability. By providing samples of approaches and models from a variety of institutions, readers can find tools to fit their needs.

I especially appreciate the emphasis on the need to create an outcomes-based culture within the institution, not just in individual programs. When I was president of a regional accrediting commission, our goal was to embed
outcomes-oriented thinking as part of the ongoing institutional culture, tied to ongoing improvement efforts, not just an episodic response to our requirement of self-study and attention to assessment of student learning outcomes. This goal was achieved by only a relatively small number of institutions. A well-designed and implemented system of program review, such as that developed in this book, can move institutions to achieve this important goal.

Ralph Wolff
President and Founder, The Quality Assurance Commons of Higher and Postsecondary Education; and Former President, The Senior College Accrediting Commission of the Western Association of Schools and Colleges (WASC)
References

Busteed, B. (2014, February 25). Higher education’s work preparation paradox. Gallup. Retrieved from https://news.gallup.com/opinion/gallup/173249/higher-education-work-preparation-paradox.aspx?utm_source=link_newsv9&utm_campaign=item_185804&utm_medium=copy
IPEDS. (2018). Use the data. Retrieved from https://nces.ed.gov/ipeds/use-the-data
The QACommons. (2018). The EEQ pilot. Retrieved from https://theqacommons.org/the-eeq-pilot/
PREFACE
How to Use This Book

Follow effective action with quiet reflection. From the quiet reflection will come even more effective action. (Drucker, 2008, p. 64)
In 2006, the kindness and generosity of John von Knorring, James A. Anderson, Jo Allen, and many other institutional leaders and scholars, along with amazing Texas A&M University faculty, staff, and graduate assistants, provided an opportunity for Outcomes-Based Academic and Co-Curricular Program Review: A Compilation of Good Practices to be published.

Much has changed since 2006. Today, there is confusion about whether the intended result of all that goes into the creation of one college degree should be called a performance indicator or an outcome. There is confusion about the roles of competency-based assessment,1 outcomes-based assessment, and program review. Regional accreditation is under question, as are many international quality assurance practices. Furthermore, there is debate about whether machine learning or artificial intelligence should guide institutional leaders in their decision-making processes. In addition, confusion exists about which set of institutional results (either those from evidence of student learning and development or those from institutional performance metrics) should be used for state and federal resource allocation and reallocation practices. If you add to this the growing number of questions on the value of a degree, whether colleges and universities are really promoting student learning and development, and the explosion of research from neuroscientists allowing us to better understand how humans learn and develop, this is a perfect time for a book redo.

Having said all of that, the processes for good systematic inquiry remain the same. High-quality systematic inquiry remains relevant and, we would argue, is increasing in importance. The tools organizations use to collect data have changed or, perhaps more appropriately, have multiplied. The questions we ask and the way we ask them may or may not have become more sophisticated, depending on who is asking them. The point is that the good practice processes for inquiring into how well something is working have largely
remained the same. However, the way data are collected, integrated, and used for decision-making has changed and must continue to change. We posit that more intention needs to be placed on constructing quality questions that guide the inquiry process. This is a key change in this new edition that has driven other changes within.

The intention of this book is to utilize a systematic process to inform institutional self-reflection of how well the organization is achieving its intended purpose. This process is not prescriptive; rather, it is reflective, adaptive, and collaborative. To engage in it fully requires intentional action, reflection, and dialogue across organizational boundaries and hierarchies. We recognize that opportunities to engage in meaningful dialogue are being forced off many leaders’ calendars as they respond to a growing number of campus crises. In recognition of that, this book offers guidance and questions that can be adapted for use by leaders at various levels of an organization in order to aid in a recommitment to and a framework for collaborative, reflective questioning, inquiring, problem-solving, visioning, and planning.

What is also new in this book is an intentional connection of outcomes-based assessment program review (OBPR) to performance indicators and predictive analytics. While we recognize that the implementation of machine-learning analytics may save leaders decision-making time, as Karyn Bright (2016) wrote, “Even using the most advanced analytics tools on the market, bad data can only provide you with bad insights.” She further espouses: “As the saying goes, ‘If you torture the data long enough, it will confess to anything’” (p. 1). To emphasize this point, Taylor Armerding (2013) showcases how many Big Data experts warn that “the temptation to let the computers do it all can lead to poor choices that are not contextualized properly” (p. 1). And this, he asserts, can lead to “damaging the personal lives of individuals” (p. 1). Many Big Data programmers and analytic experts are cautioning decision-makers away from dependency on analytics and back into dialogue around inquiring whether the “right” questions are being posited, exploring how the data are gathered and how they are analyzed, and encouraging contextualization of the interpretation of data. They emphasize that systematically collected data can inform decisions; however, without dialogue around what the data really mean and how they should be used, human lives can be harmed. A common saying I have used in my own institutional research and OBPR practice is, “Just because you can, doesn’t mean you should.”

The challenge of asking good questions and systematizing data collection and ethical predictive algorithms is that good inquiry practices still take a notable investment of time. With the continued massification of higher education along with the desire to make college more affordable and more accessible, higher education leaders are finding increasingly less time for
reflective, collaborative dialogue that is informed by a variety of evidence. We are so pressed for time that searching for the wise question sometimes remains elusive. Meanwhile, the value of a higher education degree continues to be questioned. Will the use of predictive analytics improve the quality of a degree as evidenced by improved student learning and development? Or will it simply accelerate time-to-degree so that leaders can argue higher education is now more affordable? What will predictive analytics do for access, persistence among all students, and the ability to close achievement gaps if we don’t also assess the intended outcomes of the programs designed to influence those institutional goals? How will we know how to increase the value of a degree without gathering the kinds of learning and development data that will prove or disprove stakeholders’ criticisms of it? Every researcher knows that good inquiry takes time as well as a great deal of dialogue among those who can improve the process of inquiry while also collaboratively interpreting the data. Why then is it so difficult for us to commit this kind of time to studying our own process of creating human transformation within our postsecondary educational communities?

In 2006, OBPR was coming into practice as a form of institutional inquiry that moved the higher education organization away from a descriptive kind of review process into a collaborative inquiry of how well the institution was delivering what it said it did. Today, many question whether this process has systematically transformed higher education, making it more accessible and affordable to a larger number of diverse students while also providing evidence of how institutions are accomplishing the variety of student success definitions and resulting learning and development outcomes that employers expect. So, the logical first question is, are institutions engaged in outcomes-based assessment? If so, how are they engaging in it and what is that engagement producing?

According to a 2017 National Institute for Learning Outcomes Assessment (NILOA) survey of 811 U.S. college and university provosts/chief academic officers or their designees (Jankowski, Timmer, Kinzie, & Kuh, 2018), 82% of survey respondents reported having adopted learning outcomes for all of their undergraduates. They also stated that assessment activity overall had significantly increased along with the range of assessment tools and measures that were used to evaluate learning and development. The intent of gathering and using that data was to improve student learning and development as well as to address equity concerns. While accreditation was reported as a primary driver, 64% of the provosts provided examples of internal policy, programmatic, or practice changes made that were informed by assessment results. This indicates that the use of these results remains primarily internal. Fifty-seven percent of the provosts reported remaining challenged by how to
use this information to convey to the public that improvements were being made. Note that the emphasis was not on what outcomes-based improvements to share but how to share them with the public.

If you associate this survey information with an increasing skepticism of the value of a degree and the increasing demand for using transparent performance indicators (e.g., whether the degree recipients were able to obtain a job in their desired field and earn a salary where they could repay their educational loan [if applicable] or performance indicators published by Jennifer Engle and the Bill & Melinda Gates Foundation [postsecondary.gatesfoundation.org/wp-content/uploads/2016/02/AnsweringtheCall.pdf]), there is a need to better understand how institutions can take the good work they produce from their outcomes-based assessment processes and link that work with the interpretation of visible performance indicators in the domains of access, progression, completion, cost, and post-college outcomes. This book affirms good practice outcomes-based assessment approaches already in existence while offering meaningful advice and guiding questions on how to connect good institutional inquiry practice that may be used internally. It also offers performance indicators that are used externally to determine whether institutional quality and performance are meeting the public’s expectations.

Returning to the 2017 NILOA provost survey, the survey findings informed recommendations such as a need to improve communication of how internal outcomes-based assessment results used for improvement could be better linked to external indicators of performance. Findings also indicated that it is imperative that institutional leadership link programmatic assessment results and subsequent decisions to governing board discussions about whether institutional accountability for high-quality student learning and development is evident. While emphasizing the need for the additional professional development that survey respondents requested, the survey results pointed to the importance of influencing all members of the postsecondary institution to engage in moving away from implementing outcomes-based assessment as an obligatory response to accreditation demands and more toward seeing how those processes could collaboratively inform and possibly even shift the institutional effectiveness dialogue altogether. It could only be through this process that performance indicators would become meaningful and all the work that contributes to whether those indicators go up or down would be understood fully by those responsible for their creation: the faculty, staff, students, community partners, and governing bodies of the institution itself. And this is exactly what this book focuses upon.

Accordingly, rather than using accreditation requirements as a motivator, we emphasize that this kind of inquiry is what Peter Senge (2006) has popularized as the self-reflective process a learning organization embodies. To
fully understand how outcomes-based assessment could be integrated into program review, used in predictive analytics, and used to inform performance indicators, a new book needed to be written. To achieve this, this book shares the practices and evidence of institutions that are doing this well. While this book doesn’t claim that one institution is doing all of this well, it showcases sample practices from a variety of institutions to illustrate how good practice OBPR is implemented and includes dialogue prompts to discern how it can be created within your institution.

To emphasize this point, we invite you to drop the notion that you may be picking up this book to satisfy an accreditation demand. We have no idea what will happen with the future of accreditation. Organizations such as the Quality Assurance Commons (theqacommons.org) are seeking to redesign accreditation by linking competency-based assessment, program review, and performance indicators that focus on student learning and development. These are interesting times for higher education leaders of all levels. The invitation then is to use this book to embrace becoming a learning organization, or to enhance or sustain your already existing learning organization by discovering or reinforcing the emergent good practices that have been identified from a critical document analysis of outcomes-based assessment literature (see Appendix A). This analysis both affirms and challenges many existing practices and identifies institutions that are moving toward more exemplary practice (Appendix B). If you are an organizational leader at any level of your postsecondary institution (faculty, staff, high-level administrator, student, concerned community member, alumni, parent, or guardian) or a member of a governing body, there is something for you in this book.

The book is organized into the following chapters. Some of the material is similar to the 2006 version, with updates based on literature and evolving good practice processes, while the rest of the material is quite different.

Chapter 1 “Why: It’s About Becoming a Learning Organization”
In this chapter, we discuss what a learning organization is, what a good practice is, and how ineffective leadership can derail any good practice inquiry process. We also address how learning and development actually occur, thus reaffirming the importance of integrating solid inquiry practices into organizational decision-making.

Chapter 2 “What: Defining Outcomes-Based Assessment Program Review”
Here we define outcomes-based assessment and OBPR in a manner that affirms sound inquiry practice and the creation and sustainability of a learning organization. Furthermore, we introduce the linkage of performance indicators and
predictive analytics to OBPR as well as to the organization’s ability to assure high achievement for all students (HAAS) or competency-based assessment.

Chapter 3 “More Why and What of Outcomes-Based Assessment Program Review”
In this chapter, we discuss the connection of OBPR to predictive analytics in more detail and outline and explain the components needed to document evidence of a learning organization using the OBPR process. Several good practice institutional examples are referenced in this chapter. In addition, we explore the questions organizational leaders face in the process of making documentation of data and decisions meaningful as they relate to organizational capacity and priorities to improve.

Chapter 4 “Criteria for Good Practice Outcomes-Based Assessment Program Review”
This chapter is informed by a document analysis of literature found in Appendix A and implementation of OBPR at institutions (Appendix B) judged to be exemplary. The criteria used were reviewed and critiqued by 33 purposefully selected assessment scholars, practitioners, and accreditation professionals. The intention of this chapter is to provide a way to evaluate how well your organization is becoming a learning organization using the OBPR process. Questions are posited that invite institutional leaders at all levels to consider how all of their decision-making processes are contributing to improving their learning organization.

Chapter 5 “How and When: Key Questions to Consider When Implementing Good Practice Outcomes-Based Assessment Program Review”
Given that continuous questioning of one’s progress toward improvement is critical, this chapter addresses the questions organizational leaders at all levels should consider as they implement good practice OBPR within their institutions. These questions are also intended to ensure the connection of outcomes-based assessment program review practices to performance indicators and the selection of predictive analytics.

Chapter 6 “Overcoming Barriers to Implementing Good Practice Outcomes-Based Assessment Program Review”
While we assert that OBPR is not necessarily ineffective leadership proof (ILP), there are many barriers that come into play when postsecondary institutions are integrating established good practices. This chapter offers evidence-based examples of how to overcome common barriers. In addition, considerations for organizational leaders who face the challenge of providing
evidence of continuing improvement to all of their stakeholders for all of their institutional priorities are proposed.

Each chapter closes with key learning points that you can utilize to design professional development for your learning organization members. Also included at the close of each chapter are key questions for organizational members at all levels to reflect upon and discuss as they implement OBPR. The intention is to provide you with the means to collaboratively figure out how your organization can transparently demonstrate evidence of how you learn and improve on the path to high achievement of student learning and development for all students while fostering human flourishing and overall organizational well-being.

To get the most out of this book, we highly recommend you read it from beginning to end. If you choose to just read a chapter or two because you are in a hurry and need some sort of immediate guidance, you can. However, when you are able, please be sure to read all the chapters for your optimal learning and understanding of what we are trying to create with this book—a learning organization that connects its OBPR processes with its strategic planning, action planning, predictive analytics, and comparable performance metrics in meaningful and sustainable ways. In this way, we can empower all within the organization to understand just how they are creating what is being applauded or criticized by the public. Enjoy!
Note

1. “It seems evident, then, that wide-spread acceptance and adoption of the CBE [competency-based education] model will require high-quality competency assessments linked to meaningful labor market outcomes. . . . When developing competency assessments, there are two important stages. The first is assessment development and score validation—in other words, do scores on the assessment reflect the different levels of knowledge and skills that assessment designers are trying to measure? The second is determining how well a student must perform on the assessment in order to demonstrate competency—in other words, what is the cut score that separates the competent from the not-yet-competent?” (McClarty & Gaertner, 2015, p. 3).
ACKNOWLEDGMENTS
This book is in grateful acknowledgment of the hundreds of faculty, administrators, staff, students, and community partners, including accreditation representatives and government officials, as well as the curmudgeons, whose commitment to and/or communicated expectation of establishing good practice inquiry for learning organization reflective practice made this book possible. This book was also inspired by and offered in hope to every prospective student, alumnus, alumna, and beneficiary of a higher education degree who knows how higher education can positively transform lives.

I am grateful to those 33 scholars, practitioners, and accreditation specialists who took time out of their incredibly busy schedules to share their wisdom and critiques. I am also grateful to Steven Butler, whose commitment to help me with my work on this and other projects continued even after his graduation from the master’s degree program at San Diego State University. His expectation that we collaboratively improve educational systems for his children and others’ children is the kind of passion that keeps me going.

Finally, I am so very grateful to my husband, Robert Fanjas, whose belief in higher education faculty and staff’s ability to inspire creative minds that make the world a better place motivates me every moment of every day. Thank you for being you!
1
WHY
It’s About Becoming a Learning Organization

Education is the most powerful weapon which you can use to change the world. (Attributed to Madiba [Nelson] Mandela, 2007)
While you may be tempted to begin without reading the preface, we invite you to read the preface first, because that is where we explain the context for this book. If you still desire to begin reading here, then simply know this: Your organization can no longer afford to engage in outcomes-based assessment program review (OBPR) for accreditation purposes only. While we understand from the 2017 National Institute for Learning Outcomes Assessment (NILOA) Provost Survey that accreditation remains the primary driver for demonstrating use of outcomes-based assessment evidence, provosts also state that authentic measures of student learning provide the most useful information to improve employer-expected outcomes (Jankowski, Timmer, Kinzie, & Kuh, 2018). What is also a compelling finding from this survey is that provosts are “more interested [than they were in the previous survey results] in finding ways to help faculty and staff develop the attitudes and tools to produce actionable results to improve student learning” (p. 3). According to these survey results, there is an increasing internal desire to widen stakeholder involvement to show how student learning policies and practices are being improved and equity gaps are closing, all informed by the use of outcomes-based assessment data. As such, we invite the reader to consider how you might use the OBPR process to demonstrate that your organization is a learning organization engaged in this kind of inquiry.
Higher Education as a Learning Organization

Peter Senge popularized the notion of learning organizations in 1990 when he first published his book The Fifth Discipline: The Art and Practice of the Learning Organization, later republished in 2006. Senge defines a learning organization as an organization that leverages systems thinking. Within a learning organization are people who are committed to enhancing the influence or impact of what they create by seeing how what they do connects with others’ perspectives and ways of doing. For example, faculty may want to know whether what they teach or how they teach it really influences students’ learning and development. And student leadership program coordinators may want to know how well their program offerings impact students’ ability to obtain leadership positions and work within them positively and effectively.

Learning organizations are where people continually expand their capacity to create the results they truly desire, where new and expansive patterns of thinking are nurtured, where collective aspiration is set free, and where people are continually learning to see the whole together. The basic rationale for such organizations is that in situations of rapid change, only those that are flexible, adaptive, and productive will excel. For this to happen, it is argued, organizations need to discover how to tap people’s commitment and capacity to learn at all levels. (Senge, 2006, p. 114)
A learning organization is committed to monitoring how its decision-making processes, regardless of whether it uses predictive analytics, implementation of good practice literature, outcomes-based assessment results, or intuition, are resulting in quality creations. For higher education, this means we need to get clear on what it is we are creating and how we are all contributing to creating it. If we are creating new knowledge, good practice business processes, graduates with degrees or certificates in specific disciplines or interdisciplines, or successful transfer students, how will we know we are doing that well? How does it all interconnect? And how will we demonstrate our organizational growth from what we learned? If we only want to increase the number of bodies who graduate within four years, separate from creating new knowledge, and separate still from creating sound business processes, then perhaps further reflection and dialogue across the institution aren’t necessary. However, if we want to see how all those systems influence each other and, in particular, how they relate to graduating students who are ready and capable of applying their knowledge in new fields of endeavor, then OBPR is a wise approach.
OBPR is an intentional inquiry process designed to gather meaningful data that can inform and contextualize predictive analytic metrics and other comparable performance indicators. It can also inform the selection of new performance metrics. This, however, requires the OBPR process to be implemented in a deliberate and collaborative way. This book provides some examples of the thoughtful ways in which OBPR can inform performance metrics and inform the selection of those metrics that perhaps have not been widely considered before, such as metrics needed to close achievement gaps. Furthermore, this book also illustrates how OBPR intersects with competency-based assessment and implies a process of inquiry that can be connected to how we understand how students learn and develop and therefore would connect to improving an individual student’s experience as well as individual systems within an organization, if leaders choose to do so.

It is true of any systematic inquiry process that its existence alone cannot fix problems within an organization where problems are allegedly caused by (for lack of a better term) ineffective organizational leadership. And many of us have experienced that ineffective leadership can occur on any level of the organization. The OBPR process will only shine a light on explicit problems. The humans within the organization are required to reflect on the problem and its root and systemic causes; dialogue about potential solutions; and then make decisions (that are informed by what the OBPR process produces) to address what created those problems or ask different questions to inform another round of data collection, analysis, interpretation, and decision-making. While this book discusses the how-tos of the process, it does not show how to make the process “ineffective leadership proof” (ILP). In addition to institutionalizing a transparent OBPR reflective inquiry process (e.g., assuring appropriate resources and informed administrative support), we recommend that the following three preconditions be met to make your OBPR process align with the goal of becoming a learning organization (Blumenstyk, 2014; Bok, 2013; Mettler, 2014; Stevens & Kirst, 2015):

1. Make high-quality student learning and development for all students the first institutional priority. High-quality student learning and development for all students is not every higher education organization’s first priority. And even if an organization states that it is, evidence that this statement is fact must be found within the organization. Without consistency in policies and practices as well as appropriate resource allocation, evidence that high-quality student learning and development is the first order of organizational business will be hard to demonstrate. OBPR can and has pointed to the lack of student learning and development
as an institutionalized priority as a problem in the ability to improve student learning and development for all students. If this is the case for your organization, leaders, at all levels, need to make the evidence-based choices that will move this priority back into its primary place of importance.

2. Personnel processes need to be in place that inform decisions to hire, review, promote, dismiss, and/or refer administrators, faculty, and staff for professional development on the basis of how each is collaboratively engaged in the (a) assurance of high-quality student learning and development for all students; (b) positive cultivation of human flourishing (we’ll explain what this means later); (c) meaningful advancement of creative solutions, whether through community engagement, various forms of culturally relevant expression, or development of new knowledge; (d) demonstration of good stewardship of resources without compromising human flourishing; and (e) procurement of new resources to advance some facet of organizational purpose, mission, or vision. The emphasis here is that personnel processes must be crafted in a manner that will advance a high level of achievement of the purposes of the organization. Currently, many organizations do not use any direct evidence of student learning and development to inform their decisions to hire, review, promote, dismiss, or refer administrators, faculty, and staff to professional development. OBPR has pointed to this as a problem that leaders—at all levels—need to address.

3. Resource allocation and reallocation processes (time, space, personnel, professional development, adequate compensation and workload expectations, provisions for well-being, financial aid, etc.) should ensure that the purposes of the organization are achieved at a high level. Currently, many organizations operate on fixed budgets driven by fixed costs and fixed funding formulas, as opposed to being driven by the priorities of cultivating high-quality student learning and development for all students. The manner in which higher education is primarily funded is often based on formulas that revolve around number of students enrolled or number of units in which students are enrolled. While many districts/systems and states have engaged in various forms of performance-based funding, the metrics informing those decisions are often void of any student learning and development outcomes data. Lindert and Williamson (2016) illustrate how the United States has less of a “lack of resources” problem and much more of a “skewed priority distribution of resources” (p. 27) problem. Again, OBPR can and has
pointed to the inability to make improvements in learning and development due to a lack of resources directed at needed improvements. Leaders at all levels are needed to fix the “skewed priority distribution of resources” problems.
While we know that effective leadership can assure the establishment of a good practice learning organization within any collection of systems, we also must recognize that ineffective leadership can completely subvert it (Kramer & Swing, 2010). As such, waiting for leaders to demand that you engage in OBPR because accreditation requires it is not useful in establishing and sustaining a learning organization. It takes everyone’s effort. I have served at the level of administrative leadership responsible for ensuring that OBPR is completed at my institution and am now serving as a faculty member who has been guilty of hiding in my office to secure tenure or simply to survive the teaching load; I can attest to the importance of this statement. So I must say it again: activating these three preconditions for a learning organization requires everyone’s efforts. If you are reading this book, you likely are already aware of this truth. Now the question is, whom will you invite to read this book along with you so that you can collaboratively put a learning organization in place?
Defining Good Practice
According to Merriam-Webster, a good practice is “a good or wise thing to do” (Good practice, n.d.). In reexploring what constitutes a good practice, the Food and Agriculture Organization of the United Nations (2014) formulated an applicable definition that includes criteria. Here, the definition of a good practice is
not only a practice that is good, but a practice that has been proven to work well and produce good results, and is therefore recommended as a model. It is a successful experience, which has been tested and validated, in the broad sense, which has been repeated and deserves to be shared so that a greater number of people can adopt it. (p. 1)
The criteria for good practice in accordance with the Food and Agriculture Organization of the United Nations are listed in Box 1.1. Note that these definitions and their criteria provide no specific length of time to indicate how long a good practice process needs to be in place before it is considered a sustainable good practice. As such, we thought it necessary to ask again: What does it mean to identify good practice in OBPR? What might some specific criteria be with which we could report
BOX 1.1. Criteria for What Constitutes a Good Practice
• Effective and successful: A “good practice” has proven its strategic relevance as the most effective way in achieving a specific objective; it has been successfully adopted and has had a positive impact on individuals and/or communities.
• Environmentally, economically, and socially sustainable: A “good practice” meets current needs, in particular the essential needs of the world’s poorest, without compromising the ability to address future needs.
• Gender sensitive: A description of the practice must show how actors, men and women, involved in the process, were able to improve their livelihoods.
• Technically feasible: Technical feasibility is the basis of a “good practice.” It is easy to learn and to implement.
• Inherently participatory: Participatory approaches are essential as they support a joint sense of ownership of decisions and actions.
• Replicable and adaptable: A “good practice” should have the potential for replication and should therefore be adaptable to similar objectives in varying situations.
• Reducing disaster/crisis risks, if applicable: A “good practice” contributes to disaster/crisis risk reduction for resilience.
Source: From the United Nations, 2014, p. 1.
institutions showing evidence of having obtained such OBPR good practice? To answer this, we began with what was used in the 2006 publication to determine good practice in OBPR and added to it. Appendix A contains a list of the documents that were examined to determine what criteria constitute OBPR good practice or challenge it. Following a document analysis of the contents of these publications, a list of good practice criteria for OBPR was compiled (the original list, a confidential document, is not included in this publication). The list of good practice criteria was sent to 33 purposefully selected OBPR scholars, practitioners, and accreditation leaders to critique and edit. The results of that process (document analysis and external critique) are found in the list of criteria detailed in chapter 4.
The selection of good practice institutions included in this study also followed a different process than was used for the first edition in 2006. The starting list of institutions was derived from selecting institutions that had previously been cited in good practice OBPR publications (see Appendix A). In addition, a panel of 33 OBPR scholars, practitioners, and accreditation leaders was invited to recommend institutions that met or partially met the newly formulated list of good practice criteria. The 45 institutions that served as the study population for this book are listed in Appendix B. Specific examples used in this book are based on information that was publicly available on institutional websites in 2017. The reason for doing so is that a commitment to transparency is a criterion of good practice. By the same principle, some of the institutions in Appendix B do not have examples included in this book due to a lack of publicly available information. You will also note the adoption of new terminology. Throughout the book, we will explain terminology relevant to the OBPR topic being introduced or discussed. Prior to delving into this book, it is imperative to note that there is a great deal of new research on how a human being learns and develops. This is important to understand in OBPR because it underscores the significant need for collaboration across divisions (systems) that historically have referred to themselves as curricular (faculty) and cocurricular (student affairs and academic support services). The collaboration across these systems was reported in the 2017 NILOA survey as a necessity for improving student learning (Jankowski et al., 2018). How a learning organization manages itself (e.g., organizes itself into departments and divisions and functional areas) will influence the manner in which it engages in inquiry into how well it does what it is supposed to do (Senge, 2006). When considering a learning organization whose primary business is the cultivation of high-quality human learning and development for all, the human and the way that the human actually learns and develops can’t be divided so easily. Consider this analogy. When a car is manufactured, several parts are sourced from a variety of suppliers. Just as higher education organizations recruit their students, working professionals, and faculty from across the United States and the world, the parts that make up a car originate in similarly disparate places. Those varying parts of the world have their own quality assurance processes for those car parts, as varying parts of the world have their own quality assurance processes for their secondary school systems. Car manufacturers are aware of and may specify the quality criteria when they integrate parts into their products. Attention to the quality of each part is essential to avoid the failure of one part leading to a recall of a whole model range. The aim of various departments and supply management processes
is to eventually assemble an acceptable quality car that meets criteria of affordability, value, and quality for their customers (with the understanding that affordable, quality, and valuable remain relative terms with relative performance indicators). Cultivating high-quality student learning and development is dissimilar from car manufacturing, yet its management and assessment share some similarities. Well-thought-out OBPR can inform which departments are effectively supporting students, which classes are heightening student learning, and which programs are cultivating employer-desired outcomes such as the ability to work collaboratively within groups. However, without decision leaders stepping back and looking reflectively, collaboratively, and holistically at all the parts of their organization and how they interact, these leaders won’t have data to inform the performance metrics that can then prioritize the actions and resource allocations to achieve significant and sustainable improvements in quality learning and development for all students. Even if organizational leaders assure that every academic program and service provider department is using the same OBPR process, the process won’t look exactly the same, because departments may be subject to varying professional accreditation processes and standards influencing quality assurance requirements. Furthermore, there are varying personnel review processes at play as well as budgeting and resource allocation processes. This is one reason that conveying the multitude of internal improvements that arise from engaging in OBPR is very difficult for institutions to do. And there is more. How many organizational leaders are setting aside the kind of time it takes to examine holistically how students move through their organization and how they accidentally or intentionally interact with all of the parts of the organization in order to, for instance, graduate in two or four years (e.g., in-class learning, out-of-class learning and experiences, interaction with student services, mentors, coaches, alumni, potential employers, community service agencies, etc.)? Predictive analytics may be able to provide you with some useful data if you have set up your system in a manner where meaningful data have gone into the modeling and you are clear about all the assumptions that are in place. To thoughtfully engage in fully understanding the intricacies of all the human interactions that may be influencing two- and four-year graduation rate performance indicators in a manner that also includes discovering whether expected learning and development outcomes are being met during these time frames requires access to outcomes-based data, reflection, dialogue, and prioritization of the decisions that will result. The critical importance of doing this kind of work must be driven by the science of how humans learn and develop.
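To make the point about predictive analytics concrete, the following is a minimal, hypothetical sketch (ours, not drawn from any institution studied in this book) of the kind of model that often sits behind a four-year graduation indicator. It assumes the scikit-learn library is available, and both the input variables and the tiny synthetic dataset are invented for illustration; the prediction it produces is only as meaningful as those choices, and it says nothing about what students actually learned or developed.

from sklearn.linear_model import LogisticRegression

# Each row: [credits completed in year 1, met with an adviser (1 = yes), belonging score].
# These variables are assumptions; an institution would substitute inputs it has evidence for.
X = [
    [30, 1, 4.2],
    [12, 0, 2.8],
    [27, 1, 3.9],
    [18, 0, 3.1],
    [33, 1, 4.5],
    [15, 1, 2.5],
]
# 1 = graduated within four years, 0 = did not (synthetic labels for the sketch).
y = [1, 0, 1, 0, 1, 0]

model = LogisticRegression().fit(X, y)

# A predicted probability for a new student; without outcomes-based evidence, this number
# reveals nothing about whether expected learning and development outcomes were met.
print(model.predict_proba([[24, 1, 3.4]])[0][1])

The documented assumptions in the comments are the part most institutions skip; OBPR data are one way to test whether those assumptions hold.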
Process of Learning and Development
The process of learning and development and assuring its quality has often been referred to as complex (Bresciani Ludvik, 2016; National Research Council, 2000; Zull, 2011). Few will argue with that. Yet, we tend to keep choosing simple ways to measure how well an organization is able to deliver high-quality student learning and development, using performance indicators that are void of any evidence of direct student learning and development, such as time-to-degree. Time-to-degree may be a relevant institutional performance indicator of students’ persistence, grit, and resilience. Some argue that certain performance indicators are measures of access and affordability, such as admissions yield rates by various intersections of identities and student loan debt. Some institutional performance measures—such as persistence rates, time-to-degree, and graduation rates—may also be measures of how user-friendly the degree process is to navigate, how much support students receive just when they need it, or just how well prepared students were to succeed in the type of environment provided, regardless of what those students experienced within the organization. However, where is the evidence that informed the validity of the selection of these performance indicators? Where is the evidence that they do indeed measure expected levels of achievement for employer-expected student learning and development outcomes? This process of quality assurance uses a mixed methodology intended to understand how learning and development can be measured in a manner such that student learning can be improved. A similar methodology was used by the Center for Inquiry at Wabash College in the United States. While there were many findings from the Wabash study, following is a particular example:
Our collaborative work with [49] colleges and universities over the last five years on the Wabash National Study has led us to wonder whether the advocates for accountability and improvement have a realistic sense—both in terms of student learning as well as in terms of institutional change—of what kind of change is possible over a four- or five-year period. (Blaich & Wise, 2011, p. 16)
Perhaps institutional performance indicators such as time-to-degree have nothing to do with the measurement of high-quality student learning and development. In order to anchor OBPR into a different dialogue—that of whether higher education is a learning organization—we (Kuh et al., 2018) need
to discuss what we understand about cultivating meaningful learning and development. What follows is derived from a NILOA Occasional Paper (Kuh, Gambino, Bresciani Ludvik, & O’Donnell, 2018). Zelazo, Blair, and Willoughby (2016), in a summary of neuroscience research presented by the Institute of Education Sciences (IES), noted how learning and development are inextricably intertwined (American College Personnel Association, 1996) and, as such, can be referred to as neurocognitive skills that can be intentionally nurtured. They further clarified how those neurocognitive skills are often divided into two categories: (a) fluid intelligence or executive functions and (b) crystallized intelligence. Crystallized intelligence represents facts and knowledge that we can easily identify. We often use tests or some types of questionnaires that determine right or wrong responses when identifying whether this kind of learning is present. Fluid intelligence or executive functions are the learning and development that may appear different based on the context in which they are assessed. In other words, this type of learning and development is not easily identifiable. In order to make these malleable skills more identifiable, Zelazo and colleagues (2016) further subdivided fluid intelligence or executive functions into three areas: cognitive flexibility, working memory, and inhibitory control. Cognitive flexibility involves a person’s ability to think about any idea or circumstance in multiple ways; this would include taking into account someone else’s perspective or knowing how to solve a given problem through multiple approaches. This would be useful, for instance, in demonstrating an openness towards learning about others’ cultures, beliefs, or ideas when collaboratively problem-solving. Working memory involves both being able to recall known information in a relevant context and, applying it in a workable, meaningful, and appropriate method that is relevant to the task in hand. Inhibitory control is the process of intentionally directing attention away from a distraction, stopping an impulsive behavior, or not acting on a highly learned or engrained habit. (Bresciani Ludvik, 2018, pp. 3–4, emphasis in original)
See Figure 1.1 for an illustration of this conceptual alignment. Again referencing the NILOA Occasional Paper (Kuh et al., 2018), while cognitive neuroscience research enables us to see how learning and development are inextricably intertwined, it offers insufficient guidance for those who want to explicitly figure out how their service, course, or workshop fits into the overall design of a student’s degree as well as how to use the OBPR process to assess its effectiveness. Nonetheless, one can see the importance of gathering evidence of crystallized intelligence (facts, knowledge) as well as
Figure 1.1. Learning and development as neurocognitive skills. [Figure shows neurocognitive skills divided into fluid intelligence/executive functions (cognitive flexibility, working memory, inhibitory control) and crystallized intelligence (facts, knowledge).] Note. Adapted from Zelazo, Blair, and Willoughby (2016).
the multiple contexts in which that crystallized intelligence is applied (e.g., fluid intelligence/executive functions) to determine whether a student has learned what is needed to succeed as an employee or graduate student. To further assist in the deconceptualization of learning and development, we return to cognitive neuroscience summary findings. In Figure 1.2, the terminology that educators are perhaps more familiar with begins to appear, and you see how neuroscientists are contextualizing executive functions into two categories: that of temperament and personality and that of positive goal-directed behavior. It is important to note here that while neuroscientists are separating these neurocognitive skills into these two categories, the underlying assumption is that all of the neurocognitive skills associated with these two categories are malleable. That means that educators, with a thoughtfully designed curriculum (which is now understood as the integration of what used to be considered two separate domains, the cocurricular and the curricular) and assessment measures, can identify how what they are providing to students is helping those students persist, graduate, and demonstrate necessary acquisition of crystallized intelligence and fluid intelligence in a variety of contexts. Again, we emphasize that this is not a process of labeling something as a specific kind of experience and then counting how many students are engaged in that experience. Rather, it is a thoughtful inquiry process—an outcomes-based assessment approach—that will help us
Figure 1.2. Semantic map of executive functions and related terms. [Figure maps neurocognitive skills/executive functions (cognitive flexibility, working memory, inhibitory control) to temperament and personality (effortful control, conscientiousness, openness, self-control, grit) and to positive goal-directed behavior (reflective learning, deliberate problem-solving, emotion regulation, persistence, planning).] Note. Adapted from Zelazo, Blair, and Willoughby (2016).
better understand whether learning and development are taking place. OBPR, if well designed, can examine how well and where the components of learning and development offered by specific and distinct departments are embedded into the curriculum or learning journey of the student and then assess how well each feature is cultivating specific learning and development outcomes of importance. Reflective student learning portfolios, for instance, provide an integrative approach to capture the dynamic way in which executive functions as well as crystallized intelligence become evident for each student. Reflective student learning portfolios are very useful in OBPR. To further emphasize how learning and development are so tightly intertwined, we thought that it might be useful to provide one more way to look at it. As such, we introduce a body of work that was recently released. Herman and Hilton (2017) edited a compilation of research assembled by a committee charged by the National Academies of Sciences (NAS) to
gather relevant research to more clearly define interpersonal and intrapersonal competencies, to examine whether and to what extent a range of these competencies may be related to each other and to persistence and success in undergraduate education (especially in STEM) and to examine the extent to which these competencies can be enhanced through intervention. (p. 18)
Figure 1.3. Map of executive functions and related terms to intra- and interpersonal skills. [Figure extends the semantic map in Figure 1.2, adding intra- and interpersonal terms such as growth mindset, sense of belonging, positive future self, prosocial goals and values, and academic self-efficacy alongside the executive function and temperament/personality terms.] Note. Adapted from Zelazo, Blair, and Willoughby, 2016, p. 4; and Herman and Hilton, 2017, p. 6.
The summary of research, which explored primarily student-generated, published data on large populations, resulted in these definitions: Intrapersonal competencies “involve self-management and the ability to regulate one’s behavior and emotions to reach goals” and interpersonal competencies “involve expressing information to others, as well as interpreting others’ messages and responding appropriately” (Herman & Hilton, 2017, p. 22). To help the reader understand what the NAS committee was able to identify as inter- and intrapersonal skills and to align those skills with what we have already discussed, we offer Figure 1.3. In this figure, you may find some additional terms with which you are even more familiar. Also in this figure, you will note that the terminology from Zelazo and colleagues (2016) is represented along with the terminology used by Herman and Hilton (2017).
Now that you have a better understanding of the variety of ways in which learning and development can be deconceptualized and contextualized, you may be able to identify a variety of pre- and post-assessment inventories that already exist, which could be used to evaluate whether desired learning and development outcomes are being cultivated in the compilation of your in-class and out-of-class curriculum (here again I’m assuming the integration of what we used to consider as separate educational experiences, the
curricular and cocurricular) you already are offering on your campus. Such standardized inventories would include Dweck’s (2006) Growth Mindset Scale, Duckworth and colleagues’ (2007) Grit Scale, Jazaieri and colleagues’ (2014) Compassion Scale to assess prosociality, and Hoffman and colleagues’ (2002) Sense of Belonging Scale. You can find more ideas of these types of measures in Appendix C. We could easily add on to that list; however, the way to examine those scales and how they might be used in OBPR processes to evaluate both crystallized intelligence and executive functions/fluid intelligence in the variety of in-class and out-of-class offerings at any one institution is not the emphasis of this book. The point is to begin to examine—within your OBPR process—how well what you are offering to students integrates all that is necessary for their achievement of desired high-quality learning and development. The other point of this is to consider how each separate workshop, service, course, or experience is cultivating specific learning and development (using whichever terms and supporting research associated with those terms you might choose to write as a learning and development outcome). In this way, we begin to gather evidence as to the value of a degree or, at the very least, the value of the postsecondary educational experience, because we can articulate and assess learning and development outcomes that employers and graduate schools want to see in our students. This, we posit, is the conversation needed to determine whether there is value in a higher education degree. We also emphasize that this is an important opportunity to select performance indicators that are representative of actual learning and development (either crystallized knowledge or fluid intelligence) such as sense of belonging or overall well-being, rather than selecting performance indicators that don’t align with what we understand about how to cultivate high-quality student learning and development. As we move forward, you will note that your organization may still be allowing its current management or business model to drive how OBPR is implemented on your campus. That won’t necessarily be a wrong way to approach OBPR. It will just require more dialogue to piece everything together in order to more fully understand all of your students’ journeys to degree completion or transfer and understand what needs to be improved. If the institutional leadership begins to embrace how learning and development occur, you will begin to see how your OBPR can be designed collaboratively with student services/affairs, academic services, student success professionals, business services professionals, and faculty involved. The expectation is that this inquiry process will become institutionalized (and possibly even ILP) when its organizational members embrace evidence that their students’ learning and development is not as compartmentalized as the organization is currently managing its learning and development design
and delivery. If this happens, you will begin to provide plenty of evidence that you are indeed a learning organization. While our OBPR assessment practices might remain compartmentalized, the richness of bringing OBPR data together and reflecting on it, making improvement and resource reallocation decisions, and identifying how OBPR informs performance indicators and how it doesn’t is where the power resides. At the close of each chapter, key learning points that you can utilize to structure ongoing professional development within your organization are provided. In addition, at the close of each chapter or embedded within each chapter will be several questions that your organizational leadership, at all levels, is encouraged to discuss and act upon. Furthermore, we invite you to make that discussion and those resulting decisions as transparent as possible so that those coming into your organization or holding it accountable can benefit from knowing why your organization has selected a particular process and specific performance indicators as well as how it is leveraging that process to improve learning and development for all students; in essence, your organization can demonstrate that it is a learning organization.
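As one concrete, hypothetical illustration of the pre/post inventory comparison mentioned earlier (e.g., administering a sense-of-belonging scale before and after a program), the following minimal sketch computes paired gains. The scores are invented, and a real analysis would also attend to the instrument’s validity and reliability, sample size, and the context in which students responded.

from statistics import mean

# Paired scores for the same students before and after a program (1-5 scale, illustrative).
pre_scores = [2.8, 3.1, 3.4, 2.9, 3.6, 3.0]
post_scores = [3.5, 3.4, 3.9, 3.2, 4.1, 3.3]

gains = [post - pre for pre, post in zip(pre_scores, post_scores)]
print(f"Mean pre: {mean(pre_scores):.2f}  Mean post: {mean(post_scores):.2f}")
print(f"Mean paired gain: {mean(gains):.2f}")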
Key Learning Points From the Preface and Chapter 1
1. “Learning organizations are where people continually expand their capacity to create the results they truly desire, where new and expansive patterns of thinking are nurtured, where collective aspiration is set free, and where people are continually learning to see the whole together. The basic rationale for such organizations is that in situations of rapid change, only those that are flexible, adaptive, and productive will excel. For this to happen, it is argued, organizations need to discover how to tap people’s commitment and capacity to learn at all levels” (Senge, 2006, p. 114).
2. Time must be allocated for meaningful and useful inquiry to occur.
3. While higher education organizations serve many purposes, their primary purpose is to provide evidence that they are learning organizations delivering high-quality student learning and development to all enrolled students.
4. Processes to assure high-quality systematic inquiry into whether an organization is functioning as a high-level learning organization are required if the organization is to learn how and when it is performing well and how to improve when it is not performing well.
5. Ineffective leadership at any level can derail high-quality systematic inquiry processes if those processes are not institutionalized.
6. Good practice is “not only a practice that is good, but a practice that has been proven to work well and produce good results, and is therefore recommended as a model. It is a successful experience, which has been tested and validated, in the broad sense, which has been repeated and deserves to be shared so that a greater number of people can adopt it” (Food and Agriculture Organization of the United Nations, 2014, p. 1).
7. “The concepts of ‘learning,’ ‘personal development,’ and ‘student development’ are inextricably intertwined and inseparable” (American College Personnel Association, 1996, p. 2). As such, student learning and development must be collaboratively designed, delivered, and evaluated.
8. Some learning and development, such as crystallized intelligence, is easier to identify than other types, such as fluid intelligence/executive functions. We understand that both kinds of learning and development are malleable, and both are what employers and graduate schools want to see our students demonstrate upon leaving our institutions.
9. Taylor Armerding (2013) showcases how many Big Data experts warn that “the temptation to let the computers do it all can lead to poor choices that are not contextualized properly.” And this, he asserts, can lead to “damaging the personal lives of individuals” (p. 1).
10. Careful selection of institutional performance indicators must take place so that evidence of student learning and development can more readily align with those indicators, thus assisting institutional leaders to tell the public story of quality within their learning organization.
Questions to Consider
1. Since establishing the preconditions for OBPR requires leaders at all levels, whom will you invite to read this book along with you so that you can collaboratively implement a learning organization?
2. How well does our organization demonstrate that it is a learning organization?
3. How well do our organizational leaders (at all levels) prioritize the time to reflect on the interconnectedness of our systems (divisions, departments, programs) and then dialogue over the meaning of the data we have?
4. How well do our organizational leaders (at all levels) prioritize learning and development for all students?
5. What kinds of high-quality systematic inquiry processes are we engaged in within our organization?
6. To what degree are our high-quality systematic inquiry processes ILP?
7. What would our organization consider its good practice systematic inquiry processes to be? How well are we sharing those with each other and with our stakeholders in order to advance our own learning?
8. How well does our organization collaboratively design, deliver, and evaluate student learning and development?
9. How well are we assessing crystallized intelligence and fluid intelligence/executive function?
10. How well are we contextualizing performance indicators and our use of predictive analytics so that we don’t harm human lives?
11. How well does our selection of performance indicators align with what we understand about how we cultivate student learning and development within our organization?
2
WHAT
Defining Outcomes-Based Assessment Program Review

Every system is perfectly designed to get the results it gets. (Carr, 2008, p. 1)
The Definition
The intention behind outcomes-based assessment—that of quality assurance and external accountability in higher education—has been around for a while. There are many definitions for outcomes, assessment, and program review, particularly depending on your discipline. As such, it is important to remind readers what this book is referring to when these words are used. First, assessment is
the process of gathering and discussing information from multiple and diverse sources in order to develop a deep understanding of what students know, understand, and can do with their knowledge as a result of their educational experiences; the process culminates when assessment results are used to improve subsequent learning. (Huba & Freed, 2000, p. 7, emphasis added)
Second, Palomba and Banta (1999) defined outcomes-based assessment as “the systematic collection, review, and use of information about educational programs undertaken for the purpose of improving student learning and development” (p. 4). In these definitions, you can see that there is particular emphasis on assessment of student learning and development outcomes. As we examined additional literature, we noticed a similar emphasis on student learning and development outcomes when the term assessment was used in publications such as those by Banta, Jones, and Black (2009); Maki (2010); Banta and Palomba (2014); Suskie (in press); Schuh and colleagues (2016); and Kuh and colleagues (2015).
Specifically, Maki (2004) posits that outcomes-based assessment is a systematic means to satisfy educators’ innate intellectual curiosity about how well their students learn what they say students are learning. This definition is mirrored in the Community College of Baltimore County’s philosophy of assessment, which is posted on their website:
Assessment is a natural and ongoing component of the instructional process. All members of the institution share responsibility for student learning during their tenure at the College. Continuous improvement of learning is a collective enterprise upon which the success of instructional units depends on the organized support and cooperation of others. The process of assessing learning outcomes is a means to an end, that end being improved learning. As part of assuming the professional responsibility that goes with teaching, faculty identify, design, and implement specific learning outcomes assessments. The results, once analyzed, form the base for organized change that positively influences student learning.
Learning outcomes assessment is neither precise nor perfect, and its data are interpreted with that in mind. It is a way of thinking about quality that comes from our willingness to continually examine, question, and, as necessary, alter what we do as an educational institution. In no instance are the results of learning outcomes assessment used in a punitive manner, neither in reference to students nor to personnel. The climate of cooperation and focused efforts to improve permeates the assessment process. Such an atmosphere relieves staff of fear and allows them to approach both instructional and program assessment with an open and creative mind. Learning outcomes assessment provides feedback to faculty that allows them to strengthen and improve the educational process, which results in more appropriate, more extensive, and/or higher-level learning. (www.ccbcmd.edu/About-CCBC/Accreditation/Learning-OutcomesAssessment.aspx)
Suskie (2009) writes that a contemporary approach to assessment is “carefully aligned with goals, focused on thinking and performance skills, used to improve teaching and learning and used to tell our story: what makes our college or program distinctive and how successful we are in meeting students’ and societal needs” (p. 5). However, it is important to note that not all scholars align fully with this definition. For example, Hernon and Dugan (2004) define assessment as “the process of gathering and assembling data in an understandable form” (p. 8). There is no reference to student learning here. There are several other definitions of assessment and outcomes-based assessment that also do not reference student learning; some vary a great deal
from one another as they are embedded in various industries (e.g., psychology, medicine, business, etc.). Even within the education industry, there are differing opinions about what assessment is and what outcomes are. For instance, Cuseo (2017) illustrates that while there are many reasons students persist, the only outcome that is discussed as one of significance is student retention, as that appears to be the only thing many institutions are measuring. Scheerens and colleagues (2011) explain outcomes assessment within Astin’s Input-Environment-Output framework (1991), where “I” represents focus on the inputs, such as the qualities and characteristics of all that goes into an educational system. In the case of student learning and development, this would be all the preparation the student has undergone prior to entering college as well as all the preparation the university has undertaken to ensure that student’s success. We also emphasize that it would include the rich lived experience that the student brings into the educational environment, which informs the implicit and explicit lens through which the student perceives the experience provided. The “E” indicates the environment within which education occurs, or it could be the environment in which the student chooses to engage or is required to engage (e.g., degree pathways, out-of-class experiences, etc.). The “O” signifies the outputs, which have been more commonly referred to as outcomes or performance indicators. Scheerens and colleagues (2011) differentiate the outputs into four categories:
1. outcome indicators, differentiated as (a) output (for example, measured by [comparable] standardized achievement tests), (b) outcome (various measures that are central in productivity and effectiveness interpretations of educational quality but also play an indispensable role in assessing the equity, efficiency and responsiveness [of those providing the education]), (c) attainment (such as the number of students that complete a certain period of schooling without delay [retention, time-to-degree, and graduation rates] and are of a more administrative nature) and (d) impact indicators (indicators of the social status of students that have reached certain levels of educational attainment [such as level of degree, employability, and salary levels]);
2. process indicators (such as access, participation, transition to learning environment, and organizational processes), differentiated at three aggregation levels, national system, school [college/university], and classroom level teaching [also includes outside of classroom];
3. input indicators (such as financial and human resources invested in education, student preparedness levels), differentiated between national system, school [college/university], and teaching levels [inside or outside of classroom]; and
4. context indicators (such as demographic, social and economic context of education), differentiated between national system level indicators and the school [college/university] community. (pp. 37–38)
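One minimal way to keep these distinctions visible when inventorying an institution’s existing metrics is to tag each metric with its Scheerens category (and, for outcome indicators, its subtype). The following hypothetical sketch does only that; the metric names are illustrative assumptions, not drawn from any institution cited in this book.

from enum import Enum

class IndicatorCategory(Enum):
    OUTCOME = "outcome"   # subtypes: output, outcome, attainment, impact
    PROCESS = "process"
    INPUT = "input"
    CONTEXT = "context"

# Illustrative metrics mapped to categories; an institution would map its own.
metrics = {
    "four-year graduation rate": (IndicatorCategory.OUTCOME, "attainment"),
    "senior capstone rubric scores": (IndicatorCategory.OUTCOME, "outcome"),
    "median alumni salary at five years": (IndicatorCategory.OUTCOME, "impact"),
    "advising appointments per student": (IndicatorCategory.PROCESS, None),
    "instructional spending per student": (IndicatorCategory.INPUT, None),
    "regional unemployment rate": (IndicatorCategory.CONTEXT, None),
}

for name, (category, subtype) in metrics.items():
    label = category.value + (f" ({subtype})" if subtype else "")
    print(f"{name}: {label} indicator")

Tagging metrics this way makes it harder to treat an attainment measure, such as a graduation rate, as if it were direct evidence of learning.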
Institutional leaders may find that in using Scheerens and colleagues’ (2011) definition, a great deal of confusion could be cleared, for in today’s OBPR environment, outputs and outcomes are often confused with performance indicators and predictive analytics. Perhaps it is because within education there are a number of varying approaches to program evaluation. When college and university leaders consider engaging in OBPR, we noticed that many think of program evaluation processes that are often used in grant proposals or of measuring the value of an education degree using outcomes-based assessment. As such, clarifying these two processes is of great importance. Fitzpatrick and colleagues (2011) define program evaluation as a systematic process of gathering evidence to determine whether a program, product, policy, or system is meeting its goals and to gather the kinds of evidence that would also inform improvements. This is similar to the definition we have used in OBPR. However, the steps used in this process are different from those used in good practice institutions. In Figure 2.1, note the five key process steps recommended by Fitzpatrick and colleagues. Figures 2.2 and 2.3 are two examples of processes used in good practice institutions. Note the differences and similarities in the three figures. They all entail gathering data/information and using that data/information for decision-making. They all also identify a focus for the evaluation; however, Figure 2.1 refers to that focus as “client needs,” Figure 2.2 to “outcomes,” and Figure 2.3 to “objectives.” One apparent point of confusion for some is the assumption that student learning and development outcomes are not what the “client” needs. Therefore, those critics would prefer that a needs assessment be conducted to inform the student learning and development outcomes. And here resides a large and rather conflicted question of whether articulating program outcomes based on current needs will only prepare students to be able to respond to current problems as opposed to articulating program outcomes that prepare students to resolve present problems and design future possibilities. This consideration causes us to reflect on two quotes. When Wayne Gretzky, an accomplished hockey player, was asked about the secret to his success, he is understood to have replied, “I skate to where the puck is going to be, not to where it has been.” Einstein is believed to have said, “We can’t solve problems by using the same kind of thinking we used when we created them.” This is not to say that good practice institutions ignore stakeholders’ needs, such as employer-desired outcomes; they simply consider such needs as one piece of data in the outcomes articulation.
Figure 2.1. Fitzpatrick and colleagues’ (2011) steps in program evaluation. [Figure shows a five-step cycle: gather information about client needs; draft an evaluation proposal (the evaluation proposal describes the evaluation, context, and stakeholders; outlines the evaluation approach; and aligns the approach with the evaluation questions); collect data to answer evaluation questions; analyze data to evaluate questions; and share findings and recommendations with client and stakeholders.] Note. Adapted from Fitzpatrick and colleagues (2011).
As you can imagine, organizations engage in many more steps than are depicted in these figures to determine whether a program or the institution is effective in the areas in which it says it is effective. So, in the case of Fitzpatrick and colleagues (2011), a logic model must be constructed. First, the logic model is intended to convey important features of the program being evaluated. The theory behind the program design is posited in a logic model, and the basic features include information about program inputs, activities, outputs, and outcomes. The definition of inputs is similar to that in Scheerens and colleagues (2011). Activities are similar to what Astin (1991) would refer to as what is provided within the environment. In the Fitzpatrick and colleagues model, outputs are countable items, such as numbers of participants or clients served each week, class meetings, hours of direct service to each participant, or publications. And outcomes are defined as the immediate, intermediate, or long-term goals for participant change after they finish the program. In the case of Miami Dade College and James Madison University, specific templates are provided that mirror Fitzpatrick and colleagues’ (2011) logic model, yet provide additional detail that decision-makers find very important for context setting. In addition, as opposed to being thought of as outputs, countable items such as number of class meetings would be considered inputs or would be described as a part of the experience, as referenced in Astin’s (1991) model. An example OBPR template that integrates Astin (1991), Fitzpatrick and colleagues (2011), and Scheerens and colleagues
Figure 2.2. Assessment process used at Miami Dade College. [Figure shows a five-step cycle: identify outcomes; select and design measures; plan for data collection and implement measure; analyze data; and use results to improve.] Note. Adapted with permission of Miami Dade College.
(2011) and is based on the synthesis of good practice institutions is discussed in more detail in chapter 3. For now, we return to delineating OBPR. Because we are interested in how outcomes-based assessment can be used by an educational organization to improve itself, we illustrate how Banta and Palomba (2014) emphasize that outcomes-based assessment can “encompass the entire process of evaluating institutional effectiveness”1 (p. 2). To determine how that can be done was the intention of the 2006 version of this book, and it remains the intention today. Because there are so many varying understandings of what OBPR may mean, it is extremely important to have a group of faculty and student success practitioners (student affairs and academic support professionals, student success professionals, and other service providers) come together to reach a consensus on what OBPR is, what its purpose will be at the institution (conceptual framework), and to define a common language for what each component of it entails; if not, the reflective inquiry process won’t have any meaning to those who need to implement it
Figure 2.3. Assessment process used at James Madison University. [Figure shows a six-step cycle: establishing objectives; creating and mapping programming to objectives; selecting and designing instruments; collecting information; analyzing and maintaining information; and using information.] Note. Reprinted with permission of James Madison University.
(Bresciani, Zelna, & Anderson, 2004). Some institutions, such as Guttman Community College and Indiana University–Purdue University Indianapolis (IUPUI), chose to borrow a definition of OBPR from published literature, while others, such as Miami Dade College and North Carolina State University, developed one from conversations with faculty, staff, and outside-of-classroom educators. You can examine those varying definitions by accessing the URLs listed next to each good practice institution in Appendix B. In the 2006 version of this book, we derived a definition of OBPR from definitions used in literature and at institutions identified as good practice institutions. As we examined literature published since 2006 and definitions
used in good practice institutions in Appendix B, we noted that some scholars differentiated between processes used for accountability and improvement (Maki, 2010; Schuh et al., 2016; Suskie, 2009; Suskie, in press), while others did not (Banta et al., 2009; Banta & Palomba, 2014; Kuh et al., 2015). Given new literature, new good practice institutions, as well as the ongoing emphasis on performance indicators2 and the emerging prevalence of predictive analytics,3 we modify this definition to reflect necessary ethical guiding practices (Ekowo & Palmer, 2016) that assert the identification of outcomes and the assessment of institutional capacity when using predictive analytics to inform institutional decision-making. As such, the updated definition for OBPR is as follows: OBPR is a systematic reflective inquiry process in which program faculty, college/university professionals, students, and concerned community partners and scholars collaboratively articulate the intended results of the cumulative contribution of their program(s). In outcomes-based assessment, faculty and professionals document what the program(s) intends to accomplish in regard to its services, research, student learning and development, and faculty/staff development programs. The faculty, professionals, and community members then purposefully plan the program(s) so that the intended results (i.e., outcomes) can be achieved; implement methods to systematically—over time—identify whether the end results have been achieved; and, finally, use the results to plan program improvements or make recommendations for policy, recruitment, retention, resource reallocation, new resource requests, or additional faculty/staff development to assure human flourishing for all. Results can also be used to provide evidence of a learning organization, explain institutional performance indicator results, identify whether predictive analytic decisions are wise and skillful or causing harm, inform new data needed for improving predictive analytic processes, and select new types of performance indicators. This systematic process of reflective evaluation is then repeated at a later date to determine whether the program improvements contribute to the intended outcomes as well as the selected performance metrics.
It may be useful to illustrate this definition with a diagram. In Figure 2.4, notice typical components of an iterative systematic OBPR process, as well as the influence of Astin (1991), Fitzpatrick and colleagues (2011), Scheerens and colleagues (2011), and good practice institution modeling. Each one of these areas in this diagram will be unpacked in more detail in chapter 3. It is important to note here that in a systematic inquiry process, the organizational leaders are not ignoring other processes that are occurring within an institution, such as the strategic planning4 process, annual reporting, budgeting, and the performance indicators those processes may generate. Nor are
Figure 2.4. The iterative systematic OBPR cycle. [Figure shows a cycle of: mission/purposes, goals, and outcomes/competencies; implement methods to deliver outcomes (action planning) and methods to gather data; gather data; interpret evidence; and document decisions to improve programs, enhance student learning and development, and inform institutional decision-making, planning, budgeting, policy, public accountability, and performance metrics. Strategic planning/capacity/inputs, predictive analytics, and external review/comparative analysis feed into the cycle.]
leaders ignoring action planning5 or the decisions that may be generated from predictive analytics. Rather, that information is a part of all that is informing organizational direction and the specific purpose of various departments, which is made evident in the articulation of mission/purposes, goals, and outcomes or competencies for each program area. Furthermore, external review6 or comparative analysis can also be used to enhance the rigor of an organization’s self-reflection. Keep in mind, however, that not all institutions or programs within an institution are demonstrating evidence that they are a learning organization by collecting meaningful comparable data on student learning and development outcomes. Also keep in mind that not all institutions are prioritizing high achievement of student learning and development for all of their students. Readers are encouraged to dialogue among various institutional leaders and make clear the definition of OBPR, why you are engaging in it, and what you hope the process creates for your organization, as well as make explicit all of its components so as to avoid engaging in a process for process’ sake. IUPUI and Isothermal Community College, among other good practice institutions, have a glossary of terms publicly available on their websites
to aid all of their organizational members in defining the terms used in their process. This is incredibly important to do in order to avoid confusion across disciplines and to keep community members focused on the purpose of OBPR and how it is intended to cultivate a learning organization that optimizes human flourishing for all.
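For readers who keep OBPR documentation in a shared system, the following minimal, hypothetical sketch shows one way a single program’s record might be structured so that the components named in the definition above and in Figure 2.4 (mission, goals, outcomes, methods, results, and decisions) stay connected from one cycle to the next. The field names and the example program are assumptions, not a template prescribed by the book or by any good practice institution.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Outcome:
    statement: str                 # the intended result, written as a learning/development outcome
    methods: List[str]             # how evidence will be gathered
    results: str = ""              # what the evidence showed
    decisions: List[str] = field(default_factory=list)  # improvements, reallocations, new questions

@dataclass
class ProgramReview:
    program: str
    mission: str
    goals: List[str]
    outcomes: List[Outcome]
    cycle_year: int

review = ProgramReview(
    program="First-Year Seminar",
    mission="Support students' transition into the learning community.",
    goals=["Cultivate a sense of belonging", "Develop reflective learning habits"],
    outcomes=[Outcome(
        statement="Students will articulate two strategies for seeking academic support.",
        methods=["End-of-term reflective portfolio rubric"],
    )],
    cycle_year=2018,
)
print(review.program, "-", len(review.outcomes), "outcome(s) under review")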
The Purpose of OBPR
The purpose of OBPR is to engage in a systematic reflective methodology that ensures evidence of a learning organization and that organizational purpose is achieved while also assuring high achievement for all students (HAAS). This is also the purpose of institutional effectiveness. Suskie (2014) has argued that there is a lot to consider when organizational purpose involves the cultivation of so many human beings’ futures. As such, many good practice organizations publicly share purpose statements for their OBPR processes, such as North Carolina State University in the following statement approved by the faculty senate in 2005.
• The most important purpose of assessment is to improve programs systematically by monitoring student learning. We must also comply with accreditation standards, but it is more important to make it useful for the faculty.
• We are committed to making assessment meaningful to the faculty first of all. We want assessment to be meaningful at the college and university levels, too, but it’s more important that it be useful at the department level.
• We are also committed to making assessment as efficient as possible. Certainly, we don’t want the costs of assessment to swamp its value. That means that assessment should not detract from teaching or research or other core responsibilities. It also means that the University should look for efficiency in processes and staffing for assessment.
• Assessment should be faculty driven. Only a program’s faculty should decide what learning outcomes should be for that program, and they are in the best position to determine whether students have achieved those outcomes.
• Assessment is a collaborative activity among faculty, staff, and students. The most important role of faculty should be to determine outcomes, choose assessment strategies, evaluate results, and recommend actions. Wherever possible, staff should be used to carry out assessment strategies, e.g., constructing tools, collecting data, and preparing analyses for faculty consideration. If students are involved in the assessment process,
they need to see that their contributions to assessment are useful and valued.
• Measurable or observable outcomes must be established for every program, and achievement of those outcomes must be assessed regularly and continuously. For academic programs, assessment must include student learning outcomes. Inputs (curriculum, faculty qualifications, GRE scores, etc.), process measures (graduation rates, time to degree, etc.), and outputs (number of degrees awarded, etc.) provide valuable information, but are different from learning outcomes.
• We understand that assessment of learning is inexact and difficult. That means we sometimes have to settle for proxy measures, quasi-experimental designs, and qualitative designs. The results might be a little fuzzy, but as long as we don’t over-interpret them, and as long as the results are informative to the faculty, assessment can be meaningful. It also means that each department’s assessment process will evolve and develop based on what the faculty learn from prior assessments.
• We want assessment to contribute substantially to evidence-based decision-making. At the department level, assessment results can inform decisions about curriculum, pedagogy, support, and advising. At the college and university levels, summarized results can inform policy decisions.
• While the results should be considered, there should be no direct or formulaic translation of assessment results into resource allocations or personnel decisions. That is, there should be no automatic reward for good results or penalty for poor results.
• Assessment plans, processes, and use of results should be written, shared, and periodically evaluated.
• Outcomes assessment should be recognized and rewarded. (assessment.dasa.ncsu.edu/academic-assessment/undergraduate-academicassessment-process/guiding-principles)
In this statement, you can see that the institution is declaring student learning and development as a priority. That doesn’t mean that this organization isn’t paying attention to monitoring its business practices, research and grant productivity, as well as other facets of its learning organization; rather, it means that when it comes to how student experiences are evaluated, this statement implies that high-achievement student learning and development for all students is the focus of its OBPR. This is a good practice. Senge (2006) and Senge and colleagues (2012) posit that organizations committed to their own learning examine themselves by gathering data to inform improvements at all levels. For higher education and student learning and development, this would be examining all the levels where the student interacts within the organization. A learning organization also
Figure 2.5. OBPR can be implemented at multiple levels. (Levels shown in the figure: Federal; State; System/District; Institutional; Division/College; Department; Program; Course/Workshop/Service; Community Partners; Alumni/Parents/Guardians, etc.)
A learning organization also examines itself at all levels where any human being interacts with other human beings so that the organization can ensure the cultivation of human flourishing for all (Goleman & Davidson, 2017; Senge et al., 2012). In this book, the process of learning organization self-examination that we are explaining is OBPR. In OBPR, the data are gathered at the level where the student interacts with the organization, and those data then feed into the data gathered at the operational levels of the organization (where other human beings are interacting with each other). The whole point of this is to see how the entire organization is optimizing what it intends to create—high achievement of student learning and development for all students, among other aspects it values. See Figure 2.5 for an illustration of all the levels where data can be gathered—levels of organizational design where humans are interacting with other humans with the intention to optimize learning organizational purpose. The data gathered at all these levels must inform the choice of performance metrics the organization uses to transparently report accountability as to whether organizational purpose was achieved. The data can also inform the selection of variables that will be used in predictive analytics.

If that all sounds logical, then we need to define human flourishing. Why? Because learning organizations can't practice self-examination when the humans existing within them are not well resourced (Goleman & Senge, 2014; Senge et al., 2012). As such, consider that human flourishing is another way to view organizational success, and that OBPR is a methodology to demonstrate organizational accountability for ensuring human flourishing. According to the National League for Nursing (2014),
human flourishing is defined as an effort to achieve self-actualization and fulfillment within the context of a larger community of individuals, each with the right to pursue one’s own such efforts. It encompasses the uniqueness, dignity, diversity, freedom, happiness, and holistic well-being of the individual within the larger family, community, and population. Achieving human flourishing is a life-long existential journey of hopes, achievements, regrets, losses, illness, suffering, and coping. The nurse helps the individual to reclaim or develop new pathways toward human flourishing. (emphasis added)
Now, we invite you to reread this definition and replace the word "nurse" in the last sentence with your role within your organization. Does this definition represent organizational success? Does it represent a learning organization? Does it represent an organization healthy enough to provide evidence of high achievement of learning and development for all of its students? When you consider that you may have picked up this book to improve your organization using a specific methodology, we recommend you begin by asking yourself to what end you are improving your organizational learning. What is the whole purpose of the organization?

Learning organizations such as postsecondary educational institutions exist within a context of humans cultivating other humans' transformational learning and development experience (Bresciani Ludvik, 2016). We can focus on the performance indicators of how well we do that, such as levels of overall health and well-being of students, faculty, and staff; the percentage of and extent to which graduates have learned what was expected of them; refereed journal publications produced by faculty and students; number of performing art productions created; dollar amount of grants received; level of sense of belonging or social connection; fiscal health; community economic stimulation; community outreach services in place; degrees earned; or jobs graduates obtain. However, you can't know how to improve those performance indicators without gathering data at the level of the organization where humans are interacting with other humans to make accomplishing all of those outputs possible. It doesn't matter whether it is a human programming a computing network or a technological program that eventually interacts with humans or humans having direct face-to-face contact with each other. Regardless, we have to know how well it is all working and how it fits together to know how to influence those metrics, and that requires gathering data on all those levels—the kind of data that helps us understand human behavior and how to transform it as opposed to merely predicting behavior. It is also the kind of data that helps us understand operations and how to improve them. However, in learning organizations that intentionally seek to cultivate transformational learning, we can't improve if we don't collect the kinds of data that help us understand human learning and development transformation (Greer & Horst, 2014; Kline & Saunders, 2010; Marquardt, 2011; Senge, 2006; Senge et al., 2012).
Still, why the focus on human flourishing? Tu Weiming (1984) noted that human flourishing involved learning to become human. Learning to become human involved "a creative transformation . . . an ever-expanding network of relationships encompassing the family, community, nation, world, and beyond. . . . Self, far from being an isolated individual, is experientially and practically a center of relationships" (p. 243). Historically, higher education has either been about serving the public good—a philosophical approach where impact may be difficult to measure—or stimulating the economy—an approach where impact is easier to measure (Cohen & Kisker, 2010; Greer & Horst, 2014). Greer and Horst (2014) attest that stimulating the economy does serve the public good; however, it would not if it came at the expense of human well-being. Supporting this notion is research showing that if an organization focuses on how well it promotes overall well-being or human flourishing, then that organization will indeed succeed in a manner that is adaptable to the changing needs of the society or economy (Goleman & Davidson, 2017; Scharmer & Kaufer, 2013; Senge et al., 2012). It is a question of where the organizational leaders place their prioritized attention. If you picked up this book because you are concerned with restoring the value of the degree or the value of a postsecondary educational experience, then inviting your organizational leadership to discuss the role the organization has in promoting human flourishing is an important step.

In OBPR, the process of systematic reflective inquiry weaves together seemingly individual areas of operation within an organization to examine how these humans and processes are interacting and potentially promoting or interfering with producing desirable performance metrics. So, in your organization, if you want to ignore the cultivation of human flourishing, you can. And you will likely find the results of that in your OBPR data, as well as in your performance metrics (e.g., retention of faculty, staff, and students). However, if you want to pay attention to how your organization is cultivating human flourishing as a learning organization, then your data will likely reveal direction toward closing achievement gaps, but only if outcomes exist at the level where those students interact with your organization and assessment data are gathered that can reveal what is happening within that organization as humans interact with other humans. "Every system is perfectly designed to get the results it gets" (Carr, 2008, p. 1). You decide; the purpose of OBPR comes alive only when you have declared the purpose of your organization (at whatever level you may be) and then aligned the purpose of OBPR to it.
What is the purpose of your organization or your part of the organization? Do you exist to graduate students into already existing jobs? Do you exist to transform students into global citizens? Do you exist to produce cutting-edge new knowledge? If you can articulate your organizational purpose (regardless of where you reside organizationally), then we can define what a program is and provide you with the methodology to help your organization examine itself on several levels. If you can't articulate that, then we invite you to adopt human flourishing as the purpose of your organization. Why? If you adopt performance metrics as the purpose of your organization, such as "the purpose of our organization is to facilitate gainful employment," then the metric is driving the organization. Think of it this way: A car company can make the purpose of its organization to produce functional cars. The performance indicator is the number of cars produced. Or, it can make the purpose of its organization to produce high-quality, affordable, and reliable cars. It is still counting the number of cars it produces; however, that is no longer an indicator of its performance; quality, affordability, and reliability are.

There is rich literature about how humans learn, develop, and discover paths to meaningful and purposeful expressions of their lives in order to flourish in a profession (Astin, 1993; Bresciani Ludvik, 2016; Gardner, 1999; Goleman & Davidson, 2017; Herman & Hilton, 2017; Kuh et al., 2005; Kuh et al., 2018; National Research Council, 2000; Pascarella & Terenzini, 2005; Rendon, 1994; Senge et al., 2012; Terenzini et al., 1994; Tierney, 2000; Tinto, 1993; Upcraft & Gardner, 1989; Zelazo, Blair, & Willoughby, 2016; Zohar & Marshall, 2000; Zull, 2011). One form of measurement of human flourishing may be meaningful gainful employment. However, if you start with the metric as the purpose, you will need to constantly remind organizational members of the process that gets the organization to the metric. And you will need to constantly remind them of why collecting those other kinds of data that actually help you improve this indicator is important. If you make human flourishing your purpose, then you can adopt a number of measures and metrics that actually inform the improvement of the effectiveness of the entire process that cultivates human flourishing, crystallized intelligence and executive functions, and subsequently meaningful gainful employment. And that will ensure greater organizational buy-in because it will be meaningful to other humans within your organization.

Regardless of where you are in your organizational engagement of OBPR good practices, consider the following questions at all levels of your organization. Responding to these questions may help you reveal the purpose of your organization, regardless of the level at which your organization resides.
1. What are we intending to create here?
2. How well are we organized to create it?
3. What do we need to specifically do in order to create that so that all our diverse community members (students, faculty, and staff) can be successful?
4. How long do we think we will need to do what is necessary to create what we expect to create?
5. How will we know whether we have achieved the level of quality of the creation we intend for all?
6. How well will the ways in which we know what we have achieved help us improve how well we are organized, how we support those who are providing it, how we are resourced, or with whom we need to collaborate to improve?
7. How well will we be able to use that data to inform others of what we might need to improve the creation?
8. How well will we be able to assure those who have provided the resources for this creation that we are using those resources responsibly?
9. With whom else do we need to collaborate or communicate to make needed changes?
To ensure your ability to meaningfully apply these questions, refer again to Figure 2.5 for examples of levels of your organization. On which level do you have decision-making authority? As you examine each level, note that each of these levels will contain at least one or more of the following (the details of each are explained further in chapter 3):

1. Mission, goals, and purpose statements
2. Outcomes
3. Planning to be able to assure outcome achievement is completed for all community participants
4. Humans organized and educated/trained/nurtured/rewarded to assure outcome achievement
5. Resources allocated to assure outcome achievement for all
6. Data collected, analyzed, aggregated, and disaggregated by groups and subgroups of human beings and interpreted to determine whether outcomes have been achieved
7. Reflection and discussion of how those results inform organizational priorities and decisions
8. Following collaborative, reflective inquiry practice, action taken in the form of recommendations, changes, and/or requests for additional resources, policy changes, and so on in order to continue to assure outcome achievement or to attempt to meet it for the very first time for all
9. A public/transparent reporting out of what was learned within this organization about how well outcomes were achieved and what will be done to improve them if outcomes were not achieved at expected levels for all
Defining What a Program Is

Before moving to the next chapter, since this is OBPR, consider what a program might be for you. Many good practice institutions leave it to the members of their organization to determine how a program is conceptualized, and yet many define program as a collection of courses and required outside-of-classroom experiences that form a degree or an "institutional experience." Texas A&M University, for example, defines program for a service-providing area as one that "should identify the key functions of their office and measure the extent to which they are achieved" (assessment.tamu.edu/What-is-Assessment). You decide how to approach this, and make it clear for all of your community members.

In 2006, we noted that some service or programming departments may want to subdivide their programs by thematic ways of delivering the programs, such as leadership development and programming for the general student body. In other cases, administrative units that have to evaluate both programming and facilities use, such as student unions and residence halls, might find it useful to divide their program review processes to accommodate such varying roles and missions. However, some departments may want to incorporate all of these types of responsibilities into one program review, with one facet of the program and its outcomes dependent on the other. For example, a program may not be able to illustrate fully why its educational learning outcomes are hindered if it doesn't refer to limitations discovered in the facilities (i.e., classrooms, internship sites, technology, etc.) where learning is intended to occur or identify how organizational practices and policies hinder expected learning outcomes. It is also important to note any limitations in the assessment measures themselves.

As mentioned, some academic programs may choose to have a program review document for each degree program; others may want one for the entire department. Some institutions may consider general education a program, while other institutions will want general education to be included within a degree program's OBPR process because the manner in which students engage with general education may vary significantly by degree. Those institutions using predictive analytics may want to see outcomes-based assessment implemented for specific interventions and may also choose to have the interventions that particular groupings of students use encompassed into one OBPR process within a degree or transfer trajectory.
This is done particularly well at Guttman College through the use of their ePortfolios. Readers may also want to see how Texas A&M University is looking holistically at the student out-of-classroom learning experience using ePortfolios as well. Other institutions are studying particular sets of students' pathways to degree attainment and may consider each pathway to encompass one program review process. Again, there is no one "right" way. What is important is to make this process meaningful to your learning organization so that data can inform improvements and help the institution measure the extent to which it is achieving its purpose. As such, the program or departmental leaders need to engage in OBPR in a manner that makes the most sense to them. Therefore, one department may have three degree programs, but since they all share learning outcomes, they may choose to have only one OBPR document; another department may have as many OBPR documents as it has specializations. And still another may choose to use OBPR among all of its units and consolidate and roll up data from OBPR to inform a college- or university-wide process, ensuring leaders' ability to step back and fully examine capacity/inputs and overall experience in order to determine where policy, practice, and resource allocation changes need to occur. Your organizational leadership decides your organization's most meaningful approach.
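For readers who track OBPR results in spreadsheets or a simple database, the consolidation and roll-up idea described in the preceding paragraph can be sketched in a few lines of code. The example below is a hypothetical illustration only; the college, program, and outcome names and the counts are invented, and any real roll-up would follow your institution's own data definitions.

    # A minimal sketch of rolling program-level OBPR results up to the college level.
    # All names and numbers are hypothetical and serve only to illustrate the idea.
    from collections import defaultdict

    # Each record: (college, program, outcome, students_assessed, students_meeting_outcome)
    program_results = [
        ("College of Science", "Biology BS",   "Written communication", 120, 96),
        ("College of Science", "Chemistry BS", "Written communication",  80, 60),
        ("College of Science", "Biology BS",   "Quantitative reasoning", 120, 84),
        ("College of Arts",    "History BA",   "Written communication",  90, 81),
    ]

    # Aggregate by (college, outcome) so leaders can examine college-wide achievement
    rollup = defaultdict(lambda: {"assessed": 0, "met": 0})
    for college, program, outcome, assessed, met in program_results:
        rollup[(college, outcome)]["assessed"] += assessed
        rollup[(college, outcome)]["met"] += met

    for (college, outcome), counts in sorted(rollup.items()):
        rate = counts["met"] / counts["assessed"]
        print(f"{college} | {outcome}: {rate:.0%} of {counts['assessed']} students met the outcome")

The same pattern extends upward through the levels shown in Figure 2.5; the design choice is simply to keep program-level records intact so they can be disaggregated again when leaders need to ask finer-grained questions.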
Key Learning Points

1. OBPR is a systematic reflective inquiry process in which program faculty, college/university professionals, students, and concerned community partners and scholars collaboratively articulate the intended results of the cumulative contribution of their program(s). In outcomes-based assessment, faculty and professionals document what the program(s) intends to accomplish in regard to its services, research, student learning and development, and faculty/staff development programs. The faculty, professionals, and community members then purposefully plan the program(s) so that the intended results (i.e., outcomes) can be achieved; implement methods to systematically—over time—identify whether the end results have been achieved; and, finally, use the results to plan program improvements or make recommendations for policy, recruitment, retention, resource reallocation, new resource requests, or additional faculty/staff development to assure human flourishing for all. Results can also be used to provide evidence of a learning organization, explain institutional performance indicator results, identify whether predictive analytic decisions are wise and skillful or causing harm, inform new data needed for improving predictive analytic processes, and select new types of performance indicators. This systematic process of reflective evaluation is then repeated at a later date to determine whether the program improvements contribute to the intended outcomes as well as the selected performance metrics.
2. OBPR can occur on all organizational levels. Before proceeding, it is important to determine at which level you want to implement it and consider on which level(s) decision-making authority to improve the program resides.
3. The purpose of OBPR is to engage in systematic reflective methodology that ensures evidence of a learning organization and that organizational purpose is achieved while also assuring HAAS. It is a reflective, collaborative way to ensure institutional effectiveness.
4. Organizational leaders must clearly articulate the purpose(s) of their organization.
5. Because higher education primarily involves humans cultivating other humans' transformational learning and development experience, consider that one of the purposes of your learning organization is to foster human flourishing, which includes providing high achievement of student learning and development for all students and all service providers.
6. Organizational leaders must define what OBPR is to their organization as well as create a statement for its intended purpose (e.g., conceptual framework). Different definitions and perspectives will result in confusion. With varying discipline backgrounds, organizations must also invest the time to establish a common language—a glossary of terms to use across disciplines—to minimize confusion.
7. Organizational leaders at all levels must transparently convey the purpose of engaging in OBPR—to provide evidence that the organization is a learning organization committed to cultivating human flourishing for all.
8. Organizational leaders at all levels must transparently convey the results of their engaging in OBPR.
More Questions to Consider

There are several questions already posited in this chapter that we invite the reader to revisit. Chapter 5 also lists questions that are relevant to this chapter's content. Some additional questions are as follows:
1. What would it mean to your organization to commit to becoming or providing evidence that you are a learning organization? Who needs to make that decision?
2. What would it mean to your organization to commit to becoming or providing evidence that you are a learning organization committed to cultivating human flourishing for all your community members (students, faculty, staff, alumni, etc.)? Who needs to make that decision?
3. How might your organizational commitment to providing evidence that you are a learning organization influence how you define, explain the purpose of, design, and implement your OBPR process?
4. How might your organizational commitment to providing evidence that you are a learning organization influence your selection and use of predictive analytics and performance indicators?
5. How might your organizational commitment to providing evidence that you are a learning organization influence the manner in which you transparently report out the results of your OBPR process?
Notes

1. Institutional effectiveness is a college/university's ability to provide evidence that it is meeting the following responsibilities: (a) its articulated mission and goals; (b) high achievement of learning and development for all students; (c) stakeholders' needs; (d) the expectation to cultivate human flourishing while deploying resources effectively, prudently, and efficiently (e.g., stewardship); (e) contributing to the public good through creative expression and economic stimulation, as well as advancing new knowledge and/or creative solutions; and (f) demonstrating a measurable and actionable plan when evidence conveys it is not doing this at a high level of quality for all (adapted from Suskie, 2014, and other literature listed in Appendix A).
2. "KPIs are metrics that provide leaders information to evaluate an institution's success or its progress toward a strategic goal. Examples of KPIs for higher education are enrollment, retention, and graduation rates" (Association for Institutional Research, 2017).
3. "[The] use of predictive modeling and other advanced analytic techniques to help target instructional, curricular, and support resources to support the achievement of specific learning goals" (Bach, 2010, p. 2).
4. "Strategic planning is an organizational management activity that is used to set priorities, focus energy and resources, strengthen operations, ensure that employees and other stakeholders are working toward common goals, establish agreement around intended outcomes/results, and assess and adjust the organization's direction in response to a changing environment" (Balanced Scorecard Institute, 2017).
5. Action planning is "a sequence of steps that must be taken, or activities that must be performed well, for a strategy to succeed. An action plan has three major elements (1) Specific tasks: what will be done and by whom. (2) Time horizon: when will it be done. (3) Resource allocation: what specific funds are available for specific activities" (Business Dictionary, 2017). In OBPR, action planning also includes measurements to determine whether the action plan has successfully been implemented to achieve the outcomes it was intended to influence or a reference to ensure its efficacy is assessed in a future OBPR cycle.
6. An external review is intended to provide an objective outside (to the program and preferably institution) perspective regarding the quality and effectiveness of a unit's programs, services, resources, processes, and operations. It can also provide the institution with a comparative analysis/benchmarking of its performance with similar organizations or programs.
3

MORE WHY AND WHAT OF OUTCOMES-BASED ASSESSMENT PROGRAM REVIEW
If I had only one hour to save the world, I would spend fifty-five minutes defining the problem, and only five minutes finding the solution. (Attributed to Einstein, as quoted in Calaprice, 2010)
So far, we have defined OBPR and explained its purpose along with a few terms used within it. We have also described some of the context that drives OBPR—to demonstrate the existence of a learning organization, particularly one that promotes student learning and development and cultivates human flourishing. In this chapter, we unpack the components of OBPR; however, before doing so, we need to spend some more time exploring the context of American higher education so as to ensure your organizational commitment as a learning organization.
Why OBPR When We Can Just Use Predictive Analytics?

American higher education has been expected for decades to demonstrate evidence of how it delivers high-quality student learning for all the students it serves (Banta, Jones, & Black, 2009; Banta & Palomba, 2014; Bresciani, 2006; Bresciani et al., 2009; Kuh et al., 2015; Maki, 2010; Schuh et al., 2016; Suskie, in press). In addition, it is expected to produce cutting-edge research, as well as graduates who can resolve social problems of the world and/or secure gainful employment while earning enough money to repay their student loans in a timely fashion (Bok, 2013; Bresciani Ludvik, 2016; Kuh et al., 2015). Furthermore, postsecondary institutions are to do all of this with a balanced budget while generating additional resources that are not dependent on tuition and fees or state and federal funding sources.
Meanwhile, various stakeholders hold the belief that there is little value to a higher education degree while neuroscientists, psychologists, and educators attempt to explain the complexities of learning and development with emerging neuroimaging data, replicable behavioral tasks, and standardized inventories. If we add to this the demand from students that the intersecting of their characteristic identities (e.g., race, ethnicity, gender, sexual orientation, disability, etc.) be considered when planning for their educational pathways and evaluating the effectiveness of their learning and development, we can begin to clearly see that no one process can provide all the evidence needed to improve this industry we call higher education. As the popular saying goes, one size does not fit all.

One of the challenges that higher education has also experienced is "initiative fatigue" (Kuh et al., 2015; Suskie, in press). Initiative fatigue is the organizational experience of revolving-door leadership (at all levels) changing focus on which processes should be used to determine institutional effectiveness. In 2006, we dedicated a chapter to reviewing the history of many of these approaches as well as reviewing regional and professional accreditation demands. In this book, we do not update this chapter, particularly because we trust that history is now common knowledge and also because accreditation and quality assurance processes around the world have fallen under the accusation that organizational leaders are "gaming" accreditation and quality assurance processes. When an organization begins its OBPR process because its accreditation review is two years away, the integrity of a continuous improvement process is questioned.

We keep reminding the reader that no one process will provide your organization with the ability to demonstrate that you are a learning organization (Senge et al., 2012); rather, a learning organization is committed to cultivating the learning of all of its members in order to continually transform. What we offer here is an inquiry template with several questions for you to consider as you create or continue to refine your own learning organization processes that assure institutional effectiveness. Whether you begin with performance indicators or predictive analytics to discern where you need to implement meaningful OBPR, or whether you begin by using OBPR to inform what your performance indicators and predictive analytics should be, or to explain the reason for what they are, just begin or continue as the good practice institutions have (see Appendix B for a listing of good practice institutions). One thing that good practice institutions respect is that the industry of higher education must demonstrate effectiveness of purpose while also demonstrating that institutions are learning organizations, particularly in regard to ensuring that all students are performing at high levels of achievement.
Collectively, this is called institutional effectiveness (see endnote 1 in chapter 2 for a more comprehensive definition). Good practice institutions recognize that key performance indicators are used comparably to determine institutional effectiveness. They also acknowledge that understandable confusion resides around the meaning of the terms benchmark indicators, performance indicators, dashboard indicators, scorecard indicators, and predictive analytics. As such, just as it is important for the members of your organization to determine the definition of and purpose for OBPR (e.g., an explanation of why you are engaging in OBPR and what you hope to gain from it), as well as developing a common language for engaging in OBPR (e.g., define what each term used in the process means), it is important that the terms benchmark indicators, performance indicators, dashboard indicators, scorecard indicators, and predictive analytics are also defined. While good practice institutions define these terms (recall IUPUI's and Isothermal Community College's glossary), for this chapter, we will use benchmark indicators, performance indicators, dashboard indicators, and scorecard indicators interchangeably and refer to them as performance indicators. Dolence and Norris (1995) defined performance indicators as "measures that are monitored in order to determine the health, effectiveness, & efficiency of an institution" (p. 35).

For practical application of what this means, consider the following analogy. I used to drive a red Jeep Wrangler that only had three indicators on the dashboard: (a) the temperature gauge—an indicator of how hot or cool the engine was, (b) the speedometer—an indicator of how fast or slow the Jeep was going, and (c) the gas gauge—an indicator of how much fuel was in the tank. These three indicators were broad signals of three areas of Jeep performance. As such, they informed some decisions I could make in order to optimize the performance of the Jeep. For example, if the gas gauge became low, I would determine that the Jeep needed more fuel, and I knew how to respond to that. However, I learned through trial and error by recording some data points in a notebook which kind of fuel provided the most optimal performance for my Jeep. I also learned that the Jeep ran out of gas before the indicator actually recorded the fuel as empty. So, the decision I would make was to put more fuel in the gas tank when the indicator reached one-quarter full as opposed to one-eighth full. This was true about my Jeep Wrangler, but I don't know that this is true for all Jeep Wranglers because I never collected the data for anyone else's Jeep. Taking this analogy further, if the temperature gauge indicated the engine was hot, I needed to make a decision to take the Jeep to a mechanic who would lift the hood and conduct diagnostics in order to determine why the Jeep was no longer performing well. Often during this process of gathering additional data, the mechanic would discover other things that were not performing optimally for which decisions needed to be made—things about which there were no dashboard indicators to provide early warnings.
Outcomes-based assessment is like the diagnostics that the mechanic runs. In essence, we needed more data than the dashboard indicators could provide in order to make decisions about how to get that Jeep to perform at its most optimal levels. Outcomes-based assessment done well provides us with the kind of data that informs decisions for how to empower our students or our colleagues to "perform" optimally. It might also reveal things we never even thought to explore that are very useful in advancing all students' learning and development. Cars now have much more sophisticated dashboard indicators, and a growing number of institutions are using predictive analytics to determine optimal points in time to intervene and influence student and institutional performance; yet without outcomes-based assessment, we risk failing to gather the kinds of data that will ensure an understanding of what is actually going on in any one learning and development intervention for a particular group of students, whether it is a statistics course or a workshop intended to cultivate students' attention regulation.

For all four of the performance indicator–like terms (benchmark indicators, performance indicators, dashboard indicators, and scorecard indicators), members of the public often assume that the data used (e.g., indicators) to inform decisions are comparable across programs and comparable across institutions. If the data are comparable across programs, then it may be that the institution is using outcomes-based assessment measures (e.g., rubric scores, specific institution- or system-designed tests and inventories) and/or the results from some programs to predict success for other students who have similar characteristics and are also participating in similarly designed programs. While it may be efficient for an institution to engage in this predictive analytic practice, the organization must pause and question how the use of predictive analytics is transforming students and serving students in a way that cultivates human flourishing for all. For instance, is the predictive analytic process assuming that other variables involved in learning and development can be constantly and consistently controlled? In essence, are the predictive analytic processes removing the human being and all of the human's unpredictable ways of interacting from the analysis? If so, the organization will be creating systems that are perfectly designed for some students to succeed and others to fail.

For another analogy, consider this example. My father is a retired biochemist. When I began engaging in educational research, he would understandably often question my research methodologies. One night, we had a phone conversation after he came home from work. He was clearly frustrated with one of his lab assistants. The lab assistant had mixed up some chemicals and was unsure which chemicals had been placed into which sets of Petri dishes.
My father had been engaged in some complex studies to determine how certain combinations of chemical interactions would influence a particular strain of bacteria. And now, the very controlled experimental environment no longer had its controls in place. His costly experiment was ruined because of the choices of one human being. On top of it, he was experiencing intense stress over how he was going to explain such a huge investment of time and money with no results to show. My heart could really empathize, and I thought quietly, "Welcome to my world." There are very few controls when humans are cultivating other humans' education.

When we say that all four of these performance indicator terms assume that the data used to inform decisions need to be comparable across programs and comparable across institutions, we mean that the instruments used in data collection must be the same or the definitions for what is being collected must be the same even though the ways all these human beings interact to influence specific student learning and development will be different. For example, James Madison University has worked diligently with faculty to develop discipline-specific tests as well as competency-based tests that are used across the institution. In addition, they have published their findings in refereed journal articles. These articles can be found on their website (see Appendix B). Sinclair Community College uses rubrics across their programs to assess common learning in their general education competencies (see www.sinclair.edu/about/offices/provost/assessment-of-student-learning/general-education-outcomes-assessment/). And the University of Wisconsin at Whitewater is working with faculty to align their rubrics to the Association of American Colleges & Universities' (AAC&U) Liberal Education and America's Promise (LEAP) Outcomes (www.aacu.org/sites/default/files/files/LEAP/EssentialOutcomes_Chart.pdf) for comparability use. For good practice, it is imperative that comparable data are collected at similar points in time, such as collecting learning and development outcomes during finals week or term-to-term persistence data with the enrollment figures after the drop/add deadline. It is also good practice to use similar definitions such as the IPEDS definition of first-time, first-year students. Many institutions such as James Madison University, University of Wisconsin at Whitewater, and IUPUI have also implemented assessment days to collect comparable direct evidence1 of student learning using tests and rubrics. Without OBPR to accompany these kinds of comparable data reporting, these good practice institutions would not be able to explain the subtle or not-so-subtle differences in what they discover.

Using comparable data gets complicated in a discussion about ensuring that all students succeed at a high level, as we never know which identity or which intersecting of characteristic identities (e.g., race, ethnicity, gender, sexual orientation, disability, etc.) is most prominent for that student at the point of data collection.
For example, the National Center for Education Statistics (Musu-Gillette, 2016) reported that the 2013 "6-year graduation rate for first-time, full-time students was highest for Asian students and students of two or more races (71 percent and 68 percent, respectively), and lowest for Black and American Indian/Alaska Native students (41 percent each)" (p. v). However, we have no idea about how the socially constructed definition of race is influencing the performance indicator of graduation rate without collecting more information. Well-facilitated OBPR can help to provide more information. But first, it may be useful to understand graduation rate by race, ethnicity, gender, first-generation status, and so on; it is even more meaningful when reported by the intersecting of those characteristic identities (e.g., a performance indicator informed by characteristic identities). For example, Fletcher and Tienda's (2010) research in Texas discovered how high school quality increases race and ethnic inequality in postsecondary achievement. Texas A&M University could then use that information to assess how well interventions designed to support students matriculating from low-performing high schools were working as designed. These assessment data became even more meaningful when rolled into OBPR, where students' pathways could be examined to determine what other factors (specific types of degrees, course enrollment trends, student organization involvement, sense of belonging, meaning-making, etc.) were influencing students' success. The useful dashboard indicator (e.g., graduation rates by the intersecting of race and gender and other characteristics and experiences) requires outcomes-based assessment to determine what is happening within that organization in a manner that can inform improvements so that all students can achieve high levels of success.

Now, in order to connect OBPR data to performance indicator data, program outcomes must be aligned with performance indicators. In a moment, we will illustrate how good practice institutions do that with a very basic template. For now, consider that this also means we need to examine the feasibility of performance indicators to inform improvements and, as such, your institution may need to consider using new data collection methods. Appendix C provides some examples of performance indicators that can be used in a student learning and development HAAS2 OBPR approach if the student identifier data have also been collected in a trustworthy and reliable way (student identification card tracking systems can make this possible). In addition, we posit some learning outcomes measures that can also be used for comparable performance indicator data and that we hope institutions will seriously consider in order to cultivate human flourishing and HAAS.
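As one concrete, hedged illustration of what reporting a performance indicator by the intersecting of characteristic identities might look like in practice, the sketch below disaggregates a six-year graduation flag across combinations of identity fields. The column names (race_ethnicity, gender, first_gen, graduated_6yr) and the tiny sample are hypothetical placeholders; an institution would substitute its own validated student-record fields.

    # Hedged sketch: disaggregating a graduation-rate indicator by intersecting identities.
    # Column names and records are hypothetical placeholders, not real institutional data.
    import pandas as pd

    students = pd.DataFrame({
        "race_ethnicity": ["Black", "Black", "Asian", "Latinx", "Latinx", "White"],
        "gender":         ["Woman", "Man",   "Woman", "Woman",  "Man",    "Man"],
        "first_gen":      [True,    True,    False,   True,     False,    False],
        "graduated_6yr":  [1,       0,       1,       1,        0,        1],
    })

    # Group by the intersection of identities rather than one identity at a time,
    # so the indicator reflects intersecting experiences instead of broad averages.
    disaggregated = (
        students
        .groupby(["race_ethnicity", "gender", "first_gen"])["graduated_6yr"]
        .agg(n="count", grad_rate="mean")
        .reset_index()
    )
    print(disaggregated)

Such a table is only a starting point; as this chapter argues, the OBPR evidence gathered where students actually interact with the organization is what explains the differences the table reveals.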
In order to improve organizational performance for all students, we must also understand individual student experiences. Good practice institutions do not ignore this type of data even though they may not transparently share their disaggregated data. Well-executed OBPR can provide meaningful data if data outliers3 are taken into consideration to inform decisions as well. For example, borrowing a modified dialogue example from San Diego State University, if an institution is losing 13% of its commuter Latinx first-generation students in the first year, understanding those students' experience is obviously important. That means if you only have 1 commuter Latinx first-generation student in your class and that student is not learning what he or she needs to learn—particularly in comparison to what the rest of the students are learning—then we have to see what decisions need to be made for that student. However, organizationally we are often looking at acceptable performance as an 87% pass rate. That means we are indicating to the other 13% (among whom there may be 1 commuter Latinx first-generation student in this particular example) that it is acceptable to this institution that they fail; however, at San Diego State University, that is not acceptable. Again, a performance indicator informs you of a problem that you then need to examine on a finer, more granular level. That takes time, and most faculty and administrators report that they don't have time to accommodate all of their students' needs. Yet at San Diego State University, the instructors who serve these commuter students in their university seminar class have engaged in a conversation that is informed by OBPR data to figure out how to better serve all their students, especially the ones that appear in reports as data outliers.
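To make the arithmetic behind that dialogue concrete, the short sketch below shows how an "acceptable" aggregate pass rate can coexist with complete failure for a very small subgroup, which is why outlier records deserve attention rather than being averaged away. The section size and counts are invented for illustration only.

    # Hypothetical illustration: an aggregate pass rate can hide a one-student subgroup.
    section_size = 30
    passed_overall = 26              # 26 of 30 students pass the course section
    commuter_first_gen_students = 1  # one commuter, first-generation student in the section
    commuter_first_gen_passed = 0    # and that student did not pass

    aggregate_rate = passed_overall / section_size
    subgroup_rate = commuter_first_gen_passed / commuter_first_gen_students

    print(f"Aggregate pass rate: {aggregate_rate:.0%}")  # roughly 87%, looks acceptable
    print(f"Subgroup pass rate:  {subgroup_rate:.0%}")   # 0% for the student the aggregate hides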
What Are the Documentation Components of an OBPR Plan and Report?

In 2006, many good practice institutions did not readily make their OBPR plan and reporting templates publicly available. Today, there is a wealth of resources available online, much of which can be accessed via NILOA at www.learningoutcomeassessment.org/CaseStudiesInstitutions.html or at Assessment Commons, located at assessmentcommons.org. As such, in this section, the intention is to provide one location where an organization can examine some aspects of a combined template to determine which pieces it may find useful to replicate and which ones it would prefer to adapt.

As we conducted our online research, we noted that many institutions utilize tables and word documents, such as the examples contained in this chapter, or they collect data utilizing outcomes-based assessment management technology such as Blackboard (help.blackboard.com/Learn/administrator), Campus Labs (www.campuslabs.com), TaskStream/TK20 (www1.taskstream.com/testimonial/solutions-testimonial-wesleyan/), TracDat (www.nuventive.com/nuventive-improvement-platform), or WEAVE Online (weaveeducation.com), to name a few.
Sacramento State University uses SharePoint to collect assessment plan data as well as report data, while other good practice institutions have designed their own technological means to collect evidence of OBPR, manage OBPR processes, and use that information for decision-making. Similar to any technology, no one system will be met with approval by all users, especially those users with very specific professional accreditation reporting needs, so we caution readers to avoid the temptation of adopting a system simply because a good practice institution may be using it. Rather, just as the good practice institutions have done, we encourage you to engage with the community members who will be using the technology to determine which system may be best for you, if you choose to use one at all.

Before we outline what good practice institutions document to demonstrate that they are a learning organization that is engaged in meaningful inquiry to inform how to improve and expand what they intend to create, the consideration of how often a program should document this process comes to mind. Many good practice organizations have published time lines for when programs or departments are required to submit their documented OBPR information. In chapter 5, we provide some questions for your organization to consider in determining how frequently you would like to see programs submit their documentation so that leaders at all levels can dialogue on how to support that particular program in its improvement process. Some good practice institutions align the frequency of their OBPR process to particular professional accreditation review time lines and others do not. Some good practice institutions allow their management to guide the frequency of the review process so as to align it with budgeting and strategic planning processes and others do not. Some good practice institutions implement an annual reporting process (that often includes outcomes-based assessment results) that is then rolled into a five- or seven-year program review and others do not. The point is to make your time line for OBPR review meaningful and transparent and to make the reasons for the selected time line clear.

The following is a meta-synthesis of the specific OBPR components that good practice institutions document. Some institutional examples are utilized to illustrate what some components look like. We invite you to remember that some good practice institutions collect more data than what is shown in the following, while others document less. Still others name these components something different than what is listed. In addition, some programs within good practice institutions may vary on their documentation practices based on their professional accreditation reporting requirements.
Our goal here is to illustrate each component with as much clarity as possible so that readers can adopt and adapt this documentation template for meaningful use within a particular organization.
Program Name

As we mentioned earlier, it might be difficult to determine the entirety of a program. See chapter 2 for examples of how to proceed here. Keep in mind that many institutions, particularly those engaged in pipeline or pathways projects,4 may be combining many service-providing areas into one degree-attainment pathway program review. Others may want to include only areas over which they have direct decision-making authority. In addition to the program name, include the primary contact information of the person or people who can answer questions about the OBPR plan and report. Some good practice institutions invite programs to list all the people involved in the OBPR process in an appendix to demonstrate how shared the inquiry process has become.
Program Mission or Purpose

List the program mission or purpose statement. Explain how this program mission or purpose aligns with the mission of the department, college, division, or university where it is organized. For many programs, articulating a mission or purpose statement is the first step in the process of being able to articulate outcomes. If an organization can articulate a general and brief description of what the program is about, that description may help in the articulation of general goals and, later, more specific outcomes (Bresciani et al., 2004; Bresciani et al., 2009). Doing so will also help explain how the program aligns with institutional values and priorities. Another way to do this is to simply provide an alignment matrix that illustrates these connections. Appendix D is an example from Oregon State University (OSU) that illustrates one way in which a program can align its program outcomes with higher level department and institutional learning goals.
HAAS Statement

Indicate how this program has been designed to advance high achievement for all students and list the performance indicators (including college/divisional-level and institutional indicators—see Appendix C for ideas) that will be used to demonstrate advancement of HAAS. Approach this as if it is your "elevator statement."5 Be sure to include the actual performance indicators that will be used to measure HAAS. If the current program shows no evidence that it is demonstrating HAAS, then this section will be brief, as you will be describing how the program could shift its outcomes and design in order to demonstrate that it is contributing to high achievement goals and performance indicators for all the students it serves.
An example from San Diego State University reads as follows:

The overall HAAS goals of the SDSU Strategic Plan and the Commuter Success Pathways are to increase the academic success (cumGPA) and persistence of commuter students. The primary program goals of the USEM course are to foster sense of belonging and community within students as well as nurture overall well-being, as doing so should promote the achievement of both high achievement goals (cumGPA and persistence). . . . If the course is meeting these outcomes, it should be supporting the academic success and persistence of commuter students by cultivating sense-of-belonging and overall well-being and thus contributing to the achievement of the higher-level goals outlined in SDSU's strategic plan. We can comparably measure sense of belonging using Hoffman's Sense of Belonging scale and overall well-being using the WEMWBS scale. (See Appendix C for scale citations.)
Descriptive Overview

This section is a succinct overview that describes the program being assessed and introduces any learning, development, and engagement theories that undergird the program goals and outcomes. In addition, the manner in which these theories connect to advancing HAAS is briefly described. This section also describes a brief history of the program and introduces other information indicating why the program exists and what it is intended to accomplish. This overview may also encompass a vision statement about why the program came into being and/or market research, needs assessment, or needs analysis on the importance of the program's existence. Furthermore, this is the place to indicate how the program mission, purpose, goals, and outcomes were derived. For instance, one can state whether professional association or other types of studies were completed about future needs that informed why this program has the outcomes it has, or whether community partners or other constituents were consulted in order to create the program's mission, goals, and outcomes or specific design. This is also a relevant place to list assumptions about the program, as in, "It is assumed that when each workshop or course is offered, even though the workshop facilitator changes, the outcomes will be the same." Many programs will refer to information that already exists on program websites, citing relevant URLs. Some of the information included in this section may circle back into discussion when the results of the OBPR are analyzed and interpreted for decision-making.
For example, some institutions require labor market analytics or needs analysis to inform their program review processes. The questions that speak to the intention of using comparable data such as these may be missing, so they should be included in the descriptive overview or in the discussion of findings and recommendations section itself. For example, what is the guiding question in your program review process that invites this kind of comparison? Are you inviting program directors to compare their own graduate job placement rates with the region's, nation's, or international historical data and/or a consulting company's trend projections? Or are you asking program directors for a market analysis of the region, nation, or world to demonstrate the need for the program being offered by the time students graduate? And then, is the information used to assess how well students are providing what the market analysis says is needed in the way of learning outcomes demonstrated by graduates? Is the academic program using these data to determine how well the program is preparing graduates for jobs that the market analysis is demanding? Or are the career services offices using these data to determine how well they are preparing graduates to secure jobs? Or both? Or none of the possibilities mentioned here?
Program Goals

Program goals help program faculty and administrators focus on delivery of the program. This may aid in both planning and assessment as program faculty and administrators can identify how they intend to achieve their program mission through various steps and methods. Program goals can also capture the vision and/or value statements that inspire us to do what we want to accomplish (Banta & Palomba, 2014; Bresciani et al., 2004; Bresciani et al., 2009; Suskie, 2009). Broad statements such as "students will be exposed to opportunities to appreciate the arts and literature" or "students will value diversity" embody what we want to have happen as a result of the learning in the program; thus, we need to make sure students have had opportunities to learn about art and literature and how society benefits from art and literature. We cannot necessarily identify "appreciation" or "value," but this is not the level where we need to be able to distinguish that type of detail. Likewise, goals help researchers make statements such as "to provide cutting-edge discoveries in the area of blood cell tissue regeneration." Goals can provide performing artists with the opportunity to generate statements such as "to improve understanding of how the arts cultivate desired life skills." Such goals help the decision-makers to design a lab that will be conducive to this kind of discovery or a learning experience that will cultivate expected learning. But to determine whether discoveries have actually been made or whether the lab is properly equipped and staffed, the faculty and staff teams need to articulate outcomes to identify whether the goals have been met and to implement the means to evaluate these outcomes.
Goals are broad, general statements of what the program expects participants to be able to do or to know. Generally, goals are not directly measurable. Instead, they are assessed by measuring specific outcomes derived from the goals (Banta & Palomba, 2014; Bresciani et al., 2004; Bresciani et al., 2009; Suskie, 2009). The alignment of each program goal to department, college, division, and university goals and performance indicators, which include HAAS indicators for all students, strategic initiatives, and typical performance indicators, assists with the communication of priorities and allows programs to illustrate how they are operating according to stated priorities. As such, provide clear alignment of each program goal to department, college, division, and university goals, or strategic initiatives. In addition, the alignment of each goal with professional accreditation standards, if applicable, allows you to determine how this program may meet higher level organization goals and strategic planning initiatives, so clearly articulate that alignment if it exists. Once data are collected on the outcomes that are aligned with these goals, you can describe the extent to which the program is contributing to meeting higher level organization goals and strategic planning initiatives. Goals should also be connected to the program mission.

Box 3.1 provides an example of all the documentation components that need to be aligned within OBPR in order for it to be used to inform performance indicator data interpretation and usage. It also provides a simple example of one program outcome that is aligned with higher level strategic initiatives and performance indicators, as it lists those in parentheses following the outcome. An example of another way to document this alignment comes from San Diego State University. The example represented in Table 3.1 also demonstrates outcome-to-goal alignment, which is further explained in the next section.
Alignment Is Important
• Outcomes to goals (state, institutional, division, program) including high achievement for all student performance indicators or predictive metrics; for example, “students will self-report improved overall well-being (SDSU strategic goal #3, WEMWBS score, GPA, time-to-degree)”
• Evaluation methods/criteria to outcomes
• Results to outcomes
• Decisions to outcomes
• Resource requests to outcomes
TABLE 3.1
Goal, Outcome, Performance Indicator Alignment Table

Row 1
SDSU High Achievement Goal: Increase academic success of commuter students (SDSU Strategic Plan Initiative 3.5)
Department/Program Goal(s): Foster development of sense of belonging in students; improve self-reported comfort levels in seeking help from faculty; improve overall well-being
USEM Learning Outcome(s) (students who have completed USEM will): 1. Self-report having developed high quality personal relationships with the USEM course instructors; 2. Explain how to seek help from faculty outside of class; 3. Explain how they leveraged SDSU community resources to manage their overall well-being
HAAS Performance Indicator(s) Compared by Intersecting of Characteristic Identities: 1. Academic probation rate after first semester; 2. First semester GPA; 3. Hoffman Sense of Belonging Scale score; 4. WEMWBS scale score
Direct Outcomes Measures: 1. Two-minute reflective journal prompt A; 2. Faculty Interview Completion Sheet; 3. Scavenger Hunt

Row 2
SDSU High Achievement Goal: Increase academic persistence of commuter students (SDSU Strategic Plan Initiative 3.6)
Department/Program Goal(s): Foster development of sense of belonging in students; improve self-reported comfort levels in seeking help from faculty; improve overall well-being
USEM Learning Outcome(s) (students who have completed USEM will): 1. Self-report having developed high quality personal relationships within the USEM course; 2. Explain how to seek help from faculty outside of class; 3. Report how they leveraged the SDSU community to manage their overall well-being
HAAS Performance Indicator(s) Compared by Intersecting of Characteristic Identities: 1. Continuation rate from fall to spring semester; 2. Hoffman Sense of Belonging Scale score; 3. WEMWBS scale score
Direct Outcomes Measures: 1. Two-minute reflective journal prompt B; 2. Faculty Interview Completion Sheet; 3. Planner entry and corresponding SSM reflection

Note. Reprinted with permission of San Diego State University
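The alignment illustrated in Box 3.1 and Table 3.1 can also be kept in a structured, machine-readable form so that it is easier to query once performance indicator data arrive. The short Python sketch below is one hypothetical way of doing so; the field names and records are illustrative stand-ins drawn loosely from the examples above, not a prescribed schema or an actual SDSU system.

```python
# A minimal, hypothetical way to record outcome-to-goal-to-indicator alignment
# so that it can be filtered and reported on later. All names are illustrative.

alignment = [
    {
        "outcome": "Students will self-report improved overall well-being",
        "program_goal": "Improve overall well-being",
        "institutional_goal": "SDSU strategic goal #3",
        "performance_indicators": ["WEMWBS score", "GPA", "time-to-degree"],
        "direct_measures": ["Two-minute reflective journal prompt A"],
    },
    {
        "outcome": "Students will explain how to seek help from faculty outside of class",
        "program_goal": "Improve self-reported comfort levels in seeking help from faculty",
        "institutional_goal": "SDSU Strategic Plan Initiative 3.5",
        "performance_indicators": ["First semester GPA", "Academic probation rate after first semester"],
        "direct_measures": ["Faculty Interview Completion Sheet"],
    },
]


def outcomes_aligned_to(goal_label: str) -> list[str]:
    """Return every outcome aligned with a given institutional goal or initiative."""
    return [row["outcome"] for row in alignment if row["institutional_goal"] == goal_label]


print(outcomes_aligned_to("SDSU strategic goal #3"))
```

When results arrive, the same records can be extended with fields for criteria, results, and decisions, mirroring the alignment bullets in Box 3.1.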
Outcomes

Outcomes statements are the point at which the mission and goals come to life. Good practice institutions specifically describe the result of the program activities for enrollment, research, faculty development, service, and in-class and out-of-class initiatives, and many make these outcomes publicly visible on their websites (see Appendix B). Outcomes are more detailed and specific statements derived from the goals. Outcomes describe what programs expect the end result of their efforts to be. For example, what does program leadership expect participants to know and do as a result of participants completing a series of courses, a workshop, a sequence of coaching sessions, one-on-one advising sessions, or a Web-based training series? Outcomes do not describe what the program does to the participant, but rather how the program expects the participant to demonstrate what he or she knows or can do (Banta & Palomba, 2014; Bresciani et al., 2004; Bresciani et al., 2009; Suskie, 2009).

In this section, identify students’ or participants’ learning and development outcomes as well as other program outcomes that address student services, program processes, enrollment management, research, development, HAAS, alumni outreach, and other organization services and practices. In addition, align each outcome with a program goal. This alignment allows you to link your outcomes to department, college, division, and university goals; high-achieving performance indicators for all students; and strategic initiatives and performance indicators as well as professional accreditation standards (see Box 3.1). Such alignment allows you to determine how this program intends to meet higher level organization goals and strategic planning initiatives. Once data are collected on the outcomes that are aligned with these goals, you may develop insight into how a program is contributing to meeting higher level organization goals and performance indicators (including HAAS goals and indicators, strategic planning initiatives, and predictive analytic metrics), because you have previously completed the alignment process. You should also be able to describe how the outcomes align with the program mission and goals.
Planning for Delivery of Outcomes/Outcomes-Alignment Matrix

In the cases where good practice institutions ask programs to outline how services or a curriculum are delivered, these institutions demonstrate the link of planning to OBPR. Linking planning for the delivery of a program to
planning for its evaluation helps faculty and staff to further identify the value of the OBPR process as well as providing learning organization evidence. In articulating how their program is delivering the intended result, faculty and staff can better identify what that result should be based on their delivery methodology, or they can change their delivery method to better ensure reaching the intended outcome. They can also identify whether they can expect to see the expressed outcome based on whether they are intending to deliver the outcome, and they can identify opportunities inherent in the delivery where evaluation methods can be embedded, thus making the process more manageable. In this section, you describe or illustrate (using tables, diagrams, or figures) how the program leadership expects the participants to learn what they are expected to learn in order to meet an outcome. Does the program leadership expect participants to learn what they are expected to learn through courses and a series of out-of-classroom learning opportunities such as workshops, one-on-one consultations, a website, a webinar, high-impact practices, or through other mechanisms? Planning for delivery of outcomes involves indicating all of the ways that the program leadership provides participants with the opportunity to achieve the learning outcomes. Developing and including a curriculum alignment matrix or an outcome delivery map may help illustrate planning for delivery of outcomes. When you identify where an opportunity for the outcome to be taught or for the service to be delivered resides, you can better determine whether the outcome will be met given the opportunities provided to participants. This process of outcome delivery planning also ensures that program leadership provides opportunities for the outcome to be learned or met, rather than just expecting it to be achieved without intentional action planning. Identifying where outcomes are being taught or delivered also provides opportunities to identify where the outcome may be evaluated. Following are two examples of curriculum alignment matrices using different approaches. The first one is from a psychology program at the University of St. Thomas (see Table 3.2). Here you can see where faculty expect to introduce, reinforce, and then assess the student learning in this program. Table 3.3 is an example from a biology program at the University of Hawai‘i-Mānoa where Xs are used, yet there is still notation of where evidence will be collected and analyzed for use in program review. Another example can be found in Appendix E from Hagerstown Community College.
TABLE 3.2
University of St. Thomas Student Learning Outcomes Curriculum Map: Psychology BA Program

Program learning outcomes mapped in the matrix:
Outcome 1: Demonstrate a broad and general knowledge of psychology, including its history, methods of study, major applications, and ethical issues.
Outcome 2: Demonstrate ability to write a research paper that provides a coherent summary and analysis of empirical literature devoted to a psychological research question.
Outcome 3: Demonstrate ability to conduct appropriate psychological research to investigate problems related to behavior and mental processes.
Outcome 4: Demonstrate an understanding of how psychology can be applied to enhance human welfare, promote social justice, and appreciate human differences.

Requirements mapped against these outcomes include the required PSYC 111; allied requirements in MATH, STAT, and BIOL; developmental courses (PSYC 200, PSYC 202, PSYC 203, PSYC 204); biopsychology courses (PSYC 206, PSYC 207); social courses (PSYC 121, PSYC 151); cognitive courses (PSYC 315, PSYC 400); clinical courses (PSYC 301, PSYC 302, PSYC 428); electives; and laboratory courses (PSYC 212, PSYC 422). Each cell of the map indicates where an outcome is introduced, reinforced, mastered, and assessed across these requirements.

Legend: I - Introduced; R - Reinforced; M - Mastered (or emphasized); A - Assessed

Note. “Introductory level: Students are introduced to knowledge and skills and are able to remember and understand what they have learned. Reinforcement level: Students practice through activities that help them learn how to apply their learning or skills. Mastery level: Students are able to integrate the knowledge and skills in order to synthesize, evaluate, and create new ideas or products. Many program student learning outcomes may be assessed through the same project or measure. Assessed: To gather evidence of student learning.” (Retrieved from https://www.stthomas.edu/accreditationassessment/assessment-best-practices/curriculummapping/)
Note. Reprinted with permission of the University of St. Thomas.
TABLE 3.3
Curriculum Map Example: Excerpt From a Sample Biology Program at the University of Hawai‘i-Mānoa

The excerpted map lists the program requirements (BIOL 101, BIOL 202, BIOL 303, BIOL 404, and an exit interview) as rows and four outcomes as columns: apply the scientific method; develop laboratory techniques; diagram and explain major cellular processes; and awareness of careers and job opportunities in biological sciences. An X marks each requirement in which an outcome is addressed, and an X** marks the points (for example, in BIOL 303, BIOL 404, and the exit interview) at which evidence is collected for program-level assessment.

** = collect evidence for program-level assessment from students in this course
Note. Reprinted with permission of the University of Hawai‘i at Mānoa
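For programs that maintain a curriculum map like the two shown here in electronic form, a small script can flag any outcome that is never assessed anywhere in the sequence. The Python sketch below is a hypothetical illustration that borrows the I/R/M/A coding from Table 3.2; the course labels and cell values are made up and are not drawn from either institution's actual map.

```python
# Hypothetical curriculum map: course -> {outcome: codes}, using the
# I (introduced), R (reinforced), M (mastered), A (assessed) convention.
curriculum_map = {
    "Course 101": {"Outcome 1": "I", "Outcome 2": "I"},
    "Course 202": {"Outcome 1": "R", "Outcome 2": "R", "Outcome 3": "I"},
    "Course 303": {"Outcome 1": "M,A", "Outcome 3": "R"},
    "Capstone":   {"Outcome 2": "M,A", "Outcome 3": "M,A"},
}

all_outcomes = {"Outcome 1", "Outcome 2", "Outcome 3", "Outcome 4"}


def courses_with_code(code: str) -> dict[str, list[str]]:
    """For each outcome, list the courses in which it carries the given code."""
    hits: dict[str, list[str]] = {outcome: [] for outcome in all_outcomes}
    for course, cells in curriculum_map.items():
        for outcome, codes in cells.items():
            if code in codes.split(","):
                hits[outcome].append(course)
    return hits


assessed = courses_with_code("A")
for outcome in sorted(all_outcomes):
    if not assessed[outcome]:
        print(f"{outcome} is never assessed anywhere in the map; revisit the delivery plan.")
```

Checks like this do not replace the dialogue behind the map, but they make gaps easy to spot before the evaluation plan is finalized.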
Evaluation Methods and Tools

This section includes a detailed methodology to evaluate each outcome. While OBPR is not research, it utilizes many of the methodologies that educational researchers use. And similar to research, having more ways in which you can assess each outcome means you will have more information to really understand what is going on. Good practice institutions use both direct and indirect methods to evaluate their outcomes. Banta and Palomba (2014) define direct evidence as methods of collecting information that require the students to demonstrate their knowledge and skills and indirect evidence as methods that ask students or someone else to reflect on student learning. Using Peter Ewell’s (2003) definitions of direct and indirect evidence and applying them to examples for administrative program review, indirect evidence would be the type of data collected through means that are designed to gather evidence, such as surveys. Direct evidence is more inherent to the function one is evaluating, such as tracking and reading Web use patterns when students register for courses or analyzing the dialogue of small groups who are discussing a case study.

While both methods have value when assessing outcomes, some may be more revealing about how the outcomes were met or not met and may thus lead to more informed decision-making. For example, imagine that a program selects an indirect outcome of “increasing retention rates by 4%.” Upon measuring that outcome, it learns
that retention rates of students who participated in the program did not increase by 4%. What do the program faculty and administrators do with that information? How do they improve their program? In contrast, the program administrators may have chosen to engage in direct evaluation methods for outcomes, such as using a scavenger hunt (either as a digital game or on-campus activity) to see whether students can identify two separate offices offering academic support based on specific case studies that illustrate different academic support programs that are needed. In this evidence-gathering activity, if students are unable to select the most appropriate academic support program service, program faculty and administrators know what to improve in the program that taught students about those support systems and may begin to ascertain why these students are not persisting as expected. Selecting evaluation methods that help inform decisions is important, regardless of whether they are direct or indirect means of evaluation (Table 3.1). So, as mentioned, whether using direct or indirect outcome measures, selecting multiple methods of evaluation will reveal a better understanding of what was learned about the outcome and will inform more meaningful decisions (Banta & Palomba, 2014; Bresciani et al., 2004; Bresciani et al., 2009; Maki, 2004; Suskie, 2009). Selecting several methods to evaluate one outcome is good research. However, selecting multiple methods for each outcome is something that evolves over time, as faculty and staff become more skilled in embedding outcomes-based assessment into their day-to-day ways of doing. Selecting evaluation methods for each outcome often requires the assistance of several program faculty and/or administrators. Furthermore, it often takes time to identify an evaluation method that actually provides program faculty and staff with meaningful information they can use to inform decisions for continuous improvement. As such, in this section, describe the tools and methods (e.g., observation with a criteria checklist, survey with specific questions identified, essay with a rubric, role-playing with a criteria checklist) that will be used to evaluate the outcomes of participants in specific programs. In this section, identify the sample or population you will be evaluating, identify one or more evaluation methods or tools (survey, focus group, case study, observation, etc.) for each outcome, and include the criteria (questions in the survey, questions in the focus group protocol, characteristics in the case study, specific behaviors in the observation, etc.) that will be used with the tool to determine whether the outcome has been met. For example, if your tool to measure an outcome is a behavioral inventory, which subscale in the survey is measuring which outcome, or is the entire inventory doing the measuring? If your tool is a test, which questions assess a particular outcome? If your tool is an observation,
what are the specific behaviors that you are seeking to observe in order to identify whether an outcome has been met?

Your methods section also provides a rationale for the measurements used to assess each outcome (e.g., why certain outcomes were measured quantitatively, while others were measured qualitatively or using mixed methods), definitions of variables (other factors that may influence the interpretation of the data gathered such as level of learning already obtained, year in school, socioeconomic status, number of hours working outside of class, or any pertinent information that may influence variances in your data), a description of how the analyses will be conducted, how the sample was selected and why you selected the sample size you did, and other methodological considerations. Where appropriate, you may identify other institutional, system, or national data performance indicators or benchmark data (e.g., enrollment numbers, faculty-to-student ratios, retention rates, graduation rates, utilization statistics, satisfaction ratings, Community College Survey of Student Engagement (CCSSE)/NSSE scores, Sense of Belonging scores, overall well-being scores, and other performance or survey benchmark indicators) that will be used to help you interpret how and whether an outcome has been met, particularly in regard to your connecting your program’s outcomes with the institutional strategic initiatives and high achievement goals for all students, such as increasing persistence and graduation rates by specified identity group. Include the rationale for selecting each performance indicator. Also include the actual assessment, evaluation, and benchmark tools in the appendices and refer to each specifically in the narrative of your plan.

Many good practice institutions collect outcomes-based assessment planning data annually and very quickly in order to support faculty and staff in their inquiry processes and their ability to roll up those annual inquiries into their larger program review documents when it comes time for them to do so. Examples of annual data collection tools are found at www.nu.edu/student-achievement/outcome.html from National University; Appendix F from California State University, Sacramento; and Appendix G from James Madison University. Furthermore, James Madison University provides its faculty and staff with a complete online “how-to” guide for completing these annual assessments. You can access it at www.jmu.edu/assessment/_files/APT_Complete_How_to.pdf.
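To keep track of which tool, and which items or criteria within that tool, speak to which outcome, some programs find it helpful to hold the methods section in a structured form alongside the narrative. The Python sketch below is purely illustrative; the outcome labels, instruments, subscales, and rubric criteria are hypothetical placeholders rather than recommended measures.

```python
# Hypothetical documentation of evaluation methods: each outcome lists its
# direct and indirect measures, paired with the specific criteria for each tool.
evaluation_plan = {
    "Explain how to seek help from faculty outside of class": {
        "direct": [("Scavenger hunt case studies", "identifies two appropriate academic support offices")],
        "indirect": [("End-of-term survey", "items 4-7, help-seeking comfort subscale")],
    },
    "Self-report improved overall well-being": {
        "direct": [("Two-minute reflective journal", "well-being rubric, criterion 3")],
        "indirect": [("WEMWBS administration", "total scale score")],
    },
}

for outcome, methods in evaluation_plan.items():
    total_measures = len(methods["direct"]) + len(methods["indirect"])
    if total_measures < 2:
        print(f"Consider adding a second measure for: {outcome}")
    if not methods["direct"]:
        print(f"No direct evidence is planned for: {outcome}")
```

The simple checks at the end mirror the good practice described above: more than one measure per outcome, with at least one source of direct evidence.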
Level of Achievement Expected

This section indicates whether there is a particular expected level of achievement for each outcome, as well as for each higher level organizational goal
and performance indicator. In other words, is there an acceptable level of performance listed for each outcome and indicator, and who determines that expected level? For example, let’s say we expect to see 80% of the students achieve a 4.5 on a 5.0 rubric for a specific outcome. Now we need to indicate who decided that the expected level of achievement was acceptable. For example, who determined that 80% at a 4.5 is the expected level of achievement? How was that performance level determined and why? Or, for example, perhaps you expect to see a 2% increase in Filipino student persistence from fall to fall. And perhaps it was an institutional Student Success team that determined that percentage increase. See www.nu.edu/student-achievement/outcome.html for examples of expected levels of achievement from National University for each of their colleges.
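Once an expected level of achievement has been stated, checking it against the evidence is a small calculation. The Python sketch below restates the two examples in this section with invented numbers; the rubric scores and persistence rates are placeholders, and the 2% persistence increase is interpreted here as two percentage points.

```python
# Example 1: 80% of students are expected to score at least 4.5 on a 5.0 rubric.
rubric_scores = [4.8, 4.5, 4.2, 5.0, 4.6, 4.9, 3.9, 4.7]  # invented scores
share_at_target = sum(score >= 4.5 for score in rubric_scores) / len(rubric_scores)
print(f"{share_at_target:.0%} of students reached 4.5 or higher (expected: 80%)")
print("Rubric target met" if share_at_target >= 0.80 else "Rubric target not met")

# Example 2: a 2 percentage-point increase in fall-to-fall persistence is expected.
persistence_last_fall = 0.71   # invented baseline rate
persistence_this_fall = 0.74   # invented current rate
increase = persistence_this_fall - persistence_last_fall
print("Persistence target met" if increase >= 0.02 else "Persistence target not met")
```

The harder work, as this section notes, is documenting who set the 80% and 2% expectations and why, not computing whether they were reached.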
Limitations and Assumptions

If necessary, discuss the limitations of the evaluation methods or tools. Pay particular attention to how race, gender, ethnicity, disability, and other characteristic identities or the intersecting of those identities may have been categorized together and the assumptions that were made as a result. Limitations are reminders to you and your readers that you recognize the limitations inherent in your inquiry and have acknowledged them. Assumptions are statements that you have assumed are true, such as “all students participated in this program with the same level of motivation and engagement.”
Implementation of Assessment Process

This section describes your plan for the implementation of the assessment process. It may be helpful to understand that you aren’t required to evaluate everything that a program does every year or every program cycle, as noted in the examples in Appendices F and G. You may elect to evaluate a limited number of outcomes each year and create an assessment plan that spans the number of years you have before your next OBPR review. For example, if your institution engages in program review every five or seven years, then you can spread out the assessment of your learning and development outcomes over four or six years (gathering, analyzing, and interpreting data on two to three each year), using the final year to pull everything together, engage in dialogue, and request an external review if desired or required.

The implementation plan identifies the individuals responsible for conducting each step of the evaluation process. While many desire faculty involvement in the OBPR process, gaining faculty involvement is not always easy, particularly when it comes to dialogue across systems (programs/
departments/schools/colleges) and collaboration with student success professionals. Regardless of whether faculty are primarily involved with teaching, research, or service, their involvement in OBPR is one way to ensure that the process reflects the level of quality that disciplinary and professional research indicates the program should demonstrate. Without faculty involvement in the process, the relevance and value of decisions may be decreased. For example, when faculty have an opportunity to select what should be evaluated, discern how it should be evaluated, and deliberate over the findings, they claim ownership of the results generated from the study, so those results can be used to inform decisions for improvement. When faculty are not involved in this process, results are suspect simply because faculty feel detached from the process. When results are questioned for their integrity, it is less likely that they will inform sustainable decisions for improvement.

Gaining administrative support for OBPR can be easier as administrators are more accustomed to having tasks assigned to them and complying with administrative directives. However, gaining administrative buy-in to be collaborative in the design, gathering of data, analysis, and deliberation over what decisions assessment results can inform can be tricky if key administrative players are left out of the assessment process loop. Therefore, it is as important to involve administrators in OBPR as it is to have pervasive faculty involvement.

Good practice institutional leaders (at all levels of the organization) embody learning organization principles; they require that their colleagues engage fully in outcomes-based assessment. Each high-level administrator creates a community of expectations around engagement in OBPR. If needed, professional development support can be provided to teach organizational members how to engage in reviewing results in a meaningful manner. Without conveying explicit expectations that assessment will be completed and that personnel evaluations will take into account whether assessment has been done in a quality manner, a learning organization may be void of evidence that it is engaged in inquiry in a sustainable manner. As such, this section may include reference to the need to establish collaborative dialogue practices, clarify or revise roles and responsibilities, or reference needed professional development.

Some good practice institutions involve students, employers, alumni, and other pertinent community members and indicate that intended involvement in the time line. As such, develop a time line for implementation and include the points in time when each outcome will be evaluated. It is probably no surprise that articulating a time frame during which to operate will help ensure that the assessment work is completed when you need it (Bresciani et al., 2004; Bresciani et al., 2009). In addition, articulation of
a time frame helps keep the program review process manageable. In some cases, it may be important to align the time frames differently for academic programs and administrative programs. It is important to be aware of professional accreditation review time lines, work cycles, course/term cycles, and fiscal years for those academic and administrative programs whose reviews may be affected by this. For example, at North Carolina State University, academic support programs with high summer work volume, such as new student orientation, have a different annual assessment reporting cycle from those whose programs are better suited to the academic calendar, such as supplemental instruction. Furthermore, an institution needs to decide whether it will encourage and support faculty and staff to gather annual assessment data to roll into 5- or 7-year program review processes, thus decreasing the amount of last-minute preparation for the more comprehensive program review process and making the process much more meaningful and embedded into the day-to-day ways of doing. As mentioned, some programs within the same institution have different cycles based on their professional accreditation reviews and their personnel resources. It is important to be sensitive to professional accreditation review and standards review cycles. Even if your institution has a 10-year program review cycle, consider being flexible in aligning the program review process with the professional or specialized accreditation cycle of those programs that have external accreditors or standards reviews. Aligning these time frames will make the OBPR process more meaningful to the faculty and staff involved if they can see that their work serves internal institutional accountability purposes as well as professional accreditation or standards review. This type of internal flexibility often implies an increased institutional administrative support cost early on in the process, as it literally takes more time and effort to educate program faculty and administrators and to provide programs with this type of reporting flexibility. However, the increase in value to those engaged in the OBPR process may be well worth this early investment. Many good practice institutions publicly display their rotating program review time lines on their websites (see Appendix B).

If you develop a multiyear or multicycle assessment plan, indicate the point or points in time when a comprehensive review of assessment data will take place (e.g., when assessment measures taken at different points in time will be examined in a comprehensive manner). Identify the individuals who will be administering the assessment tools, collecting the data, analyzing the data, participating in interpreting the data, and making recommendations, along with a time line for implementing the decisions and recommendations. Consider how you should stagger when OBPRs are due so as to both leverage
and not overwhelm upper-level administration with too many reviews that require dialogue and informed decisions yet allow upper-level decision makers to leverage the budgeting cycles to provide needed resources for improvement.

Identify the subject/task being evaluated. While it may not be necessary to report the subject/task being evaluated at an institutional level, if it is a large institution, asking program faculty and staff to take the time to identify the sample population may be helpful to institutional coordination of the assessment processes. This is particularly important where students are involved. If this reporting practice occurs, the university assessment committee or assessment professionals can coordinate where certain students are sampled and avoid oversampling of one student population while another sample goes untapped. The Divisions of Student Affairs at Oregon State University and Texas A&M University discovered that identifying who was being evaluated within each assessment plan helped them to coordinate and collaborate in administering division-wide student satisfaction surveys as well as facility use practices. Identification of the sample population for the assessment process also helps in planning administration of the evaluation method. For example, the First Year College at North Carolina State University wanted to use a pre- and post-test methodology for evaluating transference of learning study skills from one course to another. The college identified all the students in the study skills program and preenrolled those students in a general education course so it could ensure the ability to test them at the end of the academic year. It took quite a bit of planning and foresight, but it allowed the college to use the methodology of its choosing because it identified the subjects being evaluated up front.

List resources needed or allocated to implement high-quality OBPR. Good practice institutions provide a brief description of the resources needed or used to engage in meaningful OBPR, such as survey methodology; in-house data analysis support for reporting results by intersecting of student, faculty, and/or staff characteristic identities; and stated needs for specific professional development such as interpreting the results as they relate to the intended outcome being measured. Knowing what specific support resources were used or are needed in OBPR can assist those who support program review in learning more about what is used and how well it contributes to quality review processes. Such knowledge may help in leveraging existing in-house resources and in identifying unmet needs. Furthermore, this information is useful in the meta-assessment process (discussed in chapter 4) and in better understanding what else may be needed to support ongoing learning organization efforts. Identifying these resources may also assist with systems thinking and ideas to further promote collaborative OBPR work.
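When many programs draw samples from the same student body, a central registry of who is being sampled, and how often, can help an assessment committee spot an over-sampled population before instruments go out. The Python sketch below is a hypothetical illustration; the assessment names, population labels, and the ceiling on contacts per term are placeholders an institution would set for itself.

```python
from collections import Counter

# Hypothetical registry of planned assessments and the populations they sample.
planned_assessments = [
    ("Division satisfaction survey", "first-year commuter students"),
    ("USEM journal prompt review", "first-year commuter students"),
    ("Library usage focus group", "transfer students"),
    ("Advising case-study observation", "first-year commuter students"),
]

MAX_CONTACTS_PER_TERM = 2  # placeholder ceiling set by an assessment committee

contacts = Counter(population for _, population in planned_assessments)
for population, count in contacts.items():
    if count > MAX_CONTACTS_PER_TERM:
        print(f"{population} is sampled {count} times this term; coordinate before adding more.")
```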
While some institutions require programs to submit a financial budget for engaging in their OBPR process, many good practice institutions do not ask for this because they expect that OBPR budgeting will be a part of the overall program budget or is already a part of the institutional office budget responsible for coordinating institutional OBPR. In addition, it is sometimes difficult to separate the budget for evaluating student learning from the instructional cost budget. For example, if one of the evaluation methods is the project you are requiring for the course or uses a capstone course or out-of-class required experience, should that item be included in the instructional cost budget or the assessment budget? Likewise, some administrative programs have difficulty separating the cost of evaluating their services from the cost of delivering their services, as the assessment practices are so deeply embedded in the day-to-day processes. Some institutions require a separate budget if the program calls for a comparative analysis that necessitates purchasing benchmarking tools that are discipline specific or if the program is using external evaluators when none are typically required or not furnished from the central institutional OBPR budget.

Describe how the assessment results will be communicated to stakeholders. Identifying the recipient of the information can often help to formulate the reporting format so that the recipient of the report fully understands what is being communicated. Identifying the target audience of the self-review report also means that certain types of language can be used to communicate more meaningfully to each audience type, particularly if the recipients of the information have varied expectations and respond differently to different reporting formats. This process also includes identifying who will be involved in making recommendations based on the results. This is an effective way to illustrate all those who have played a role in formulating the recommendations that arose from OBPR results. Such documentation illustrates accountability for implementation by listing all those involved in formulating recommendations and decisions. Furthermore, it serves as a helpful reference when clarification is needed for recommendations and interpretation of results. Responses to the following questions often prove useful to OBPR implementation. Who will see the results, when will they see the results, and who will be involved in making decisions about the program based on the assessment results? Who will be connecting the outcomes results to the program goals and other performance indicators?
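Before turning to results, it may help to see how the staggered, multiyear plan described earlier in this section can be laid out as a simple schedule. The Python sketch below is a hypothetical illustration; the number of outcomes, the number of assessment years, and the reservation of the final year for synthesis are placeholders to be replaced by your own cycle.

```python
# Hypothetical round-robin schedule: assess two to three outcomes per year
# across the first four years of a five-year cycle, reserving the final year
# for pulling results together, dialogue, and external review if required.
outcomes = [f"Outcome {n}" for n in range(1, 10)]  # nine placeholder outcomes
assessment_years = 4

schedule: dict[int, list[str]] = {year: [] for year in range(1, assessment_years + 1)}
for index, outcome in enumerate(outcomes):
    schedule[(index % assessment_years) + 1].append(outcome)

for year, assigned in schedule.items():
    print(f"Year {year}: {', '.join(assigned)}")
print("Year 5: synthesize findings, engage in dialogue, request external review if desired")
```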
Results

Summarize the results for each outcome as well as the process to verify/validate/authenticate the results. This process may include how results were
discussed with students, alumni, other program faculty and/or administrators, or external reviewers. Link the results generated from the outcomes-based assessment to any other program, college, or institutional performance indicators, which include high achievement indicators. Provide a narrative for the rationale of linking the results to those performance indicators. Be sure to update the limitations section of the plan under methodology. Update anything else in the plan that may have changed during actual assessment tool dissemination, data collection, analysis, and anything else that is relevant. Appendix H provides an example from the University of Hawai‘i-Mānoa of a report where results are reported and used to improve each outcome.
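Because HAAS indicators ask how well every student is achieving, results are most informative when they are disaggregated by characteristic identities and their intersections rather than reported only in the aggregate. The Python sketch below is a hypothetical illustration with invented records; in practice the identity groupings, the suppression threshold for small groups, and the indicator definitions would come from your institutional research office.

```python
from collections import defaultdict

# Invented records for illustration: (identity groups, whether the outcome was met).
records = [
    ({"first generation", "commuter"}, True),
    ({"first generation", "commuter"}, False),
    ({"commuter"}, True),
    ({"transfer"}, True),
    ({"first generation"}, False),
    ({"transfer", "commuter"}, True),
]

MIN_GROUP_SIZE = 2  # placeholder threshold for suppressing very small groups


def rate_by_group(student_records):
    """Share of students meeting the outcome, by identity and by intersection."""
    totals, met = defaultdict(int), defaultdict(int)
    for identities, met_outcome in student_records:
        keys = set(identities) | {" & ".join(sorted(identities))}
        for key in keys:
            totals[key] += 1
            met[key] += met_outcome
    return {k: met[k] / totals[k] for k in totals if totals[k] >= MIN_GROUP_SIZE}


for group, rate in sorted(rate_by_group(records).items()):
    print(f"{group}: {rate:.0%} met the outcome")
```

Gaps between groups surfaced this way are prompts for the reflection and dialogue described in the next section, not verdicts on the students themselves.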
Reflection, Interpretation, Decisions, and Recommendations

A reminder of why this section is of great importance is the notion that a learning organization is committed to demonstrating that faculty and staff actually use the information collected from the process to make decisions and recommendations. It is evidence that they are indeed learning how to improve what they intend to create as an organization. Some good practice institutions, such as Isothermal Community College and California State University, Monterey Bay, include those decisions in the program review portfolio. Others, such as IUPUI, include the decisions and recommendations made after all of the proper authorities and principal players have discussed the program review portfolio.

In this section, summarize the decisions and recommendations made for each outcome. Explain how you determined if the results were satisfactory. In other words, be sure to describe the process used to inform how the level of acceptable performance was determined and why it was determined as such. Address whether benchmark data, if applicable, informed your decision of whether your results were “good enough.” Basically, clarify your expectations for a certain expected level of learning and why that level is expected. Illustrate how decisions and recommendations may be contributing to the improvement of higher level goals and strategic initiatives, which include HAAS goals and indicators. Identify the groups of people who participated in the reflection, interpretation, and discussion of the evidence that led to the recommendations and decisions. Organizing this section by key questions that guided your OBPR process is crucial. The questions could be as simple as, “How well is this program delivering what it intended to create?” Or, “How well is this program meeting the expected level of achievement for each learning and development outcome for every student it serves?” Or, “How well are research initiatives
contributing to expected student learning and vice versa?” Questions such as “How well does this program prepare students for the predicted marketplace?” are more complicated and require additional data to examine and discern. (We’ll explain more about the importance of guiding questions a little later in this section.)

Summarize any suggestions that arose for improving the OBPR process. For example, note any changes that need to be made to refine program goals, outcomes, evaluation methods and assessment tools, criteria, or time lines. Finally, be sure to identify when each outcome will be evaluated again in the future (if the outcome is to be retained). Sinclair Community College reported that the description of the process they followed was helpful in understanding the department’s status and preparation for their next program review. The description also informs the institution about the strengths of the overarching process and provides information about what the department would do differently in the next review cycle. For example, Isothermal Community College reports that the act of reporting decisions and recommendations about the review process has led it to enact a “culture shift” by including assessment language and references to its campus-wide rubrics in course syllabi. The college also published all the criteria and rubrics in a handout for students and began to include an assessment and student portfolio session in its success and study skills (new student orientation) class. In addition, it added an assessment vocabulary list to the student handbook so students could better understand the process as well.

Identify those responsible for implementing the recommended changes and detail any additional resources needed to implement the required changes. If an organizational member at some higher level needs to provide new resources, indicate who that is and how the results and recommendations will be communicated to that individual. If making a recommendation for a change that resides outside of the program leadership, identify to whom the recommendation needs to be forwarded and the action required/requested from that person or organization.

Many good practice institutions such as Azusa Pacific University include additional detailed tables of data that they want programs to report such as multiple years of admissions applications, enrollment yield, persistence, and graduation rates, as well as faculty credentials. These data contextualize OBPR findings and allow for fully informed decisions and recommendations. Other universities such as New Jersey City University and University of the Pacific invite programs to report descriptive data but to do so within a framework of guiding questions such as those in the Western Association of Schools and Colleges (WASC) review. The intention of this is to shift from a descriptive, time-consuming process of reporting data that may already be
easily accessible to decision-makers toward immediately interpreting those data in light of the learning outcomes data. So, in this template, we do not list specific additional data points that leaders should examine, such as those listed in the WASC guide. Rather, we invite participants to examine these data in the context of how adequately they are meeting expected levels of high student learning and development achievement for all students. It is also important to interpret results in the context of the capacity of the organization to provide the expected high level of learning. This is where the “inputs” part of the program evaluation process comes into play in interpreting the “outputs,” rather than each being considered in isolation.

In regard to demonstrating transparency in reporting decisions, IUPUI, James Madison University, and Truman State University list the aggregated results of their OBPR process publicly on their websites as they relate to improvements that have been made (see Appendix B for access to those reports).

Whenever program faculty or staff have difficulty determining whether they have the authority to act on findings from the process that they feel can actually improve their program, making recommendations for improvement may be the wisest thing to do. Recall Box 3.1 and remember that decisions are made on various levels of the organization, so it is important to be mindful of your locus of control for decision-making. In other words, when faculty and administrators feel that something is outside their control, such as improvements needed in math courses required for their chemistry degree program, recommendations made to the math department to make refinements in the math courses may prove optimal for improving their own chemistry program learning outcomes. In making recommendations for improvement, faculty and staff demonstrate the strength and value of the process, while demonstrating accountability to do all they can do to improve their programs and other programs to which they contribute. This is also an example of the kinds of systems thinking that learning organizations often engage in, as making recommendations creates an opportunity to use evidence to inform conversations about which improvements can be made collaboratively and why they should be made. Given the complexity of how students learn and develop, this point cannot be emphasized enough.
Action Plan, Memorandum of Understanding, and/or Documentation of Higher Level Organizational Feedback

Provide a brief description of the organizationally approved action plan to deliver/implement the initiatives recommended to improve the program.
Some organizations refer to this as their memorandum of understanding (MOU). The action plan typically includes the specific plan to improve that which needs to be improved, identifies the personnel responsible for implementation, lists the resources needed, includes a time line for implementation, and often includes measures for determining whether the action plan has been successful. It may also simply refer back to a follow-up OBPR time line in order to evaluate its successful implementation. Good practice institutions report that prioritization of this institutionally approved action plan is important in making decisions for continuous improvement and allocating the resources to do so. In other words, leaders determine whether the organization has the capacity to carry out the action plan within the expected time frame. When good practice institutions provide this information, they offer evidence of a learning organization and they empower faculty and staff to carry out the recommendations or decisions made based on the OBPR process. In this manner, good practice institutions can identify the cost and priority of the decision’s implementation plans, particularly if there are several intended changes and not enough resources to make all of them happen right away. This prioritization can also pose a challenge to institutions that are fully engaged in OBPR, which is why the time line for OBPR is crucial.

Many faculty and staff may demonstrate resistance to engaging in OBPR for fear that inefficiencies in their programs will be identified and they will be penalized for these deficiencies. Faculty and staff may also become concerned about how resources will be allocated to address the needs they demonstrate through outcomes-based assessment; their program, for example, may not be seen as a priority. Not being seen as a resource-worthy priority may cause faculty and staff to resist continuing OBPR. These are valid fears. No institution claims it has enough resources to improve everything that needs to be improved, so hearing that your program is of lower priority is difficult. However, it is better for a learning organization member to negotiate resource reallocation with evidence of what you are doing well and information about what you need to do better so that an informed conversation can take place. In doing so, you encourage decision makers to articulate their values in light of the evidence-based benefits of your program. If documented well, your results cannot be ignored, even though they may remain a low institutional priority. If resources for improvement cannot be secured, you can manage expectations for outcomes by illustrating that this is all that can be done with available resources. You may choose not to strive for improvement in this area if the necessary resources are not available or you may become motivated to seek resources elsewhere. While this approach can often be misconstrued as “making excuses,” the point is to focus conversations within the organization on using resources wisely to improve the areas that
coincide with organizational priorities. This process can help an institution better inform its discussions of what excellence looks like and what can reasonably be attained given current structure and resource allocations. Including the immediate plan to implement decisions for continuous improvement not only demonstrates how the program can be improved and what resources are needed but also helps administrators better allocate and leverage resources to improve more programs over time. To emphasize this point further, when programs lay out this kind of detail, administrators must have the “values” conversation about what may be gained from investing in certain programs and whether the established quality in a certain program is good enough. Even though we present good practice institutions in this book, not all of them have reached a point where the evidence and planning for program improvement are so pervasive that OBPR can be pointed to for prioritizing allocation and reallocation of all institutional resources. In time, as more institutions engage in transparent OBPR and more programs gather increasing amounts of information about what works and what doesn’t, faculty and administrators can expect to have data-informed values discussions about what could be improved and what is “good enough” and for whom. While this may seem a courageous practice, it is better for a learning organization to have values conversations informed by data gathered and analyzed by faculty and administrators themselves, rather than by uninformed opinions, politically motivated opinions, or opinions formed from indicators that can be misinterpreted.

As such, this section documents how results are used and how the results are disseminated throughout the organization. The intent of this section is to clearly communicate conversations and collaborations that are being implemented in order to systematically and institutionally improve student learning and development. So, include the routing of the recommendations or decisions (e.g., who needs to see the recommendations, be involved in the decision-making) and note whether resources, policy changes, or other information is required outside of the scope of the program in order to improve the program learning outcomes that were assessed. For example, if you are the program coordinator and the decision you and your students recommend requires the approval of the department director, then you need to indicate that the approval of the decision must flow through the departmental director. If the recommendation you are making requires the approval of another departmental director or division head, then you indicate that the decision must flow through those constituents. This may be quickly communicated with a communication flow diagram. Also include responses from those decision makers if they have already responded to your request for a change.
Furthermore, note any changes needed to improve the OBPR process itself. For example, document the need to refine program goals, outcomes, evaluative criteria, planning processes, and budgeting processes as a result of your own analysis or as a result of institutional assessment committee or higher level organizational feedback, if feedback was obtained. An example of a brief action plan from the University of the Pacific can be found in Appendix I.
External Review Report

Good practice institutions utilize external reviews as another source of information, particularly when discerning capacity to improve and/or determining whether expected quality of the program and direction are appropriate to professional standards and future program viability. The external review provides feedback to organizational leaders about the strengths and areas for improvement of the program. It might also provide a comparative analysis/benchmarking data with similar organizations or programs. Good practice institutions often include reviewers who represent alumni, employers, and graduate school faculty (if applicable) and who are external to the institution. Good practice institutions provide programs with guidance as to how external reviewers are selected, a charge or list of expectations for the external reviewers that often include guiding questions, and a reporting template and time line for completion of the report. An example of such external review expectations can be found at www.chapman.edu/academics/learning-at-chapman/programreview/overview-for-external-reviewers.aspx from Chapman University as well as at irp.dpb.cornell.edu/academic-program-review/academic-program-review-process from Cornell University.
Program Viability

In addition, some good practice institutions such as National University do require specific budgeting information to be reported as well as comparisons back to market viability analytics. Drawing on budgeting information combined with the results of their program review process, leaders are then required to determine the future viability of their programs, or of subcomponents of their programs, based on the totality of their program review data. However, this is not completed until an external review has concluded. It is also done within the context of overall organizational purpose. In this manner, good practice institutions are linking resource reallocations with input data from market analysis, quality of student learning and development, institutional capacity data, organizational purpose, and their ability to cultivate human flourishing in the present and the future.
Include Any Additional Appendices Generated From Completing Your OBPR Report

Include any appendices that may help further illustrate the manner in which you evaluated your program. For example, you may want to include the curriculum alignment/outcome delivery map, syllabi, faculty curricula vitae, results of your market analysis or needs assessment, or the assessment tools and criteria you used to evaluate each outcome. You may also choose to include any data related to previous trends in enrollment, persistence, and graduation rates by student characteristic identities and their intersecting of identities. You might also include campus climate data and employee satisfaction data along with retention data of faculty and staff by the intersecting of their characteristic identities. There are a number of data points and reports that could be included, such as the external review report (feedback from stakeholders), results, and/or decisions. Include any correspondence from stakeholders (employers, alumni, etc.) or higher level decision makers used to interpret the results, make decisions, and inform the action plan. In addition, include any program budget plans and resource reallocation or allocation documents.

Include an appendix that illustrates suggestions for future annual assessment cycles or annual reporting. Develop a time line outlining a full cycle of assessment that will encompass all program outcomes leading up to your next OBPR. Good practice institutions do not expect programs to assess every outcome every year; rather, they expect institutions to assess a few outcomes each year and then roll the outcomes into a five-year or seven-year OBPR cycle that is ongoing. As such, be sure to indicate when each program outcome could be reassessed if it needs to be on a more frequent cycle and the point or points in time when information from the assessment cycles can be brought together to reflect a complete evaluation cycle for program review (e.g., will the program review cycle remain every five or seven years?). In addition, be sure to include budget and expenditure information for the program review process, including the external review process if this is not already completed at a central location. Finally, you may also choose to include information that illustrates how the summary of the learning from engaging in the OBPR process has been made public/transparent.

As you can see, several components of the OBPR process are required to demonstrate that your organization is a learning organization committed to high achievement of student learning and development for all students. It is important to utilize components of the OBPR process that will be the most meaningful to your faculty, staff, and administrators in prioritizing the decisions that need to be made. Providing reflective guiding questions such
as those already mentioned (see www.wscuc.org/content/program-review-resource-guide for ideas) to interpret gathered data so that they are used wisely is key to ensuring an ongoing inquiry process that will allow the institution to demonstrate it is indeed a learning organization deeply committed to its own human flourishing.

An educational learning organization is a systems-thinking organization (Senge et al., 2012). In essence, it requires in-class and out-of-class professionals to purposefully plan the delivery of the intended student learning and development as well as systematically evaluate the extent to which that learning has been met and to propose recommendations for improving delivery of that learning. Thus, OBPR requires faculty, administrators, and staff to engage in conversations around what they want students to be able to know and to do and then intentionally plan a program that delivers and evaluates that learning. The same is true for research and service—all must be purposefully articulated, planned, delivered, and evaluated. Thus, creating the intentional conversation for how this is all done so that every student achieves at a high level is one of the goals for OBPR. Like any inquiry process, OBPR will evolve as the institution evolves in regard to what it learns from engaging in this process, which is evidence of a learning organization indeed.

As a reminder, the purpose of this book is not to provide you with a clean set of to-dos so you can go back to your institution and “plug everyone and everything in”; the point of this book is to provide you with a framework, questions to consider, and examples so your institution can approach OBPR with shared intentional reflection. If everyone in an institution does not engage in OBPR with similar (not necessarily the same) intentions, then the practice of OBPR and the use of the results gained from the practice may inform poor decisions or none at all. In addition, it is important to emphasize that while institutions may share templates for documenting their engagement in the OBPR process, the adopting institution must be able to apply that template within its own institutional culture. Doing so allows the institution to engage in genuinely meaningful dialogue about what the institution is learning about student learning and development in the context of its own institutional values.
Key Learning Points

1. It is important for the members of your organization to determine the definition of and purpose for OBPR (e.g., an explanation of why you are engaging in outcomes-based assessment and what you hope to gain from it), as well as develop a common language for engaging in outcomes-based assessment (e.g., define what each term used in the process means).
2. It is also important that the terms benchmark indicators, performance indicators, dashboard indicators, scorecard indicators, and predictive analytics are defined. Dolence and Norris (1995) defined performance indicators as “measures that are monitored in order to determine the health, effectiveness, & efficiency of an institution” (p. 35).
3. OBPR can be used to interpret benchmark indicators, performance indicators, dashboard indicators, scorecard indicators, and predictive analytics, as well as inform their selection and how they are used.
4. There are several documentation components of the OBPR process.
5. It is important to document components of the OBPR process that will be the most meaningful to your faculty, staff, and administrators in prioritizing the decisions that need to be made.
6. Providing reflective guiding questions to interpret data gathered so that they are used wisely is key to ensuring an ongoing inquiry process that will allow the institution to demonstrate it is indeed a learning organization deeply committed to its own human flourishing and meaningful improvement and expansion of what it seeks to create.
7. Conveying the organizational learning that transpires from engaging in the OBPR process to the public in a transparent manner assures learning organization external accountability.
More Questions to Consider

There are several questions already posited in this chapter that we invite the reader to revisit. Questions found in Appendix J relate to specific components of the OBPR template described in this chapter. In addition, there are several questions that pertain to this chapter’s key learning points in chapter 5.
Notes

1. Direct evidence of student learning is observable and relevant evidence of what students have learned and how they have developed, as opposed to what they self-report they have learned and developed.
2. In a HAAS OBPR process, it is assumed that providing students with the same opportunities will not result in equal levels of HAAS. Many scholars refer to this as social justice, equity, or equity-mindedness, while others refer to it simply as competency-based assessment, where every individual student can achieve based on his or her individual strengths and needs. Cultivating human flourishing or HAAS means that it is important to understand which of our students need a different opportunity provided in order to succeed. Different doesn't necessarily mean more or less of a resource, counter to what many images portray. It could mean providing more tutoring, more engagement, more professional development for faculty and staff, or some other facet that research has shown to cultivate optimal student learning for all students; however, it could also simply mean providing something different than what is typically provided to all students, as is the case with the Male Success Alliance Program at California State University, Dominguez Hills (see Appendix B for a link to that program).
3. In the scientific method and most research methodologies, outliers would be treated differently. In OBPR, if we are committed to human flourishing for all and to HAAS, we take the outlying data points into account within our decision-making processes, for human beings are not data points. The data are simply directing our attention to human needs. Institutions will vary in their capacity to do that. (A brief sketch of flagging, rather than discarding, such data points follows these notes.)
4. Pipeline or pathways projects seek to identify specific ways that certain groups of students, including those at the intersections of identity characteristics (e.g., race, ethnicity, gender, disability), progress through college and complete a degree in a timely fashion. The pipelines or pathways involve a series of courses taken in sequence during specific terms and also include very specific academic and support service areas engaging with these students at specific times and in specific ways. Historically, many institutions have had first-year pathways for students. Now, institutions are designing two-year and four-year pathways to improve time-to-degree and specific learning and development outcomes for all students.
5. An elevator statement is a popular term for a succinct, two-minute description of what is important to the program and how the program makes that evident. An example is "We want to see all of our students experience overall well-being while they are at our college. We know how well this is happening when we examine our end-of-year WEMWBS [Warwick-Edinburgh Mental Well-being Scale] data by the intersection of our students' identities" (https://warwick.ac.uk/fac/sci/med/research/platform/wemwbs/).
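As a concrete companion to note 3, the following is a minimal sketch, and not the author's method, of what it can mean to take outlying data points into account rather than discard them: the outlier stays in the data set and is flagged so that someone follows up with the human being behind it. The student records, column names, and z-score cutoff are all hypothetical.

```python
# Hypothetical time-to-degree records; many research designs would exclude
# the outlier, but in OBPR it is flagged for a follow-up conversation.
import pandas as pd

records = pd.DataFrame({
    "student_id": ["s01", "s02", "s03", "s04", "s05", "s06", "s07", "s08", "s09"],
    "terms_to_degree": [8, 9, 8, 7, 8, 9, 8, 7, 16],
})

# Simple z-scores relative to the group mean.
mean = records["terms_to_degree"].mean()
std = records["terms_to_degree"].std()
records["z_score"] = (records["terms_to_degree"] - mean) / std

# Nothing is dropped; the flag simply directs attention to a student whose
# experience the institution needs to understand before making decisions.
records["follow_up"] = records["z_score"].abs() > 2

print(records)
```

The design choice here is only that a flag triggers a human inquiry about that student's experience; no data point is removed from the analysis.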
4

CRITERIA FOR GOOD PRACTICE OUTCOMES-BASED ASSESSMENT PROGRAM REVIEW

Try to turn as many soft, aspirational goals as possible into success criteria, and make them specific enough that you can actually tell whether or not you've met them. (Erin Kissane, 2010, p. 5)
This chapter examines criteria for evidence of engagement in effective OBPR that link the internal self-reflection process to external performance indicators. As we discussed earlier, the primary purpose of OBPR is for faculty, staff, students, and community partners to participate in a continuous, systematic, reflective inquiry process that allows them to transparently share evidence of their organizational learning. A learning organization, as defined by Senge (2006), is made evident when "people continually expand their capacity to create the results they truly desire" (p. 114). In a learning organization environment, there is evidence of systems thinking, challenging self-limiting mental models, and cultivating collaborative, visionary, and dynamic activity that strengthens the very human capital that drives every organization's long-term success and viability. This means that everyone within the organization (faculty, staff, administrators, and students) is accountable for fostering human flourishing as they achieve individual and shared outcomes. As such, organizational members are empowered with evidence-based decision-making authority and responsibility to use data in order to achieve consensus-driven outcomes. Senge (2006) emphasizes that this requires organizational leaders at all levels to work effectively to eliminate the barriers that prevent those outcomes from happening. Learning organizations are living systems that flow around obstacles and are generative in the way they exceed expected outcomes.
In this definition of a learning organization, the organization itself creates the motivation to engage in continuous improvement, not an external agency such as an accreditor or a quality assurance regulator. Also, the learning organization determines what it wants to create, not an external agency. As such, it is within this framework that we sought to examine the criteria of a good practice higher education organization.

The following criteria emerged from a document analysis of desired criteria, or the critique of such criteria, published in the literature found in Appendix A. The list was then sent to 33 assessment scholars, practitioners, and accreditation professionals for comments. Feedback from these individuals was incorporated into the criteria discussed within this chapter. In an effort to be brief, specific details of individual scholars' work are not unpacked within this list. However, what you do see (toward the end of this chapter) are some specific lettered lists underneath individual criteria that these assessment scholars, practitioners, and accreditation professionals wanted emphasized. In other words, they found implicit statements unhelpful and desired explicit expectations to be communicated.

Having done that, we began to question the usefulness of this list of criteria, primarily because in order to represent what was contained in the literature, as well as satisfy scholars, practitioners, and accreditation leaders who might use such a list, the list required more detail. In addition, once the list was completed, we could no longer verify that any of the good practice institutions in Appendix B met all of these criteria. That is not to imply that these good practice institutions do not meet all of these criteria; it is just to clarify that, through the transparent processes used for data collection, not every point in the lettered list under each criterion could be verified by viewing online documents. It also raised the question of what a process would look like that could produce evidence of all these criteria being met. This is where we return to Senge's (2006) definition of a learning organization and, thus, the relevance of the reference to his work in every chapter of this book.

It seems we are at a crossroads in this country between the expected outputs of higher education (e.g., jobs and other post-attendance life outcomes) and the actual learning and development outcomes that are currently designed into many degree programs and assessed while students are within the academy walls (whether virtual walls or brick and mortar). In other words, there are many who want to hold higher education learning organizations accountable for what graduates can do once they leave the college/university. Does that sound unreasonable? No. However, does it sound unreasonable to say that every student you admit into a four-year degree program has to earn a degree in four years and then get a job to pay back college loans within a certain amount of time? Or that every student who comes into your community college and wants to transfer to a four-year degree program had better be
able to do so successfully within two years of matriculation? Well, many have argued, it depends on whom your learning organization serves and how it serves them. A learning organization that chooses what it creates and implements OBPR to examine how successful it is may come into conflict with those who are selecting the performance indicators of quality and establishing achievement levels, time-to-degree expectations, and acceptable rates.

For example, when I told the first-generation undergraduate students whom I have the privilege of voluntarily mentoring that our institutional expectation is for them to graduate in four years, they responded with various versions of the following: "What if I don't learn everything I need to know in order to get the job I want?" I respond, "Well, we are here to do our very best to make sure you do, and you have to work hard too." And then they typically say, "This is so stressful! I'm still trying to figure out what I want to do. I had no idea there were so many options available to me. Who made up that rule anyway?"

So, how does a system that is committed to serving first-generation students agree to use a performance indicator of four-year graduation rates? Well, I understand the system had to, because the public that contributes funding to the system determined that a four-year graduation rate is a desirable indicator of quality learning and development for all students. However, could it be that time-to-degree has nothing to do with quality of learning and development? Is it more relevant to ask how well the data we collect across all the systems that make up our learning organization contribute to the discussions and resulting decisions that will improve our sustainable capacity to graduate all of our students in four years without harming human lives? Or is the pertinent question whether there is another performance indicator that better represents how well our graduates have learned and developed?

In order to design quality OBPR processes that inform performance indicators, the performance indicators must align with the work that is actually being done within the organization, and therefore those indicators must also align with the organizational purpose(s). Can quality learning and development take place within four years? Some research indicates that it can, and other research suggests that we often overestimate the learning and development that can occur within four years (Banta & Palomba, 2014; Blaich & Wise, 2011; Bresciani Ludvik, 2016). The important point to make here is that without OBPR, you don't know how to "fix" what your organization is providing to whom, because there will be no understanding of the experience of learning and development. This is also applicable if the purpose of your organization is to ensure conversion of research to publication, grant proposal to grant funding, or idea to economic stimulation.
Multiple organizational purposes may signal other conflicts in the current dialogue, such as conflict between what members of the learning organization want to create and what those holding the organization accountable see as desirable or necessary to assess. For example, as you look at some of the citations in Appendix A, there is little apparent external emphasis on an organization's ability to show how well it produces new knowledge, generates grant funding, develops winning student athletes, cultivates students' ability to collaboratively create start-up companies, showcases its creative and performing artists, or serves the surrounding community with needed professional development. Yet these are desired and prioritized areas of creation for those within the postsecondary educational system. Should the OBPR process ignore these areas? How can a learning organization demonstrate it is a learning organization if it doesn't examine all aspects of what it intends to create?

But how does a learning organization create the capacity to accomplish this? This book intends to provide some good practice ideas on how to do just that. Just reading the book obviously won't do it for you; it requires engaged leaders at all levels of the organization to link internally generated OBPR data with external accountability data. Without that linkage, an organization can't answer the question of whether there is dissonance between creating new knowledge, serving the local community, developing winning student athletes, showcasing artists who make us think in different ways, and ensuring your students graduate within four years or transfer in two while also demonstrating desired high-quality learning and development outcomes. Is there a conflict between open access and expectations for time-to-degree? Is there a conflict between predictive analytics that reveal the need to provide different degree pathways for different groups of students seeking different types of jobs, graduate degrees, or career trajectories and the legislated funding formulas that finance such pathways?

For some organizations, there is conflict because they are unable to prioritize what is needed to create pathways for all students to graduate in four years. For others, there is not. And where there is no reported conflict, there is capacity to meet varying needs at levels of HAAS. Those organizations that are doing this well are monitoring admissions, discussing degree and pathway viability, funding professional development, making clear statements about their priorities, reallocating resources, and putting new resources into the areas in which their learning organization data and collaborative dialogue reveal they are required. What we don't know is whether those who are doing this kind of work also have data on the well-being of all of their faculty, staff, and students. How well is the organization cultivating
human flourishing? Is that even necessary? Some might argue it is not. So, what are the criteria for good practice?

After receiving edits and comments about the initial list of criteria, we examined additional details of how these good practice institutions do their work. We found an innate curiosity to discover how well what was being created was working across systems; this is what Maki (2004) referred to. This innate curiosity was evident in the way good practice institutions examine the data their OBPR processes produce. For instance, it didn't matter whether they were examining outcomes-based assessment data, performance indicators, predictive analytic processes, space utilization reports, budgets and expenditures, research productivity, benchmarked data, institutionally collected survey data, or quasi-experimental design data; genuine questions emerged that kept this generation of data from becoming compliance reporting. Their curiosity and authentic desire to improve what they do drove their process. It was as if they conveyed a passion for making connections across systems, seeking to understand the experiences of their students and the professionals who serve them. The process to collect data and the manner in which it is documented are systematic, perhaps even rather dry, but the inquiry process is generative. The guiding questions in their templates were just that—guiding questions. Those discerning the data document the ways in which they make meaning from the data, and as such, they formulate new questions. This evidence is sometimes found in the narrative of the reports and in meeting minutes, but not in the indicators or reported levels of expected achievement.

We dug a little deeper into institutional websites and searched the Web for conference presentations, opinion articles, editorial review articles, and other information. We wanted to know what was out there for public consumption. What we discovered was fascinating! It caused us to rewrite criterion number 1 (p. 81, this volume) as well as refine others. And we are aware that we still may not have captured the dynamic care and concern for continuous improvement amidst awareness of the complexity that exists within these organizations and the desire to make the results of the work transparent.

The first draft of the good practice criteria listed here included specific ways in which good practice institutions demonstrated meeting each criterion. However, after receiving the comments—some of which were encouraging and others conveying deep dissatisfaction with the criteria—we chose to remove the examples and the rubrics, refine the criteria, and expand the concerns expressed with lettered lists under each criterion. We also chose to posit even more questions for organizational leaders at all levels to consider. What is clear is that as students change, as our world changes, as needs change, as the future seems even more uncertain, and as information is readily available on the Web—some well-informed and some ill-informed—there is a plethora of information and data to discern. Good practice institutions
make time for collaborative reflection on what their data mean and how each piece connects to another or doesn't. That requires time and skill. It is not something that can be rushed into in order to ready an organization for an accreditation review in two or so years. Good practice institutions are reflective learning organizations, not organizations simply reacting to the crisis du jour or the initiatives of the most recent leadership regimes. To a good practice institution, there is no disconnect between the data they collect organically through their OBPR process and the predictive analytics and performance indicators they use to determine where they need to take a closer look at their organization. Even though they may be in a state where the governor and legislators have one message about what higher education should be doing and the institutional leaders have another, a good practice learning organization is not intimidated by being held accountable for post-attendance life outcomes or demonstrating value-added, because they know their formative OBPR processes (whether formal or informal) are so robust that their outputs are sound and explainable.

One more word about the newest criteria: they are meant to address the leadership concerns expressed about OBPR processes being ILP. With concerns over key organizational leaders moving on and off campuses with shorter and shorter tenures, we looked to the work led by the Australian National Office of Teaching and Learning. Their FLIPcurric inquiry project sought to explore how best to prepare graduates to meet the social, cultural, economic, resource, and environmental challenges facing their nation now and in the future. As such, they interviewed over 3,700 learning and teaching leaders from across Australia, New Zealand, the Pacific, East Asia, Europe, the United Kingdom, and North America to identify some criteria for good practice. One important piece of information that emerged from their study, and that we neglected to discuss in our 2006 book, was the importance of leadership development. As such, a new criterion was created, particularly because good practice institutions continually spoke to the importance of sound leaders supporting their OBPR processes and using the data in ethical and responsible ways.
How to Make These Criteria Useful

We begin with the refined criteria and then unpack those criteria with lettered lists. Following the list of criteria, we posit some guiding questions. So, rather than creating a rubric or a checklist based on these criteria for your institution, we invite you to simply read them, noting what aligns with your organizational values and priorities and those of your stakeholders. Then use the questions provided as a guiding tool (not only in this chapter but also in the previous
chapters and those that follow) to craft your research questions, if you will. Use those crafted inquiry questions to test how well your organizational OBPR, predictive analytic, performance indicator reporting, annual reporting, and strategic planning processes are determining the extent to which you embody evidence of a learning organization that is continually expanding the capacity to create that which you want to create while also being able to demonstrate the results of that creation publicly and transparently to your stakeholders. This will help you keep your OBPR process from becoming a process for process’s sake and also ensure a sustainable and meaningful internal inquiry.
The Criteria

1. While the institution may serve many purposes, its main priority is to demonstrate that it is a learning organization committed to human flourishing for all and to continuously investigating how it can improve high-quality student learning and development for all of the students it serves.
   a. The institution embodies evidence that it is a learning organization, engaged in continuous collaborative and reflective inquiry and dialogue and finding ways to improve its inquiry and dialogue processes, as well as committed to the professional development of all of its people.
   b. There is notable differentiation in the processes used to collect and report for compliance purposes and in the use of that same data as well as OBPR data to inform decisions for improvement.
   c. Evidence of intentional cultivation of human flourishing for every being that is associated with the organization is apparent.
   d. Every meaningful piece of data is scrutinized and investigated for its system connections (e.g., connections across department/division lines).
   e. Authentic generative questions are posited to investigate where improvements can be made or to explore what other questions need to be asked.
   f. There is a passion to discover how to improve and a playful curiosity in discovering how to improve it.
   g. Evidence of a meta-assessment of the organization's own inquiry process or that it is continually researching how well it embodies a learning organization is present.
   h. The organization posits lines of institutional research inquiry and refines processes to better understand how well it is creating what was intended.
2. The institution considers the primary purpose of OBPR to be assuring the educational quality of its students through a meaningful assessment
of the many processes, policies, and practices that are required to assure student success.
   a. All organizational members are committed to discovering how each student best learns and develops.
   b. The institution defines student success in one or many ways and identifies how those definitions are measured.
   c. Program learning outcomes are articulated for each possible pathway to degree and/or certificate completion that the college/university provides.
   d. Students can relate to at least one institutional definition of student success and can also explain how the courses and the out-of-class experiences they choose contribute to that definition.
   e. Students can explain the set of program learning and development outcomes that have been articulated for their chosen degree/certificate pathway and can also explain how the courses and the out-of-class experiences they choose contribute to those learning and development outcomes.
   f. Students can describe their expected levels of achievement for each outcome and the importance of meeting those levels so their postgraduation plans can be realized.
   g. Functional areas within the institution that are not directly related to student learning and development include evidence of how the areas indirectly support student learning and development. If they are unable to do so, then it is expected that the institution will state the purpose of that functional area among its multiple institutional purposes and appropriate evidence of that functional area's organizational learning will be examined in a separate process.
3. The institution demonstrates that evidence of student learning and development obtained is appropriate for the certificate or degree awarded and also consistent with the level of achievement expectations of relevant stakeholders, which includes students, faculty, administrators, staff, parents, guardians, alumni, donors, partners, siblings, offspring, employers, graduate schools, and other community members not specified here.
   a. There are clear mechanisms for how students inform how OBPR outcomes are written, how evidence is gathered and interpreted, and the decisions of how evidence is used.
   b. There are clear mechanisms for how faculty, staff, and administrators inform how outcomes are written, how the evidence is gathered and interpreted, and the decisions of how evidence is used.
   c. There are clear mechanisms for how community partners (employers, graduate school admissions faculty, community service leaders, parents/guardians, etc.) inform how outcomes are written, how the evidence is gathered and interpreted, and how evidence is used.
4. The institution collaboratively creates learning and development outcomes that inform curriculum design (both in and out of the classroom), which are intended to result in transfer, a degree, or a specific learning and development experience.
   a. The institution ensures that data collection is embedded in day-to-day activities to cultivate habits of inquiry.
   b. The institution aggregates and disaggregates data by identity characteristics and the intersecting of those identities, as well as by programs, in order to fully understand which students are achieving which learning and development outcomes at the expected level and how, as well as which students are not and why, in order to ensure acting on results that contribute to HAAS (a minimal sketch of such disaggregation appears after this list of criteria).
5. The institution demonstrates responsiveness to OBPR findings by prioritizing subsequent action plans as well as corresponding resource allocation and reallocation to ensure data-informed needed improvements.
   a. There is evidence that OBPR results are used in decisions to improve programs as well as to assure that students are achieving what they came to postsecondary education to achieve (graduation with a degree, transfer to a four-year program, admission into a graduate program, job placement, creation of a new job/company/service industry/social movement, transformation of their lives, transformation of their communities, etc.).
   b. Improvement plans/action plans include what will be done, personnel responsible, resources allocated or reallocated, time lines, deliverables, and measurement of action plan success. This ensures the institution has determined its capacity to make this improvement and has also prioritized it.
6. The institution demonstrates evidence of changing or refining institutional administrative policies and practices to enhance student learning and development, as well as advocating for changes in system/district, state, or federal policies and practices that might hinder students' successful progress toward meaningful degree attainment, including transferring from one institution to another.
   a. There is an understanding of how and why policy decisions have been made among those who are impacted by these policies.
7. The institution is committed to developing its own leadership at all levels who can enact the change that the learning organization's inquiry process shows is required while cultivating human flourishing for all.
   a. Leaders at all levels compassionately listen to internal and external community members, examine data, making connections across systems, leverage strengths, and provide development where opportunities for growth have been identified.
   b. Leaders at all levels are committed to improving student learning and development and also know how to empower and support those who can make needed improvements.
   c. Leaders at all levels are emotionally intelligent and engage in accurate self-assessment resulting in an investment in their own professional development.
   d. Leaders at all levels will not tolerate illegal or unethical use of data and accept responsibility for any evidence-based decision that leads to harming human lives.
8. The institution demonstrates accountability transparently by the ability to use OBPR data to inform how it has reached a specific level of achievement for each performance indicator or how it determines to do so with its action plans and/or policy changes.
   a. There is evidence of a value-added experience for the students who attend and for the faculty and staff who work there.
   b. The institution posits lines of institutional inquiry to further research when data gathered do not clearly indicate what needs improvement.
   c. The institution refines inquiry processes to better understand how well they are creating what they intended or expected.
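As referenced in criterion 4b, the following minimal sketch shows the mechanics of aggregating and then disaggregating an outcome by program and by intersecting identity characteristics. It assumes hypothetical column names and toy data; it is not the author's tooling, and a real analysis would use the institution's own data definitions, privacy safeguards, and minimum cell sizes.

```python
# Hypothetical outcome records: one row per student, with a flag for whether
# the student met the expected achievement level on a program outcome.
import pandas as pd

outcomes = pd.DataFrame({
    "program": ["Biology", "Biology", "Biology", "Biology",
                "History", "History", "History", "History"],
    "gender": ["F", "M", "F", "M", "F", "M", "F", "M"],
    "first_generation": [True, True, False, False, True, False, True, False],
    "met_outcome": [1, 0, 1, 1, 1, 1, 0, 1],
})

# Aggregate view first: the overall rate hides who is and is not being served.
overall_rate = outcomes["met_outcome"].mean()

# Disaggregated view: program by intersecting identity characteristics.
by_intersection = (
    outcomes
    .groupby(["program", "gender", "first_generation"])["met_outcome"]
    .agg(students="count", met_rate="mean")
    .reset_index()
)

print(f"Overall rate: {overall_rate:.2f}")
print(by_intersection)
```

The point of criterion 4b is not the computation itself but the dialogue it enables: which students are reaching the expected level, which are not, and what the institution will do differently as a result.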
In essence, good practice institutions establish a process to self-evaluate how well the internal inquiry processes they have in place are moving them in positive and meaningful improvement directions while also informing transparent public accountability. So, while this book was written to establish inquiry processes that are programmatic in nature (which could have many definitions—see chapter 2), it is imperative that the institution as a whole self-reflect about how well the inquiry processes it has in place are contributing to its own ability to demonstrate that it is a learning organization.
The Guiding Questions

1. What is the mission/purpose(s) of our learning organization? Alternatively, has it been selected for us and we simply need to ensure that we
are hiring people committed to that already predetermined mission/purpose(s), as well as reviewing and retaining people in accordance with it?
2. What is it we specifically want to create? What are our values? What do our constituents value? Have the answers to these questions already been predetermined? If so, see question 1. Examples of what we want to create could include the following:
   a. Open access
   b. Two-year transfer promises for all of our students
   c. Four-year degree attainment processes for all of our students
   d. Assurance of workforce skills (but not within a specified time line) for all of our students
   e. Generation of new knowledge
   f. Service training programs for the local community within which we reside
   g. Winning student athletes
   h. Collaboratively designed start-up companies
   i. Six-year degree attainment with no student loan debt incurred
   j. Pathways of inquiry that create new jobs and/or new professions
   k. Human flourishing for all
   l. Employees with whom employers are highly satisfied
   m. Young scholars who collaboratively publish and secure grants with their faculty mentors or in collaboration with other young scholars
   n. Poet laureates
   o. Fulbright scholars
   p. Award-winning performing artists
   q. Community activists
   r. A top 10 college/university as defined by . . . [fill in the blanks]
   s. And so on (this list can go on and on . . . )
3. Given all that we want to create (or have already agreed to create), how do we prioritize those intended areas of creation according to our current capacity to create them?
   a. How well are we determining our institutional capacity to deliver what we want to create?
   b. What do we do when we currently don't have the capacity to create what we want to create, or what is expected of us, at a high-quality level? Does it become a strategic planning initiative (complete with a business plan or development plan)? Or do we let it go?
   c. Is there any facet of our organization that doesn't seem to be aligned with what we are about to create? If so, what do we want to do about that?
   d. How well does what we want to create align with our current funders' or resource providers' expectations of what the funding or resources are supposed to/expected to create?
4. Whom does what we create or want to create serve (remember to consider groups of identities within each category and the intersecting of those identities)?
   a. Students
   b. Faculty
   c. Staff/administrators
   d. Employers
   e. Prospective graduate school faculty
   f. Community members
   g. And so on
5. How well do our existing policies and practices support what we want to create? Or are creating?
   a. How do we change what needs to be changed?
   b. What other practices or policies might we need to craft in order to assure quality creations?
6. How does what we create or want to create best serve each group, subgroup, and intersecting identities of each group? To answer this, we need to examine the categories of how we deliver what we intend to create and get more specific about what each component does (e.g., outcome-delivery map):
   a. Courses
   b. Out-of-class experiences
   c. Professional development seminars or skills-building workshops
   d. Circle of trust dialogue processes
   e. Planning retreats
   f. Visioning teams
   g. Development teams
   h. Mentoring/coaching (groups and individual)
   i. Specific prescribed pathways of support
   j. Start-up costs for labs and businesses
   k. Formative assessment check-ins for all groups, subgroups, and intersecting of group identities, along with prescribing interventions, if necessary, based on formative assessment results
   l. Tutoring/supplemental instruction
7. How well do we serve each group, subgroup, and intersecting identities of each group? This response might be informed by data collected through:
   a. OBPR documentation and dialogue
   b. OBPR-generated improvement plans/action plans
   c. OBPR-generated viability reports
   d. Reflective student learning and development portfolios
   e. Reflective staff learning and development portfolios
   f. Reflective faculty learning and development portfolios
   g. Predictive analytic analysis
8. Are we collaboratively dialoguing about what all this evidence means in regard to what we want to create in comparison with what we are creating?
9. How do we know we are creating what we intend to create?
   a. What are our expectations, as well as the expectations of those whom we serve, for level of achievement (e.g., performance indicators)?
   b. What are our funders' expectations for level of achievement?
   c. Are those expectations in conflict? If so, how will we resolve that conflict in a manner aligned with what we value and the values of those whom we serve?
   d. How well are we collaboratively dialoguing around what the evidence is showing in relationship to various kinds of expectations?
   e. How well are we documenting needed decisions, verifying we have the capacity to implement these decisions, and prioritizing them?
10. How do we use documented information to improve our mission/purpose and the process to create?
   a. How well does the following provide this kind of evidence?
      i. External reviews
      ii. Improvement plans/action plans
      iii. Program viability decisions
      iv. Refinements in resource allocation and reallocation
      v. Policy and practice changes
   b. Are we establishing another reflective review in the future where we can determine how well our decisions influenced the needed improvement?
11. How well are we transparently/publicly communicating what we are learning as an organization?
While this list of good practice criteria and accompanying guiding questions (or this book, for that matter) will not prevent the continued perceived conflict over what postsecondary education is to provide, or the perceived conflicts among the expectations of all of its stakeholders, it is our hope that through a compassionate yet critical-thinking person's process and a commitment to transparent, compassionate dialogue, each postsecondary education institution can engage in some form of systematic inquiry in which the provision of evidence of a learning organization affirms the value of a college educational experience, even if it can't show an increase in its graduates' ability to be placed into existing types of jobs six months post-graduation.

In closing, I humbly share that in my 30 years within higher education, I have enjoyed the great privilege of working with many people who care deeply that their students, or students they meet only briefly, get a high-quality education and are successful in what they do once they graduate. I have also had the privilege of conversing with many policymakers who do know how complex learning and development is and how difficult it is to tend to the individual learning of students in class sizes of 30, let alone 500 or more. I have also witnessed how hard high-level administrators work to listen to all the different needs of their campus community, prioritize those needs, and make decisions that they know will anger some of their community members while pleasing others. In this learning organization cultivation process, it is not helpful to make broad general statements about a group of people; such statements are simply not accurate for every member you might place in that group. It is challenging to navigate polarized conversations in order to move an organization forward in its learning exploration so we can improve what we are trying to create and provide transparent evidence of what we have created and how we will continue to improve and expand that creation.

As such, we have one more set of questions for you, the reader, to use in your own self-assessment. The intention is to use this set of questions to check yourself (a) when you notice you are the one closing down needed learning organization dialogue or OBPR inquiry, or when you think someone else is; (b) when you are making sweeping assumptions about a group of people; or (c) when you find yourself leaving a meeting or any encounter with another human being saying, "I just don't care anymore."

1. How well am I taking care of my own well-being so I can listen as intently as possible to differing perspectives in order to discover what I think I already know?
2. How willing am I to learn about what others do within our organization so I can help make connections across systems to ensure our organizational ability to improve what we want to create and meet expected performance indicator levels?
3. How well am I holding questions for which our organization seems to have no answers, trusting in the possibility that they can be answered?
4. How well am I choosing to continually believe that there is a way to resolve this conflict, address this resource shortage, or explain this decision in a manner that can be understood by all parties? In other words, what am I not seeing?
5. How well am I remembering that all these people with whom I work—who lead, who serve, and who enter these in-class and out-of-class experiences (especially the ones with whom I am in conflict)—are humans who want to be happy, healthy, and successful, just as I do?
6. What do I need to do in order to take care of myself, so I can be kind, wise, and compassionately engaged in critical inquiry, contributing solutions to this learning organization that seeks to cultivate human flourishing for all?
Key Learning Points

1. While there may be a conflict between what various stakeholders expect higher education to create and how it should demonstrate accountability, that conflict will not be resolved without evidence illustrating whether and how what is expected is being created within the organization, and why.
2. There are many good practice criteria that your institutional leaders can consider adapting for use as you evaluate how well you are embodying the principles of a learning organization.
3. A learning organization must produce evidence of what it creates in a manner that can improve creation. To learn how to do that better, a learning organization must evaluate its own internal inquiry processes (e.g., meta-assessment).
4. Positing guiding questions or research questions may help your organization evaluate its own internal inquiry processes (e.g., meta-assessment).
5. Because the industry of higher education is composed of human beings cultivating other human beings' learning and development, it is important for leaders at all organizational levels to practice compassionate and accurate self-assessment.
More Questions to Consider

1. Which criteria might be useful for your learning organization to evaluate its internal inquiry process?
2. Which criteria might be useful for your learning organization to evaluate its external accountability reporting process?
3. Which guiding questions or research questions might your organization posit so that it can improve its internal inquiry process and external accountability process?
4. How well are you evaluating the way you contribute to the learning organization and compassionately improving the way you show up?
5

HOW AND WHEN

Key Questions to Consider When Implementing Good Practice Outcomes-Based Assessment Program Review

Don't Make Assumptions. Find the courage to ask questions and to express what you really want. Communicate with others as clearly as you can to avoid misunderstandings, sadness and drama. With just this one agreement, you can completely transform your life. (Don Miguel Ruiz, 1997, p. 32)
Having examined good practice templates and criteria for evaluation, you may now wonder how to start building your own good practice OBPR. Like the previous chapters, this chapter is not prescriptive; it is intended to provide some basic steps and references for implementing OBPR at any institution. Additionally, and as in the previous chapters, we invite you to pay close attention to your own institutional culture when answering these questions. Above all, be sure you are clear about what you want to create within your organization from having implemented the OBPR process (see chapters 2 and 3). The suggestions and questions posed in this chapter are drawn from a compilation of good practices that institutions use, practices that are affirmed and critiqued across the variety of literature sources found in Appendix A. The questions posed in this chapter, as well as in the previous chapter and the one to follow, are for you to answer with your own program review faculty, administrators, and staff leadership. As you address certain questions, you will discover typical barriers to implementing OBPR. Ideas and suggestions for addressing those barriers are presented, along with good practice examples for overcoming the barriers, in chapter 6. In addition, you can visit flipcurric.edu.au/make-it-happen to contribute your good practice for effectively overcoming barriers.
Recall that one of the most important reasons to engage in OBPR is to demonstrate that you are a learning organization. Accreditation and quality assurance practices will likely remain under scrutiny until an organization can demonstrate that its engagement in an inquiry-driven, collaborative, continuous improvement process is not just to appease accreditors or other quality assurance requirements. Setting up processes that connect diverse operational systems within an organization in a reflective inquiry process takes time. There is simply no way around it.

So, the process begins by determining what you want your organization to create. See the questions in chapter 2 about what you intend to create as an organization. Once you have determined that, you can begin to design the process that determines how well you have created it. Designing that process begins with knowing who is responsible for ensuring a quality process.
Creating a Primary Point of Contact

Good practice institutions have a primary point of contact or a primary office charged with managing the implementation of their OBPR process and name a coordinator of the process. Make sure your institution has identified a key contact person and make sure that the person has been given allocated time to do this coordination work. That doesn't mean that the identified person can't delegate responsibilities to committees or other faculty or professional staff members. It means that if no one is in charge of shepherding the process, it simply won't happen. For example, look across your institution and ask yourself whether you can identify who is in charge of recruiting students, registering them for classes, and making sure they have financial aid awarded; there is someone directing these processes. Depending on the size of the institution, one person may be doing all three of these jobs or one person in each area may be leading and coordinating hundreds of professionals. The number of people involved in the process is not the question in this moment, although it may be in the future. The point now is that if you want your organization to show evidence that it is a learning organization, whose job is it to coordinate the cultivation of that reflective inquiry process?

As you consider who that person is or should be, consider these questions. Is that person(s)

• committed to fostering the learning organization process?
• committed to inquiry-driven continuous improvement while assuring human flourishing for all?
• knowledgeable about a variety of quality assurance processes, including institutional research, educational research, predictive analytics, and OBPR?
• able to identify the connections of OBPR to institutional performance metrics and predictive analytics?
• able to invite in and engage in equity dialogue?
• flexible enough to accommodate the variety of quality assurance needs members of your organization might have yet rigorous enough to ensure a high level of consistent quality in all reflective inquiry-driven review processes?
• committed to promoting collaborative work?
• patient enough to listen to "naysayers" and understand their perspectives even if agreement with those perspectives is not possible?
• capable of presenting ideas and drafts to committees so they can refine them to ensure they represent cross-disciplinary needs?
• organized?
• able to ask insightful questions that keep inquiry processes focused on the questions the organization cares about?
• able to delegate in case the institution determines that sub-OBPR processes should be divided across divisions (undergraduate, graduate, business, and finance) and later rolled into larger institutional decision-making?
• skilled in inter- and intrapersonal communication?
• able to bring multiple data providers together to determine, along with committee assistance, how to best connect various types of data in the OBPR process so that the best institutional decisions can be made?
• able to demonstrate cognitive flexibility?
• emotionally intelligent?
• kind?
• committed to his or her own accurate self-assessment and professional development?

If you are wondering whether such a person exists, good practice institutions can assure you that they do, for these are the people who are ensuring that their institutional OBPR processes are occurring. What is important to note is that OBPR is not at all about one person or one office doing all the work; it is simply about someone or some office having the authority to deliver the institutional leadership decision to say "Yes, this is the approach we will use." Many good practice institutions centralize this work in their provost/vice president for academic affairs/vice chancellor for instructional services office, and then the work is delegated out from there. Others charge their institutional research, planning, and assessment offices, yet still require that the existing governance structures be the ones who approve the processes put in place. Again, there is no right or wrong approach; what is critical is that
everyone within the institution knows who is in charge of cultivating the learning organization—the OBPR process.
Create a Well-Represented, Well-Respected Committee

After identifying the person or office in charge of implementing the OBPR process so that it ensures evidence of a learning organization, bring together a group of committed and respected faculty, student success professionals, and business and finance professionals to plan the process. Bringing together well-represented and well-respected faculty and other professionals is instrumental to building a well-informed and meaningful process. Many good practice institutions have invited student leaders and alumni into the design process as well. In forming your committee, you may want to consider the following:

• Who embodies many of the characteristics that your organization requires from the person who leads the OBPR process for the institution?
• Who cares enough about quality in research, teaching, service, outreach, and student learning and development to commit his or her time to assuring your OBPR process is designed to reflect on the connections across all of these systems?
• Who is well respected in his or her disciplines/areas of expertise?
• Do you have appropriate representation of all of the areas that should be represented?
• If you have unions, are they represented?
• Do you have advocates for improvement involved as well as those who have negative feelings about the process or don't believe it is necessary? Be sure to invite the naysayers and even the so-called curmudgeons into the design of your process or its redesign or current efficacy evaluation. There is valuable insight to be gained from their perspectives.
• Is it possible for you to involve student leaders?

As you form your group, keep membership open and meeting notes public, yet carefully edited, as Vincennes University does (see https://my.vinu.edu/web/institutional-effectiveness/assessment-committee-meeting-minutes/-/document_library_display/TlY6VJXnigay/view/27189326). Be sure to allow anyone to attend who wants to attend. If you have been meeting for a while and a new group of faculty or administrators wants to join, allow them to join, making special time to meet with new members
before their first meeting to update them on what has happened thus far. The time involved in updating new members allows them to contribute immediately; it also keeps the members who have been attending meetings for a while from feeling as if they are rehashing the same topics. To this end, it is wise to design a committee membership process in which members serve systematically rotating terms of service, with elected officials coming from union groups as well as faculty, staff, and student senates or other key leadership groups. This keeps all shared governing bodies informed of the trajectory and progress of OBPR. Finally, given the importance of tying OBPR to performance indicator conversations and predictive analytics, it is imperative to have members from those initiatives serving on the committee as well.
Meeting Times, Agendas, and Minutes

Setting meeting schedules (times, locations, and frequency) often involves intricate aspects of institutional culture. Pay attention to your institutional culture and do not move faster or slower than the norm. Doing so could cause many to question the motives behind and sustainability of what you are trying to accomplish. For example, some institutions think about preparing for their accreditation review late in the game, and then they must hurry to get organized. Such hurrying causes many to think that the only reason they are organizing is for the accreditation review, not because the institution is trying to prepare itself for meaningful, long-term self-reflection and improved decision-making via evidence that they are indeed a learning organization. As we keep mentioning, becoming a learning organization requires an investment of time; there is simply no way around that. If you are hurrying to meet some accreditation or state reporting requirement, then just tell people that is what you are doing and have it reflected in meeting agendas and minutes.

Why? A learning organization is interested in providing evidence of adaptive learning and generative learning (Senge, 2006). As such, it is about engaging in learning that looks at the system as a whole and is continually improving its capacity to create what it intends to create (e.g., outcomes). Learning organizations don't pretend they care about something they don't care about. So, if you need accurate data to demonstrate compliance with some state and/or federal mandate or requirement, then just get those people together who can produce the required quality data and/or report. Not all compliance reports need the same type of reflective and collaborative processes that OBPR requires. Let's respect that institutional leaders at all levels are often
burdened by compliance reporting.1 Good practice institutions differentiate compliance reporting from their collaborative reflective OBPR processes. This statement does not indicate that good practice institutions avoid reflecting on how compliance reports can be used to improve what the institution is trying to create. Rather, good practice institutions don't confuse compliance reporting with collaborative reflective inquiry into how to refine systems in order to assure the intended outcomes they care about.

Furthermore, avoid calling a meeting without first establishing an agenda. Stay organized, and if you don't have any business for the committee, task force, work group, or advisory committee (or whatever you choose to call it), cancel the meeting. Questions to ask yourself about meetings include the following:

• How regularly do we want to meet?
• How long do we want to meet?
• When are the best days and times to meet? Should we vary meeting times and days to give those whose teaching or service schedules conflict an opportunity to attend?
• Where should we meet?
• Are we continually choosing a meeting time or location that appears to purposefully exclude one group or another from joining the conversations?
• Do we need to set aside longer retreat or planning times to cultivate professional development? If so, how long should they be and how often? When and where would be the most relaxing environment for the committee to plan?
• How long should committee service terms be so that we can rotate off knowledgeable members and rotate on new members?
• When is the best time to conduct orientation for new members, including those rotating onto the committee, once the committee is established?
• Does our agenda include information items and action items? Do we have specified time allotments for each item?
• Where do we publicly post our meeting times, agendas, and meeting minutes?

Keep in mind that when you are starting out, you may not know the answers to these questions, so be sure to allow yourself some flexibility in scheduling. Also, keep true to the practice of a learning organization and allow yourself to improve and refine your meeting schedule as your committee work evolves or as subcommittees become more active.
Organize the Committee's Role and Responsibilities

Similar to the previous points, it may be difficult at first to establish and maintain a committee's role, but be forthright with faculty and administrators and allow the committee to evolve as its thinking and work evolves. It is important to provide faculty and student success professionals, staff, and students with some sort of direction about what the committee needs to accomplish before you invite them to join. Whether a formal charge is necessary, such as what North Carolina State University and Texas A&M University provide, again depends on institutional culture. Other institutions, such as Sinclair Community College, do not provide a formal charge; however, they do provide committee members with roles and functions and very specific assignments. Stay focused on what the committee is supposed to create or support.

Questions to ask when creating the role of the committee/task force/working group include the following:

• What is the primary purpose of this committee?
• What do its existence and work intend to create within this institution?
• Will the members define what OBPR means for the institution to be used by all members of the organizational community?
• Will this committee draft the purpose of the OBPR process and what it hopes the process will create (conceptual framework)?
• Will this committee draft the common operational language for OBPR?
• Will committee members draft the components of the reporting template for OBPR (see chapter 3)?
• Will members determine guiding questions to be used in the OBPR process (see www.wscuc.org/content/program-review-resource-guide)?
• What will the committee's primary role be? Will it have any secondary role(s)?
   o Advisory to those who will use OBPR data to make decisions?
   o Training/educating others by offering professional development about OBPR?
   o Providing one-on-one coaching support to others?
   o Evaluating the quality of the OBPR process?
   o Ensuring the connection of OBPR to institutional performance metrics?
   o Informing the predictive analytics process with what is learned from engaging in OBPR?
   o Evaluating the quality of the content OBPR creates?
• Given the primary role, do you have the most appropriate representation on the committee? • Is the committee voluntary or do members have assigned time to do this work? • Will this committee be the support system for the entire OBPR process, or will professional staff be involved? • Is the committee permanently identified and appropriately recognized by the university? Or do members have specific terms they will serve? If so, is there a rotation schedule so as to ensure some institutional memory is always present? In other words, how and how often will new members be rotated onto the committee? • How will the work the committee generates be vetted by others in an appropriate and timely manner? • How will the committee members educate themselves or be provided with professional development about OBPR, performance indicators, and predictive analytics? Is there orientation for new members? • How will members stay motivated to continually improve the OBPR process? • Will the committee have permanent, or ongoing, faculty and/or student success professionals as consistent members? • Who will provide staff support to the committee so as to ensure agenda setting, minute dissemination, coordination of their activities, and so forth? • What responsibilities will be delegated to college/division or departmental committees, and what responsibilities will remain with a university-level committee? For example, at the U.S. Naval Academy, the agencies responsible for assessment include the departments and centers responsible for student learning, the academic and professional development divisions responsible for the core program, and the Faculty Senate Assessment Committee (formerly the Assessment Task Force) that is charged with oversight and coordination of the assessment process. Once the committee is established, you may want to consider keeping all of the committee’s work on a public website or, at least, a site that is public to the university community. Again, this openness allows others to join in as the committee addresses issues that interest them, and it encourages dialogue around institutional excellence and accountability, furthering ownership of those conversations and resulting decisions. In essence, public sharing of what the committee is discussing enables the learning organization to emerge. It is
good to remember that committee members will be less frustrated if they and others clearly understand their roles. You also may need more than one committee for the process. For example, you may want an overarching university committee to advise on the overall process of how OBPR should work, but delegate review of the quality of content that the OBPR process produces to another committee or to each college/division. Absolutely avoid charging the committee that is supporting the OBPR process or reviewing the quality of the OBPR process with also determining whether the content that is produced by OBPR is "good enough." In essence, determining whether the evidence gathered through OBPR demonstrates that program outcomes have or have not been met is a different dialogue from evaluating the quality of the OBPR process itself. Similarly important, if you are reading this book because you are charged with leading the OBPR effort on your campus or within your college/division or department, you will be less frustrated if you articulate your own role or have it assigned by the proper authority figure. Questions for you and others to answer may include the following:
• Are you a facilitator of the process, or are you directing the process?
• Will you be an educator of the process, or will you do the work for the faculty and administrators? (Good practice institutions do not recommend the latter at all.)
• Will you investigate good practices and propose them to the committee? Who will determine the best way to adapt and apply good practices? Will the committee investigate good practices?
• Will you collect all of the program review documents in one central location, or will you know whom to contact within each program when it comes time to compile such documents to consider how systems are interacting?
• Will you write reports, or will you edit what the committee writes?
• Will you be the one to ensure OBPR results are aligned with performance indicators, or will others do that?
• Will you be the one who uses OBPR evidence to inform the predictive analytic process, or will others do that?
• Will you design the OBPR implementation time line (e.g., where you expect organizational learning to be by when and how), or will you make recommendations for the design?
• Will you draft the meta-assessment (assessment plan) for OBPR, or will you edit what the committee drafts?
• Will you conduct or coordinate the meta-assessment?
• Will you arrange for professional development of committee members, or will you coordinate their professional development through their representing departments/programs? • Will you provide professional development to college and division assessment committees, or will you provide the professional development resources? • Will you assist in the creation of college/division committees, or will you provide guidance for their creation? • How often will you ask college/division committees to monitor their own work and report on it so that you can ensure a quality and ethical reflective review process across the campus? • Will you coordinate the other offices that have data pertinent to the OBPR process (e.g., performance indicator reporting, strategic initiative indicator reporting, predictive analytics, competency-based, budgeting, etc.), or will you be making recommendations to another committee who does that work? In other words, if your job is to coordinate the OBPR process, who coordinates the linking of what the OBPR produces with other offices that produce decision-making data? Is that the job of another person or another committee? Answering these questions and others as transparently as possible will help inform everyone about what they should expect and from whom. The responses should be useful in organizing the process in the beginning. As with most processes, what you design in the beginning will evolve and take shape as your learning organization process evolves, so things will change. Flexibility is key to ensuring the autonomy needed for individual disciplines to explore and improve their own processes. Flexibility is also a characteristic of a learning organization (Senge, 2006). Provide that flexibility within a framework of reflective structure and be sure to update communication flow and documents to illustrate those changes. Finally, make sure that everyone’s role in the OBPR process is clear. Delineate who is involved in reviewing quality of the self-reflection process and who is responsible for reviewing quality of content produced by OBPR. Regarding the latter, remember that the most effective improvements will be made if the review process of content is done first at a level close to where content is being delivered. Above all, ensure that OBPR does not take on a life of its own and become about the process—it is simply one process that a learning organization can use to discover what it is doing well and how it can improve what it is not doing well.
Articulate Expectations for OBPR
At an appropriate time, key leadership at all levels will need to inform the university community of the expectations for engaging in OBPR. An institution cannot ignore the skill sets needed to navigate conversations with those demanding accountability, nor how what is demanded may play into faculty and staff's interpretations of what "really" is being requested and how it will be used. Therefore, it is imperative that at some point the institution's leadership articulates exactly what is expected from the OBPR process. In chapters 1 and 2, we share some language that might assist your organizational leadership in articulating why it is engaged in the OBPR process. In chapter 3, we provide you with documentation components. In chapter 4, we discuss what the OBPR process could create. What follows are guidelines for your organizational leadership to consider as you create your own process, becoming specific about what you hope to create by systematically, reflectively, and continually engaging in that process.
Determining the appropriate time to announce the expectations for OBPR may be challenging. It depends on the reflective inquiry support process you have in place or the one you are planning to put in place. It is never empowering to be held responsible for getting something accomplished and subsequently have little clue how to accomplish it. Thus, before the key leaders at all levels announce expectations, it would be helpful to have professional development support systems in place and examples of what you want to see occur. Doing so reduces panic among the faculty and administrators whose responsibility it will be to implement such a practice in their already overloaded day-to-day schedules. In some instances, your organization's response to some of these questions may vary by department or even program, depending on how much flexibility each department or program has.
• What does it mean to demonstrate evidence that we are a learning organization?
• What processes allow us to learn as an organization in a manner where we can improve what we do while also transparently demonstrating how well we are doing what we do?
• Why are we engaging in OBPR? What is the purpose? What is this process supposed to create?
• What value does it have for me as an individual faculty member/administrator/staff member/student?
• How will it make me a better faculty member/administrator/staff member/student? • How will it improve my program? • How will it affect my students or other constituents whom I serve? • Is this institution required to do it? Why or why not? • What is required for me to document to demonstrate my program’s accomplishments? • How often is a plan and/or report needed? • What is the review process for my plan and/or report? • Where do I learn how to do this? • Where do I find the time to do this? • Where do I find resources in addition to time? • What assistance will I receive in completing the plan and/or report? • How will the results of my OBPR process be used? • Will one committee evaluate my OBPR plan and/or report for quality of the collaborative reflective inquiry process? • Will that same or another committee evaluate my OBPR plan and/or report for quality of content? • How do I collaborate with other departments, programs, and or service areas whose operations influence the quality of our outcomes? • Who helps me align my OBPR results to institutional performance indicators? • How do I use OBPR results to advance equity? • How will my OBPR results inform our predictive analytics process? • Are external reviewers needed? Who funds that process? • Who will see my OBPR plan and/or report? • How public will my plan and/or report be? • How public will the review of my plan and/or report be? • Will the plan and/or report results be used to evaluate me personally? • Will program/department/college/division/institutional allocations or reallocations be based on my plan and/or report findings? • Will I receive an allocation to improve my program if the data demonstrate that funding is necessary? If so, how and when will that occur? • What happens if my OBPR results reveal that someone else’s program needs to be improved first in order for my program to be improved (e.g., poor performance in Math 101 outcomes may be influencing my students’ Statistics outcomes)? • Is there a university/division time line for implementation of this OBPR process? • Will someone do the ‘‘regular’’ data collection (e.g., enrollment figures, retention and graduation rates, budget figures, student satisfaction
surveys, sense of belonging inventories, etc.)? Who? And if so, how do I access that data?
• Will someone coordinate the OBPR process? Who?
• Will the history of OBPR plans and reports be centrally located or am I expected to find a system to store them?
• Will someone be in charge of documentation? Who?
• Are we using a shared technological package to document plans and reports? If so, what? Who is responsible for supporting and maintaining that system and how will I learn how to use it?
• How will OBPR results inform outreach, recruitment, enrollment management, performance indicators, HAAS indicators, and other types of evaluation?
• Can key assessment coordinators within my program/department/division/college get assigned time or a stipend to establish and/or maintain the process?
• How will the institution manage the sometimes competing information requests from external and internal constituents (e.g., compliance reports versus collaborative reflective OBPR)? How does OBPR interact with compliance reporting?
• How will engaging in OBPR be considered in my annual review process or our review, promotion, and tenure process?
• How will all of the university's planning and evaluation initiatives link to OBPR?
• Are there institutional learning outcomes that all programs need to assess or align their outcomes to? What if programs and courses cannot link their outcomes to specific institutional goals?
• What is the budget for OBPR? Is it centrally located or located within each college/division, department, or program?
• How will my professional accreditation or professional standards requirements fit into OBPR?
• How will our reflective student portfolio processes or student engagement résumés fit in with OBPR?
• How does competency-based assessment align with OBPR?
• When will my OBPR results be considered with other OBPR results so that we can engage in learning organization systems thinking? For example, how will the graduate school's and/or research division's OBPR process be considered in relationship with our OBPR student learning outcome results?
• How do we incorporate all the outcomes-based assessment data emerging from academic and student support services offices in our OBPR?
• How do we incorporate the way our institution, division/college, department, and/or program secures and allocates funds in our OBPR process? • What happens if I don’t engage in OBPR? These are just some of the questions that will need to be answered about the overall review process for each program. Again, these answers may come from institutional or college/division leadership, or they may come from a faculty committee or a faculty and student success professional task force that proposes recommendations to institutional leadership for implementation. In some cases, the answers may come from and subsequently vary by colleges/divisions and departments if their needs differ from each other. It all depends on your institutional culture. In time, these questions may subside as people see for themselves how much or how little OBPR is influencing the culture of organizational learning. Some of the good practice institutions no longer need to answer these questions, as they have demonstrated over several years how program review results have been used to improve programs. In the beginning, however, it is important to be consistent and to transparently demonstrate (commitment of resources, public support for assessment, etc.) that the responses are evident in the organization. Answering these questions in a public manner (transparently) allows what is written to serve as somewhat of an honor code as administrators and faculty members reach agreement and publish those agreements in the form of an online FAQ or memorandums of understanding. The publicly published responses to these questions can serve as a code of ethical practice and thus alleviate much of the anxiety of those engaged in OBPR as they see some of their understandable concerns worked out in a collegial manner. When you are also new in implementing or in the process of redesigning genuine institution-wide OBPR, you may not have the answers to all of these questions. That is normal. We encourage you to be forthright and allow your established committee to respond to those who are asking questions. You may get some fabulous ideas about how to answer these questions from those who are the most vocal about the questions.
Plan Short-, Mid-, and Long-Range Goals
When implementing or refining your institutional OBPR process, you can quickly become discouraged if you intend to attain the good practices in this book in your first year. While you may be able to achieve many good practices quickly, you want to do so in a manner that allows for the practices to become second nature. As Alverno College has attested, some of their good practices
are no longer required; they are, rather, part of the day-to-day routine. In other words, as Alverno College has committed to becoming a learning organization, their community members engage in these good practices as a part of their daily work, and, therefore, evidence of that practice appears without even being requested. While that is what most institutions aspire to achieve, getting there takes time and consistent reinforcement of the values of reflection and collaborative dialogue. Thus, planning your implementation strategies and evaluating how well you are transforming institutional decision-making culture may be best achieved with a series of short- and long-range goals. The importance of engaging leadership at all levels is crucial when establishing short-, mid-, and long-range goals. Many postsecondary educational organizations are experiencing frequent high-level organizational leadership transitions. This can become problematic to sustaining a culture of meaningful inquiry in a manner that provides evidence of a learning organization if there are not leaders at all levels of the organization engaged in OBPR. If there are, then an organization can sustain itself as a learning organization until the newly established high-level leadership can review the institutional inquiry processes in place and suggest improvements. As such, decide what you can accomplish in the short term. For example, some short-range goals (depending on institutional culture) may be as follows: • Identify the person or the office who will be the key contact for the OBPR process. • Form the committee that will guide the process. • Define what a learning organization means. • Define OBPR and how engagement in it can provide your organization with evidence that it is a learning organization. • Draft a document that explains the purpose of OBPR and what it will create (the shared conceptual framework). • Draft the common operational language. • Draft the guiding questions (see www.wscuc.org/content/programreview-resource-guide) and/or the documentation template (see chapter 3) for OBPR. Remember that these must be questions your organizational members care deeply about. • Identify which faculty, administrators, staff, and students have already engaged in OBPR, and create a professional development plan or one-on-one coaching team that leverages their expertise. • Identify resources (technology, data already collected, other evidencebased decision-making processes or committees) that are already in place that can inform program improvement conversations.
• Identify key leaders, and other professionals on campus who can help integrate OBPR into the use of performance indicators and other predictive analytic processes. • Identify resources that are needed to embed evidence-based decisionmaking in day-to-day work. • Deliver introductory professional development workshops on how to engage in meaningful OBPR. • Embed explanations of OBPR expectations in orientation programs for faculty, staff, and students. • Explain how you will know you are engaged in successful institutionalized OBPR. In other words, what are the outcomes of a learning organization? How will you know you are a learning organization by engaging in OBPR? • Indicate which good practice criteria (see chapter 4) your institution will use to evaluate how well it has transformed into using evidencebased decision-making to improve programs and articulate what it will look like when your institution meets those good practice criteria. • Showcase programs that have been improved because of the pervasive and systematic practice of OBPR-like processes or other reflective inquiry practices. • Plan a comprehensive faculty and staff professional development program that includes new faculty and staff orientation and adjunct and teacher assistant training programs. • Draft a plan of how OBPR will connect with other central and programmatic purposefully reflective practices for improving student learning, research, and service. • Draft a plan of how OBPR will connect with institutional performance indicators, including HAAS. • Draft a plan of how OBPR will connect with predictive analytics. • Draft a plan of how OBPR will connect with budgeting processes. Some mid-range goals (depending on institutional culture) may include: • Adapt or adopt which good practice criteria your institution will implement and use to evaluate your progress. • Detail how engaging in OBPR will provide the evidence that you are a learning organization committed to generatively improving what you create. • Illustrate how your OBPR process aligns with the performance indicators you use.
• Illustrate how your OBPR process informs your predictive analytic process. • Illustrate how your OBPR process aligns with your competencybased assessment process. • Illustrate how your OBPR process informs your budgeting process and vice versa. • Illustrate how your OBPR process informs your strategic planning process and vice versa. • Illustrate how your OBPR process informs the design of your comprehensive faculty and staff development program that includes new faculty and staff orientation and adjunct and teacher assistant training. Long-range goals may include the following • Detail how engagement in OBPR has provided the evidence that you are a learning organization committed to generatively improving what you create. • Explain how well your learning organization has transformed students’ lives through your student learning and development processes. • Demonstrate how well your learning organization has cultivated human flourishing. • Demonstrate how well your learning organization has closed achievement gaps. • Demonstrate how well your learning organization has delivered all other aspects of what the organization seeks to create such as new knowledge, community economic stimulation, community partnerships, and so on. • Illustrate how your OBPR process informs the hiring, review, and promotion of all your faculty and staff. • Illustrate how your OBPR process informs all of your organization’s core values and business processes and vice versa. Working backward from your long-range goals, you can establish several other short- or mid-range goals, such as those that will lead to your achieving good practices in OBPR and emerging as a learning organization. For example, one short- or mid-range goal may be to “deliver introductory workshops on how to engage in meaningful OBPR.” A next short- or mid-range goal that brings you closer to a long-range goal for a comprehensive faculty and staff professional development program may be that two years after starting your introductory workshops, you have a series of specific workshops on
methodologies and instruments to evaluate student learning and development that also allow you to identify achievement gaps, thus better informing performance metrics and predictive analytics. Two years later, you may add workshops for preparing reports that inform various constituents about what you are learning about student learning and development and how all the systems at the university interact to support that learning and development. Two years later, you may remove your introductory workshops in exchange for a new faculty and staff orientation program in each department.
The time frame for establishing good practices that are second nature varies among types of institutions. It varies even further based on the commitment of key leadership (at all levels) to an emergent learning organization. Most time frames for short- and long-range goals have to be adjusted due to the real-life influences that affect the work of those involved in institutionalizing OBPR (recall the ILP discussion in chapter 1). Influences such as key leadership changes, governing board member changes, budget crises, institutional or government policy changes, and other reactions to political or social media–driven demands can cause leadership to alter focus from the pursuit of excellence through reflective inquiry to survival. While these shifts in focus are understandable, the degree to which an institution allows higher education crises to hinder its ability to define who it is and evaluate itself according to the stakeholders who deliver and receive the learning will influence the degree to which the institution progresses with its reflective, collaborative, evidence-based decision-making learning organization model.
While most faculty and staff involved in day-to-day implementation of OBPR have little to no control over these higher education crises, they do have control over what the implementation plan looks like. For example, if an institution has established its short-, mid-, and long-range goals for implementing OBPR to demonstrate that it is a learning organization, and it has a plan to evaluate the extent to which those goals are being achieved, then at least faculty and staff can point leadership to where the plan has deviated from the time line. If a meta-assessment plan is being implemented, then faculty and staff can also illustrate what may be needed for the institution to move ahead in its purposeful self-reflection process. In this manner, the institution is "practicing what it preaches" in that it is evaluating its own achievement of meaningful, systematic OBPR so that it can emerge as or sustain itself as a learning organization. When planning your short-, mid-, and long-range goals, consider the following questions:
• Who will define what it means to be a learning organization? And when?
• Who will define what OBPR means for the institution or college/ division in pursuit of becoming a learning organization? And when? • Who will identify the motivations for engaging in OBPR, such as advancing equity or meeting accreditation or state/federal requirements? And when? • Who will draft the purpose for and what is hoped to be created from engaging in OBPR? And when? • Who will draft the common operational language for OBPR? And when? • Who will draft the guiding questions for OBPR? And when? • Who will determine the components of the OBPR documentation template? And when? • Once these drafts and templates are completed, to whom do they go for review? For approval? For implementation? For funding? • Who will determine how departments/programs engage in and/or share their OBPR results so that systems thinking is made evident? • Who will ensure that OBPR results are linked to performance indicators? • Who will ensure that OBPR results are identifying equity gaps or the closing of achievement gaps? • Who will determine how OBPR fits into the organizational budgeting process? And when? • Who will determine how OBPR fits into the strategic planning, predictive analytic process, and/or other decision-making processes such as the hiring, review, and promotion of faculty and staff? And when? • Who will determine how we leverage technology to aid in our collaborative communication with one another and document how we are emerging as a learning organization? Additional questions to consider are posed in earlier chapters, previously in this chapter, and also throughout the rest of this chapter.
Identify Existing Resources and Processes and New Resources
When starting anew with OBPR or redesigning your current process, it is extremely helpful to identify what your institution has already done regarding evaluation, assessment, or planning. In doing so, you often identify those who have already engaged in the process and can serve as your test case examples of success (Maki, 2004). Keep in mind that because there may not be a standard definition for assessment, one person may call
outcomes-based assessment planning, while another acknowledges it as evaluation. Another may be thinking that outcomes are synonymous with institutional performance indicators. Therefore, it is important to clarify definitions before inviting volunteers to share their examples. In addition, these faculty and staff who are already engaged in outcomes-based assessment can assist in educating others, if they so choose, in how to engage in meaningful OBPR. It also helps you identify colleagues who have already defined a language for practical use. Inviting these faculty and staff to draft the common operational language for the entire organization to reference will ensure that those who are already doing assessment feel ownership in the creation or refinement of the OBPR process. Beyond identifying faculty and staff who are already engaged in outcomes-based assessment and leveraging their expertise, you also learn about the processes in which they are involved and the resources they are already using. This exploration is a great way to identify who knows what and who is getting data from where. Such information can assist others in their quest for evidence of how well their program is accomplishing what they say it is. When many programs are beginning their reflective inquiry of how well they are creating what they intended to, it is particularly valuable for them to know where they can access data that most institutions have readily available. Yet, many institutions have not had the time or structure to organize data and make it easily accessible to faculty and staff. Good practice institutions emphasize how important it is for institutionally collected data to be identified and made readily available. Then faculty and staff can spend more time exploring questions and searching for evidence of student learning and development that have not traditionally been so readily available. This is of particular importance when demonstrating much of the learning and development outcomes data that previously may not have been systematically considered, such as learning disposition data for which employers and graduate schools are demanding evidence (Kuh et al., 2018). Often when administrators try to find out who is already engaged in these processes, they disseminate a survey (an indirect measure). That may be helpful and productive, but it may be just as helpful to inquire informally about which people are doing assessment and ask them to submit plans and reports of what they are learning from their assessment work (a direct measure). Doing this provides you with plenty of information about how they are conceptualizing outcomes-based assessment and with an idea of the language they are using as well as what they are learning about their program. It also means that they do not have to take time to complete another report or survey—rather, they can just turn in what they have been working on for the last few months or years. While it takes more of your time to compile such
information, you have already begun to collect artifacts (direct evidence) that demonstrate which programs are or are not engaged in meaningful reflection, and you have examples of how results are being used to improve student learning and development, research, and service. Gathering these artifacts can also help you identify the types of outcomes that faculty and staff value. Identifying shared outcomes can lead to discussions about shared institutional learning principles. In addition, identifying the methods faculty and staff use to evaluate outcomes can assist you in identifying internal experts to design or modify existing rubrics or reflective learning portfolios. They may also be able to leverage social media data collection tools and other ways of discovering organizational learning. These internal experts may be willing to offer their work as examples to others who aspire to use those same methods or tools. Questions to ask include the following: • Who is engaged in assessment/planning/evaluation/program review/ accreditation/institutional effectiveness/quality assurance/quality enhancement/evidence-based decision-making/continuous improvement/ strategic planning/predictive analytics/reflective student learning portfolios/gathering student engagement data in their programs at your institution? • What has the institution already done with the evaluation of general education/your academic program/academic and student support services programs/your course/your activity/workshop/service/intervention? • What data can you get from your state system office/state coordinating board/institutional research office/registrar’s office/ enrollment services or management office/information system/ budget or finance office/research office/public relations office/ strategic planning office? • What national surveys (e.g., NSSE, Cooperative Institutional Research Program [CIRP], Your First College Year [YFCY]; see Appendix C for more ideas) have already been completed and where are the data located? • What resources are already available to assist with this process (data, assessment tools, people, technology, workshops, tutorials, reference books, statisticians, psychologists, sociologists, educators, consultants, etc.)? Once you have identified existing resources, you can move on to identifying the resources you may need to implement systematic and pervasive OBPR. Keep in mind that most of the resources many of the good practice institutions
enjoy now were not present when they first began their processes. So, it may be wise to incorporate the acquisition of resources to support OBPR into your strategic plan, predictive analytic process, and other compliance reporting if applicable. By planning which resources you will need to support ongoing OBPR, you will also be able to identify opportunities to reallocate resources within the OBPR process itself. For example, as faculty and administrators learn more about outcomes-based assessment, you may be able to reallocate centralized training materials into stipends for decentralized “faculty fellows” or “data champions.” The faculty fellows or data champions, representing each college/division, can coordinate creation of professional development materials, post them on the Web, and offer workshops as well as one-on-one consultation services to their own colleges/divisions. Furthermore, using faculty fellows allows materials to be customized for specific college/division needs, such as meeting professional accreditation requirements or specific college/division goals for learning or development, research, or service. It also allows those colleges/divisions who have been doing this for a while to refine their reflective inquiry processes in accordance to their emerging needs and values—a learning organization value. Some sample questions to ask when determining what new resources may be needed include the following: • Do you have adequate reference materials on what it means to become a learning organization and engage in reflective inquiry through OBPR? • Do you need to organize data from your existing transactional information systems as well as data collection systems (student use of services and facilities, campus surveys, data clearinghouse sources) into a user-friendly, Web-query central, multidimensional database so faculty and administrators can access data they need for program review from their desktop? This will also inform meaningful dialogue about whether your predictive analytic process can be refined, as well as potentially create more meaningful and useful performance indicators along with the creation of new indices designed from a multitude of data points. • Do you need templates created for departmental program review websites? • Do you need scanning software? • Do you need survey software? • Do you need social media technologists to mine data? • Do you have a centralized place to store electronic or hard copies of OBPR plans and reports? • Do you need to hire a webmaster to organize the transparency of your OBPR plans and results on the Internet?
• Do you need tools to assist faculty and staff with text mining of qualitatively collected data or data mining? • Do you need experts to train faculty and staff on how to mine the data gathered from your paper or electronic reflective student portfolio process or student engagement transcripts? • Do you need funds to finance retreats where faculty and staff can collaboratively speak across systems discussing improvements or questions that have been informed by their OBPR efforts? • Do you need institutional competitive grants or a process to award assigned time to faculty and administrators who are designing rubrics and portfolios or who are collaboratively engaged in evidence-based dialogue across systems? • Do you need a retreat to discern your institutional priorities or to establish a process of how to determine priorities for funding when multiple programs demonstrate multiple needs for improvements that have been informed by their OBPR processes? • Do you have an institutional reallocation of resources process to fund programs for proposed improvements? • Do you need institutional competitive grants to send faculty and administrators to assessment conferences where they can learn about new practices while showcasing their own accomplishments? • Do you need more faculty and administrators to help with one-onone consulting regarding questions raised about OBPR? • Do you need to hire statisticians to assist with qualitative or quantitative data analysis? • Do you need to contract outside readers to review portfolios? Do you need funds to pay them or can you get community partners and alumni to volunteer? • Do you need to hire administrative assistants to help with documentation? • Do you need to hire research assistants to conduct the metaassessment of the OBPR process? • Do you need to provide higher level administrators with retreat time or specific professional development so they can learn how to responsibly and ethically use the data generated from the OBPR process and discern how they connect with predictive analytic and performance indicator data? While this list of questions is not exhaustive, please note that it does not mean that your institution needs to invest in all of this internal support for OBPR to become sustainable. Again, each institution is unique in its
culture and, therefore, unique in how it implements OBPR—how it intends to demonstrate that it is a learning organization. This list will help you think about what you may need, as an organization, to engage in this meaningful, evidence-based decision-making process so that your learning organization emerges or becomes sustained.
Establish a Collaborative Dialogue and Communication Plan
As mentioned previously, cross-system dialogue and communication are increasingly challenged by emerging and perhaps increasing crisis management on campuses. Daily, I am reminded that texting, social media postings, and e-mails do not constitute dialogue, although they may be a way of communicating information. How you go about dialoguing across various organizational constituents, coming to an understanding of various perspectives, and then reaching an agreement on a reflective inquiry approach is one process. Communicating the value and importance of engaging in reflective, evidence-based decision-making processes and how to go about doing that is yet another process. Still another communication and dialogue process involves discerning the meaning of results and how they will inform improvements, performance indicators, and predictive analytics. All three depend, once again, on your institutional culture. The following questions may help you discern the most productive means by which to dialogue among all stakeholders and then determine the process of information dissemination/communication with all stakeholders:
Dialogue
• Who (considering leadership at all levels of the organization) needs to come together to discern the responses to the previously proposed questions in this chapter and in the previous chapters?
• How will the time for that dialogue be prioritized amid the constant crisis of higher education?
• How will ongoing dialogue, essential in making meaningful connections between OBPR results and performance indicator, predictive analytic, and strategic planning processes, be prioritized?
• How will ongoing dialogue inform other questions that need to be raised about institutional priorities for decision-making and the importance of HAAS?
• How will ongoing dialogue take place across the various systems to ensure evidence of a learning organization?
Communication
• Who will articulate learning organization expectations?
• Who will articulate expectations of the OBPR process?
• How will those expectations be communicated?
• How will the learning organization community be informed about the learning organization plan?
• How will the learning organization community be informed about the resources that are available to support it in establishing or sustaining a learning organization via OBPR?
• How will the learning organization community members, who were not included in the original dialogue, have an opportunity to comment about the proposed plan?
• Does each college/division/department/program have its own unique communication structure? If so, how can you ensure that the structure is tapped into so as to provide each community member with pertinent information and to bring important feedback to the decision-making bodies?
• Is it possible to have representation from each college/division or department/program on the OBPR committee (the committee charged with reviewing the quality of process) so the representative can serve as conduits for information dissemination to and from the committee and to and from the program/department/college/division that they represent?
• Will faculty and administrators check a program review website for updated information, or is it best to get that information out in a newsletter format, on an electronic mailing list, in a memorandum, texts, social media, or other routes?
• Which university/college standing governance committees should be informed of OBPR updates and how often?
• Can OBPR committee members present evidence of what the institution is learning about program quality to departments at faculty, staff, student, or standing governance committee meetings?
• Is it appropriate to use the campus social media to advertise what community members are learning about the quality of their programs so as to disseminate this learning organization's values and evidence that supports or challenges those values?
• How can the university public relations team help to communicate within the learning organization and to external stakeholders what the organization is learning about program quality to align this information with expected performance indicator achievement?
• How can the university public relations team help to communicate within the learning organization and to external stakeholders what the organization is learning about program quality to demonstrate this learning organization’s values and the evidence that supports or challenges those values? • How can the university or college outreach, enrollment management, alumni, and development professionals leverage what the university is learning about program quality in their materials? • How can program quality information be used to recruit and retain faculty, administrators, and staff professionals?
Establishing Criteria for Prioritizing Resources
The following questions are provided to help guide your organizational discussions for allocating resources with the use of evidence generated from OBPR and based on research from Bresciani, Gillig, Tucker, Weiner, and McCully (2014). Remember that in this discussion, faculty and staff time is considered a resource, as are use of space, equipment, community volunteers, alumni networks, and actual dollars received through various avenues based on how your organization is designed to receive revenues.
Foundational Questions
These sets of questions exist in order to identify whether you will be able to structurally support using OBPR to prioritize resource allocations. Keep in mind that resource allocation decisions are made at various levels of the organization (course, program, department, college, institution, system, state, and federal). So, this discussion can be facilitated at any level within your organization.
• Is your organizational leadership on whatever level the resource decision is based—functional area, program, department, college or institution—clear on what it values?
• Are those values prioritized?
• Do you have goals and/or strategic initiatives articulated for the values that you have prioritized?
• Are program and service leadership able to align their outcomes to the goals and strategic initiatives that your organizational leadership prioritizes?
• Do your budgeting process and practice align with what your organizational leaders value? • Does your resource allocation or reallocation process and practice align with what your organizational leadership values?
Implementation Questions
Once you have determined to what extent the foundation for allocating resources using evidence has been identified, the following questions can be used to implement the practice of (a) allocating resources to programs and services to maintain their quality or improve their quality, (b) choosing to eliminate programs and services, or (c) choosing to initiate new programs and services. We greatly respect that budgeting processes vary by institution, system, and so on. We also respect that your organizational budgeting cycle time line may be vastly different from your OBPR discussion of results time line and therefore may significantly influence your ability to allocate resources to improve those results. In other words, investments in improvements may be dependent on those budgeting processes and subsequent time lines of those processes. The invitation here is to get clear on organizational values and bring into alignment the funding of improvements based on evidence regardless of whether you live in a performance-based funding reality or an enrollment-based funding reality or something else. The following questions simply invite reflection to guide alignment of those processes to the extent that you are able—all the while understanding the opportunities and challenges about the way all kinds of resources are needed to improve your results.
• Which programs and services have outcomes that align directly with your organization's mission? vision? strategic initiatives for the present and future? values? priorities? goals? performance indicators, including HAAS?
• Of the programs and services that align with your organizational values, which have gathered evidence to demonstrate that maintaining resources at the current level will potentially improve the quality of expected outcomes? decrease the quality of expected outcomes? maintain the current level of quality of expected outcomes?
• Of the programs and services that align with your organizational values and from which you have gathered evidence to demonstrate that maintaining resources at the current level will potentially improve the quality of expected outcomes, can you manage to maintain their current level of funding? If yes, then do so. If no, prioritize them to determine which ones may better fit with your organizational vision, preparing you for success in the present and the future, and allocate funds (at the same level or at an increased level or at a decreased level) according to their level of prioritization and your overall budget and capacity to do so. • Of the programs and services that align with your organizational values and from which you have gathered evidence to demonstrate that maintaining resources at the current level will potentially decrease the quality of expected outcomes, can you manage to increase their current level of funding? If yes, then do so. If no, prioritize them to determine which ones may better fit with your organizational vision, preparing you for success in the present and the future, and allocate funds (at the same level or at an increased level or at a decreased level) according to their level of prioritization and your overall budget and capacity to do so. • Of the programs and services that align with your organizational values and from which you have gathered evidence to demonstrate that maintaining resources at the current level will potentially maintain the quality of expected outcomes, can you manage to maintain their current level of funding? If yes, then do so. If no, prioritize them to determine which ones may better fit with your organizational vision, preparing you for success in the present and the future, and allocate funds (at the same level or at an increased level or at a decreased level) according to their level of prioritization and your overall budget and capacity to do so. • Group all the programs and services together that are to receive the same allocation of funding as the previous year; what is that total? • Group all the programs and services together that are to receive an increase in funding from the previous year; what is that total? • Group all the programs and services together that are to receive a decrease in allocation of funding from the previous year; what is that total?
• Can your organizational budget cover the totals of the resources identified in the three previous questions? If not, move the lower prioritized programs for each category or for one category—depending on what your organization seeks to accomplish—into the category of programs and services that will be discussed as follows.
• If there are still resources in the budget, determine which of the remaining programs and services that align with organizational values and with present and future strategic initiatives you will keep, or which are necessary for the basic functioning of the organization (and for which it has been determined that outsourcing is not possible). Determine whether decreased quality of outcomes would be acceptable to the organizational leadership and constituents if those programs and services were cut or if funding was decreased. Then, make those very tough decisions with a brief explanation of why the decisions were made.
While it is understood that these questions are simplistically presented and that many factors influence decision-making, the point of presenting them, beyond the evidence itself, is to begin to implement transparent, evidence-based allocation of resources on your own campus. With clearly articulated values and strategic priorities in which your organization is willing to invest, and a prioritized manner in which they are funded, constituents of your organization can collaboratively dialogue to make better decisions about where to invest their own energy and time. That empowers everyone at a time when the focus on "lack of resources" feels as if it has stripped organizations of all their humanity and compassion.
Finally, we recognize that we have addressed only the allocation of resources that align with organizational values and with present and future organizational strategic initiatives. Keep in mind that this resource allocation process can occur on every level of the organization and, therefore, some priorities may differ at various levels. As long as each level of the organization has authority for the manner in which it determines its values and priorities and is given resources to allocate, then this process means that values and strategic priorities do not have to align all the way up the hierarchical organizational ladder unless the resource allocation and budgeting processes are designed to facilitate and expect such alignment.
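For campuses that already track these figures in a spreadsheet or a short script, the grouping-and-totaling questions above can be made concrete in a few lines of code. The sketch below is purely illustrative and is not part of the OBPR process described in this book: the program names, decision categories, dollar figures, and priority ranks are hypothetical placeholders, and the logic simply mirrors the maintain/increase/decrease groupings and the budget-coverage check posed in the questions.

from dataclasses import dataclass

@dataclass
class ProgramRequest:
    name: str                   # hypothetical program or service name
    aligns_with_values: bool    # outcomes align with mission, values, and strategic initiatives
    decision: str               # "maintain", "increase", or "decrease" relative to last year
    proposed_allocation: float  # proposed dollars for the coming year
    priority_rank: int          # 1 = highest organizational priority

def tally_by_decision(requests):
    """Sum proposed allocations for each funding decision category (values-aligned programs only)."""
    totals = {"maintain": 0.0, "increase": 0.0, "decrease": 0.0}
    for r in requests:
        if r.aligns_with_values:
            totals[r.decision] += r.proposed_allocation
    return totals

def within_budget(requests, budget):
    """Check whether the combined category totals can be covered by the available budget."""
    return sum(tally_by_decision(requests).values()) <= budget

def review_lowest_priority_first(requests):
    """If the totals exceed the budget, list values-aligned programs from lowest to highest priority for further discussion."""
    aligned = [r for r in requests if r.aligns_with_values]
    return sorted(aligned, key=lambda r: r.priority_rank, reverse=True)

# Example with made-up figures only:
requests = [
    ProgramRequest("First-Year Advising", True, "increase", 120_000, priority_rank=1),
    ProgramRequest("Writing Center", True, "maintain", 90_000, priority_rank=2),
    ProgramRequest("Legacy Print Newsletter", False, "decrease", 15_000, priority_rank=9),
]
print(tally_by_decision(requests))       # {'maintain': 90000.0, 'increase': 120000.0, 'decrease': 0.0}
print(within_budget(requests, 200_000))  # False: the hypothetical totals exceed the hypothetical budget

However the figures are tallied, the intent is the same as in the questions above: make the category totals, the priority order, and the budget ceiling visible so that the allocation dialogue remains transparent and evidence based.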
Genuinely and Compassionately Discuss Implementation Barriers and Strategies to Overcome Them
Barriers are barriers, whether they are real or perceived. Chapter 6 addresses typical barriers and good practice strategies to overcome them.
Move Forward With Flexibility
As presented in chapter 6, fear is one of the primary barriers to faculty and administrative involvement in OBPR. Often, the only way to combat fear (real or perceived) is to engage in the activity you fear. Rock climbers who take up climbing to overcome their fear of heights do indeed overcome that fear by learning how to scale mountains safely. By using equipment that was designed for the task, by learning and practicing the skills, and by paying attention and collaborating with others (a requirement of mountain climbing), their confidence grows and their sense of fear (real and perceived) is replaced with a healthy respect for what they are undertaking and what they can collaboratively accomplish with the proper equipment. The same is true for scuba divers. Divers overcome fear of drowning by learning to trust in the equipment and their diving buddy. However, in both cases, divers and climbers would never have been able to test their trust in their equipment and their climbing partner or dive buddy if they never even tried to climb or dive. No first climb is ever perfect, and no first scuba diving attempt is challenge-free.
The same is true for engaging in OBPR. Faculty and staff need to learn how to engage in meaningful and systematic reflective inquiry, learn about the appropriate tools and resources to aid them, and learn to trust other faculty and administrators to use the data appropriately. In this manner, over time, faculty and staff can improve their own OBPR processes, just as the climber and the scuba diver improve. While divers have buddies who may disappoint them, and faculty and staff may be frustrated by their colleagues, the decision not to engage in a collaborative manner typically means you cannot sustain improvement in the program, and it certainly means that achieving evidence that you are a learning organization will be out of reach. Responsible divers and rock climbers do not go it alone, even though they may be wary of their buddy or partner. Therefore, after faculty and staff learn how to engage in OBPR, their next step is literally to dive in and implement it.
Keep in mind that if climbing and diving equipment go unchecked, the safety of the climber and the diver is at risk. Similarly, if faculty and staff jump in and do not check on the quality of their review processes, they may see the very fears that originally kept them from becoming engaged materialize, such as poor process and inappropriate methods resulting in "bad" data and poor decision-making that creates harm for humans. Therefore, continued improvement in the OBPR process must be monitored.
While we have presented several tips and advice on implementing good practices in OBPR, our most important advice is to remain flexible. The best
The best of good practices can go awry if they are implemented as a checklist, with too much rigidity and without regard for human flourishing. In addition, a key part of becoming a learning organization is to leverage faculty's and administrators' innate intellectual curiosity. Most higher education professionals do want to understand how to improve their programs, research, service, and student learning and development; they want to use their own discipline-specific inquiry methods to do so. Leverage that knowledge and energy, and you will find that many implementation barriers will fade.

If you take away one piece of information from this book, let it be to remain flexible, kind, compassionate, and patient in the systematic implementation process of becoming a learning organization. Pay close attention to how varying disciplines may need different schedules, report formats, or other elements of the process. Keep in mind that the ultimate goal is to engage faculty and staff in purposeful reflection, planning, and evaluation to improve their programs, research, service, and student learning so that you can provide evidence that you are indeed a learning organization, one that also cultivates human flourishing and provides HAAS. A list of helpful implementation reminders follows:

• Be flexible.
• Be kind and compassionate.
• Collaborate.
• Dialogue.
• Listen, observe, reflect.
• Ask questions you care about.
• Embrace the ambiguity of not knowing all the answers.
• Embrace the complexity of assessing and improving learning and development in a messy environment where humans are cultivating other humans' learning and development.
• Define OBPR and determine why your institution is engaging in it.
• Articulate the purpose for OBPR and what your institution intends to create from engaging in it (a shared conceptual framework).
• Articulate a common operational language for OBPR.
• Clearly and transparently communicate expectations for engaging in OBPR.
• Clearly define how the results will be used.
• Go ahead and write down every program outcome, but do not try to assess every program outcome every year (Bresciani et al., 2004). You may want to start with course outcomes and build program outcomes from those. You can also start with institutional, college/division, or departmental goals and ask each program or course to align its outcomes to those goals. Then, move on to implementing the entire OBPR cycle one outcome at a time, making everything systematic; in other words, work on forming habits of reflective inquiry.
• Be sure that faculty/administrators understand the purpose of OBPR. It is not OBPR for OBPR's sake. Its goal is to reflect on the end result of the doing so that the doing can be improved; it is about providing evidence of a learning organization.
• Be sure that faculty/administrators value what is being measured.
• Be sure that faculty/administrators have ownership of the process, with leaders at all levels engaged.
• Respect varying disciplines' human desire for academic autonomy.
• Recruit respected and influential faculty/administrators to lead the process.
• Remind each other of the benefits of engaging in a meaningful reflective inquiry process and drop any practice that detracts from those benefits.
• Share with each other examples of what works well and incorporate those practices into professional development opportunities and orientation programs.
• Celebrate what you are learning about your organization through OBPR.
• Advertise what you have learned from OBPR and the improvement decisions you have made.
• Gently challenge faculty and staff to establish another good practice once they have mastered one good practice.
• Incorporate students in all OBPR if your program is ready and your institutional culture allows.
• Compassionately acknowledge and address barriers to OBPR (see chapter 6).
• Pay attention to varying program demands and resources.
• Understand your role as an OBPR committee member.
• Understand your role as the OBPR chair or professional responsible for the process.
• Have the president and other organizational leaders publicly demonstrate their gratitude to OBPR participants in a meaningful way.
• Tie allocation and reallocation of resources to decisions and recommendations to improve student learning and development that are based on OBPR results.
• Have OBPR inform strategic planning and vice versa.
• Have OBPR inform the selection of performance indicators, including HAAS.
• Have OBPR inform your predictive analytic processes.
• Be kind and compassionate.
Key Learning Points

1. Ask questions.
2. Listen to the responses.
3. Ask more questions based on the responses.
4. Repeat steps 2 and 3 until understanding is reached.
5. Take action.
More Questions to Consider

1. What is missing from this list of questions?
Note

1. Compliance reporting is when an organization is required to produce particular data in response to a specific state or federal mandate. Often, these data are reported and there is no organizational reflective dialogue on the systems that served to create them. It is simply a compliance report. Sometimes, institutional leaders aren't even sure who examines the data or how they will be used at the level to which they are reported. Good practice institutions do not confuse data gathered for compliance reporting (even though they may use those data in their OBPR processes) with the reflective practice of dialoguing about the data and using them to inform improvement decisions. This is not to imply that little thought goes into how best to collect quality data for compliance reporting; a great deal does. It is just to say that the processes may be different.
6

OVERCOMING BARRIERS TO IMPLEMENTING GOOD PRACTICE OUTCOMES-BASED ASSESSMENT PROGRAM REVIEW

Culture makes people understand each other better. And if they understand each other better in their soul, it is easier to overcome the economic and political barriers. But first they have to understand that their neighbour is, in the end, just like them, with similar problems, and similar questions. (Paulo Coelho, 2014, p. 34)
Many faculty and administrators encounter resistance when they implement OBPR. This resistance is normal; however, how you address the points of resistance, or barriers, may affect the genuineness and productivity of your OBPR process, positively or negatively. The following barriers to assessment are the most typical. At some point, all of the good practice program review institutions had to address these barriers; some are still addressing them as new faculty and administrators cycle into leadership positions around their institutions, or as trustees, legislators, coordinating board members, alumni, parents, and students raise new policies and challenges.

We mentioned in chapter 5 how fear of the unknown creates a need to explain what OBPR is and why the organization is engaging in it. Trust in how data may be used or misused can be addressed only through transparent and consistent use. Establishing clear roles and responsibilities as well as documentation requirements and time lines will help manage expectations for the OBPR process. All of these practices will alleviate many barriers.

In 2006, the five most typical reported barriers to OBPR implementation included the following:
1. Limited time to conduct assessment
2. Limited resources to put toward assessment
3. Limited understanding of or expertise in assessment
4. Perceived benefits of assessment are not substantial enough to engage in assessment
5. Not wanting to bother the students with completing several surveys
The FLIPCurric (flipcurric.edu.au; Scott, 2017) research discovered these barriers:

1. Unclear expectations (which can be clarified by using the assessment-focused unit learning guides)
2. Unclear on how each unit of study and its assessment fits into the bigger picture of where their degree program is leading
3. Inadequate or unfocused feedback
4. Different assessment loads between units of study
5. Assessment tasks all due on the same day
6. Overassessment of basic skills and knowledge out of context
7. Group assessment and so-called free-loaders
In addition, where faculty were concerned, the following barriers were reported:

1. Processes used to assure the quality of assessment don't add value
2. Time-consuming meetings without a productive outcome for students
3. Absence of a shared language, overall framework, and clear accountabilities
4. Unavailability of good practice models, exemplars, and "lonely planet" guides
5. Unaligned services and rewards for improvements in quality assurance for assessment
6. Limited, timely tracking and improvement data
7. Inadequate opportunities to benchmark for improvement
8. Unclear leadership and accountabilities
9. Limited access to peer support
10. Part-time staff are not always engaged or "in the loop"
While faculty and administrators reported many such reasons, further conversations revealed that the barriers could be summarized into two primary categories:

1. lack of understanding of the value of OBPR among those implementing it as well as among the top-level leaders who are being asked to support it and use it to make resource reallocation decisions or decisions that connect OBPR to accountability conversations, and
2. lack of resources to engage in meaningful and manageable assessment, which includes time to learn about and to engage in reflective dialogue about what assessment results mean and how they can be acted on. In addition, there is a lack of institutional infrastructure to support OBPR and have it inform decisions related to competency-based assessment, predictive analytics, strategic planning, and performance indicators.

The barriers that coexist within these two categories are many (as previously reported) and have more recently been characterized as a list resembling the following:

• limited time to collaboratively conduct OBPR and then reflect on and dialogue about the usefulness of the results,
• limited evidence, reflected in hiring criteria and performance evaluations, of the use of OBPR results to improve,
• limited transparent evidence connecting OBPR-informed recommendations to the actual resources allocated or reallocated to make improvements,
• limited engagement of faculty and staff in professional development opportunities provided to learn about how to engage in meaningful outcomes-based assessment,
• limited understanding of the perceived benefits of documenting assessment results for use by higher level decision makers to improve faculty, staff, and students' day-to-day operations,
• limited evidence of how competency-based practices such as ePortfolios are connected to OBPR,
• limited evidence of how OBPR results inform predictive analytics conversations and the use of performance metrics for decision-making, and
• limited evidence of how OBPR results inform strategic planning and vice versa.

In this conversation, it is difficult to differentiate between what serves as a historically, institutionally acceptable excuse for not engaging in OBPR (e.g., "Our presidential cabinet keeps changing, so why should I even bother?") and what may be a legitimate barrier, such as limited time to accomplish a great deal of high-quality work and produce the evidence that it is of high quality.
What seems apparent is that while postsecondary educational leadership at all levels has instilled a culture that celebrates and rewards time on task to reflect on and dialogue about research, creative endeavors, and grant proposals that are being proposed, are in process, or have been documented (published, performed, or funded), it is less likely to invest in the same kinds of activities to cultivate deeper reflective engagement in discovering how the organization itself is learning, particularly when it comes to advancing student learning and development. Even among institutions whose sole focus is teaching, faculty and administrators may be more focused on the micro level of teaching and less so on how the organization is learning about what all that teaching is creating. For any of this process to work, the organization must determine that it wants to become a learning organization in all of its aspects because learning about itself is what it desires to do. No reflective process for continuous improvement can systematically transform an organization if leadership at all levels doesn't commit the time to engage fully in it. The game of meeting accreditation standards may continue, regardless of whether your institution has top-level leadership coming and going, unless leaders at all levels decide they will invest in self-reflection on how their organization learns.

In 2006, when the first edition of this book, Outcomes-Based Academic and Co-Curricular Program Review: A Compilation of Institutional Good Practices, was published, I had served primarily in administrative roles in which it was my job, at two different types of institutions, to coordinate the OBPR process. Now, as I write this edition of the book, I have served primarily as a faculty member at one institution. I jokingly tell my administrative colleagues at San Diego State University that I have become the faculty member by whom I was challenged as an administrator. Balancing the desire to coordinate a good-quality OBPR process for the degree programs in which I serve as a faculty member with preparing, delivering, and refining the classes I teach; publishing refereed journal articles; seeking grant funding; advising students; recruiting students; serving on a number of program, departmental, university, and system-wide committees; and staying engaged in professional communities while actively furthering my own professional development and well-being is challenging. As such, I have observed myself engage in institutionally accepted excuses while also feeling that I had to defend the legitimate reasons that my OBPR reports were turned in late or did not represent the quality I know they should. And during the previous presidential cabinet change, I didn't turn in any reports, not because I didn't care, but because no one asked for them. As a faculty leader, I had become part of the problem, and it had nothing to do with our interim presidential leadership cabinet changes or the one immediately forthcoming.
I never questioned whether I needed to continue to commit time to engaging in research during all of these cabinet changes, so why did I question my commitment to documenting and collaboratively dialoguing around how to improve student learning and development? My turnaround has everything to do with re-engaging in this research and, through self-reflection, realizing how I had become the faculty member I previously complained about.

As institutional leaders at all levels instill OBPR practices to demonstrate they are learning organizations, the very real barriers must be compassionately addressed. Regardless of how many times you have heard or given excuses for why OBPR is not being done, each barrier that a faculty member or administrator communicates is real, even though the institution may have already identified a strategy to address it. If your institutional leadership has not already identified a strategy to address the barrier, then it must do so. If your organizational leadership has already identified a strategy and you feel that strategy has been communicated, then you may have to repackage the strategy to reveal the solution, or you may need more individualized strategies to assist the faculty or administrator to move forward. (We must embody for ourselves the same kinds of diverse learning opportunities that we need to provide our students, so that all can be successful.) So, as a reminder, treat every barrier as if you had never heard it before.

There are a lot of solutions, and some of them may just need to be short-term, such as the time I needed to request administrative support to scan archived reflective student portfolios into our new assessment-management software (which is now no longer in use) so I could demonstrate a history of program review. I made the request, and the institutional leadership was able to accommodate it. You would have thought I had just been handed a million dollars. It felt so good to be heard and positively rewarded. So, you may just need to be very creative and flexible, and refine already-present solutions for the individual or program raising the concerns. Many of them will be short-term solutions, thus emphasizing the importance of the dynamic flexibility that embodies a learning organization.

Learning to engage in meaningful OBPR, the process of becoming a learning organization, is an educational process for individual faculty and administrative members as well as for the institution as a whole. Paying attention to what and how faculty and staff are learning as they improve their programs can help you formulate and refine your institutional expectations. This is an organic process. While expectations and structure need to be provided, they must be provided in a manner that balances structure and flexibility. The harmony such a balance creates is intended to diminish faculty and staff anxiety surrounding institutional expectations while allowing faculty and staff the creativity and ingenuity to engage in learning more about what they do well and what they need to improve. These, after all, are characteristics of a dynamic learning organization.
Flexibility in resolving barriers is not the only important criterion for moving faculty and staff forward; the ability to understand the particular culture and listen to faculty and administrators is extremely important. As mentioned, a solution for one department may simply not work for another. So, while sometimes it may be about repackaging a strategy, other times it means recommending a completely different solution to the program.

For example, at Texas A&M University, faculty and researchers did not want to hear about OBPR, so the faculty and administrators charged with moving OBPR forward repackaged it and called it evidence-based decision-making. However, even with the repackaging, one group of prestigious researchers was still very resistant to the idea. For them, a different solution was needed. An example of how to resolve this type of resistance was described in the keynote address given by a well-known and well-respected member of the National Science Foundation (NSF) at Texas A&M's annual assessment conference. The address focused on the integration of research and teaching. The NSF member drew from examples in funded grant research to illustrate the impact that new knowledge had in the classroom. Furthermore, the presentation illustrated how undergraduates could be engaged by discovering new knowledge in the classroom and through structured undergraduate research initiatives. Helping faculty see the connection between outcomes-based assessment in the classroom and research did not happen overnight, but this strategy, designed by one of the very well respected research faculty members who served on the OBPR committee, opened a new door.

The primary point here is that as you read through these barriers and the proposed solutions to them, keep in mind the saying we have espoused throughout this book, "No one size fits all"; each barrier experienced is unique, as is each strategy to address it. This chapter seeks to generate ideas and to show you that barriers are normal and not to be feared. Further, if you purposefully implement strategies to answer the questions posed in the previous chapters, and do so in a way that leverages your unique institutional culture while holding in awareness that this is all about emerging as a learning organization, then many of the solutions for addressing the barriers we pose in this chapter will be readily available because you are clear about what you intend to create with this process.
Lack of Understanding of Assessment

Addressing the lack of understanding of the value and importance of OBPR often means first addressing the "fear" that comes from being asked to do something within a certain time frame while knowing little or nothing about how to do it.
Often the solution to this fear is to offer an assurance of support in the form of resource allocations, including time for professional development, dialogue, and communication. Providing professional development about the unknown typically reduces one's fears. Even if new fears emerge as a result of the education, a person is better informed and can address these new fears through conversations. Keep in mind that you are working with professionals who are experts in their respective areas. Asking them to learn something very new and apply it to their area of expertise within a specific time frame, particularly when they already have very full plates, may be very intimidating and stressful. Sometimes we notice that professional ego may feel at risk, and professionals may appear resistant simply because they fear failing to "get it right." As previously mentioned, time to engage in the professional development opportunity may be challenging to find, so consider asking department chairs, heads, and directors for some of their already preplanned meeting time to deliver the professional development experience, as many of the good practice institutions do.

As we mentioned earlier, there is a delicate balance between high-level organizational leaders' expectations that OBPR be completed and faculty and staff's beliefs that program review should be organic, genuine, meaningful, and manageable. Sometimes, faculty and staff expect OBPR to occur when higher level leadership would rather just use predictive analytics. Achieving a balance between meaningful reflective dialogue about evidence and the stress of providing data for transparent accountability purposes means that everyone has to know up front what OBPR means. Thus, generating a shared purpose and a common operational language may be an important first step in reducing fear of the process (see chapters 2 and 3). If you have missed this first step and are off and running, you may have already articulated what OBPR means, and all you need to do is transmit this meaning to the university community.

Consider the collaborative approach that the Student Affairs Division at Oregon State University took once the division leadership committed to engaging in outcomes-based assessment. When Oregon State University first began to engage in outcomes-based assessment, they noticed that many people felt hesitant about and threatened by assessment. They thought that if results did not meet expected levels of performance, consequences would include punitive action and, ultimately, job loss. Thus, it was imperative from the beginning that Oregon State University leadership communicate the primary purpose of assessment and emphasize that the subsequent results would be used for program/service improvement. They made a commitment to continue to deliver this message in a variety of ways. They also kept all of their meetings open and invited anyone who was concerned about what was actually going on to the table to dialogue. Their work and what it created was transparent to anyone within the Oregon State University community.
Sometimes faculty and staff need to understand how the results are going to be used even before they understand what assessment is all about. However, before answering "How will the results be used?" in public, it may be wise to ensure that the person who can answer that question is well informed of the intended purpose of the process. Furthermore, there may be a logical progression within the answer to this question. For example, if faculty and staff are newly engaged in OBPR, it may be wise to recommend initially that the results from OBPR be used to inform decisions about improving the program. Later, it may be appropriate to provide those results to external reviewers (selected by the program's faculty), as IUPUI does. Even later, depending on how programs are funded, it may be wise to use the results of program review to reallocate institutional funds to improve student learning, as Guttman College has done. And somewhere in between, it may be prudent to use the results to inform professional development needs.

Differentiating between using results to inform professional development and using results to inform personnel evaluation is key. Even the latter point can be incredibly confusing. For example, when I was at North Carolina State University, OBPR results were used to provide me with an opportunity to attend a professional development seminar where I learned to better provide feedback on students' writing and redesign my course syllabus so that I could coach students into improving their writing. As an untenured faculty member at that time, I could have been concerned about my career trajectory. However, the way in which the results were used was incredibly helpful in becoming a better faculty member and, as a result, my students' writing improved. Much later in my career, I argued that I shouldn't be given tenure without being able to demonstrate direct evidence of students' learning and development in my courses. However, the institution that awarded me tenure was more interested in seeing how I used the results of outcomes-based assessment to improve my teaching. Because I could show evidence that I used outcomes-based assessment results to improve my teaching, I was awarded tenure. In this one example, you can see different interpretations of using results to inform personnel evaluation.

The emphasis here is that institutional leaders must decide how the results are used. It may not be possible to determine how results will be used from the inception of OBPR, but it will likely be decided as the process develops. Regardless, it is important to communicate to the university community how the results will be used and to be very consistent in implementing that decision while remembering that your organization is trying to generate evidence that you are learning how to do your work in a more skillful way.
A program whose results are used for purposes different from what was communicated to the college/university community will cause the reflective inquiry process to lose momentum; even worse, those participating in the process may lose trust in the process's ability to transform the institution into a learning organization. One of the most effective ways to overcome fear is by building trust. If you are pretending to build a process that is organic and genuine yet plan to use the information for purposes for which it was not intended, you will lose the trust of the very people you are trying to reach.

As referenced in an earlier analogy, it takes at least two people to climb and at least two people to dive. In both situations, you depend on your buddy to be there so you can climb the mountain or go into the water without experiencing harm or death. If your buddy does not do what was promised, you could risk serious injury or death. If you live to tell about how your buddy broke your trust, you certainly won't go climbing or diving with that buddy again, and no doubt others will decide not to trust that buddy either. You also may never be able to engage in the sport again. While this analogy may seem a bit melodramatic, consider the following: When you ask faculty and staff to engage in comprehensive OBPR, you are asking them to become transparent about how they "do" their day-to-day work. This transparency leaves them feeling very vulnerable. If they can trust the decision-makers to use their articulated outcomes, evaluation methods, criteria, results, and recommendations with deliberate care, then they will continue to engage in OBPR. Further, they will testify to their colleagues about the decision-makers' trustworthiness, thus helping to expand the core of risk takers. This is why, if your organization hasn't already connected OBPR to the use of predictive analytics, performance indicators, strategic planning, or other competency-based assessment processes such as individual reflective student learning and development portfolios, you need to do so. It helps the seemingly behind-the-scenes cultivation of learning and development, and the corresponding data collection, become transparent. We cannot emphasize enough how crucial this is to the future of higher education (see preface and chapters 1 and 2).

By now, you have recognized that the day-to-day "doers" of OBPR are not the only ones who need to understand the purpose; all organizational-level leaders need to understand OBPR and know how to responsibly use the results to inform their level of decision-making. Thus, even leaders higher in the hierarchy need professional development, dialogue, and communication. For example, at IUPUI, top-level leadership is educated through involvement in the OBPR process. When top-level leaders invite reviewers to serve, they tell them that every member of the review team is expected to contribute at least one section to the final written report, due within a month of the conclusion of the campus visit.
Once the report is received in the office of the vice chancellor for planning and institutional improvement (PAII), it is disseminated to the campus chancellor, vice chancellors, dean or director of the unit reviewed, and department chair. They then ask the chair or service unit head to work with colleagues to draft a considered response that addresses each recommendation in the report and to send that response to the vice chancellor for PAII within six months. A few conclusions and/or recommendations may be deemed inappropriate or impossible to implement, but each one should be addressed in the written response to the review compiled by the unit. As soon as possible following receipt of the unit response, they ask the responsible dean or vice chancellor to schedule a discussion session that involves the dean of the faculties, if appropriate; the vice chancellor for PAII; the vice chancellor for graduate studies and research (if appropriate); the unit head; and faculty representatives, if desired. During this meeting, everyone concerned finds ways to support the unit in making warranted improvements. During the third or fourth year following the unit review, approximately halfway between scheduled reviews, the unit head is invited to address the Program Review and Assessment Committee to report on progress in the unit since the review and on the quality of the review process itself. This is an exemplary practice. As you can see, this type of professional development is built into the process. It is a very organic way to educate leaders at high levels on how to responsibly use OBPR results.

It is also important to educate those providing infrastructure support to the process. If the infrastructure is not available to educate and support those involved in OBPR by providing faculty workshops, individual consultations with departments, survey assistance, and so on, those involved in the process may receive insufficient or conflicting advice, or they may be unsure how best to support the process if they simply do not understand its purpose. The majority of good practice institutions have committees that provide educational support to faculty and staff engaged in OBPR. New Jersey City University is progressively engaged in the professional development provided to the key faculty and staff involved in directing and supporting this process. This support allows the team to incorporate new ideas as its members move the institution to improve its learning organization process.

Another strategy to help faculty and staff understand what the process is really about is to engage in a faculty- or staff-dependent model. This model is referred to as faculty- or staff-dependent because the entire OBPR process depends on faculty and staff leadership to take the researched theories and adapt them to the unique institutional or program culture. Rather than having a process driven without research about how organizations learn, the faculty- and staff-dependent model implies that the professionals hired to coordinate the OBPR process are the ones who conduct the research on what is working well at the institution as well as the ones who pay attention to what is working well elsewhere.
This strategy addresses two primary barriers. First, for faculty and staff who do not understand OBPR, it provides a professional who can share good practices that save them time and frustration. Second, since faculty have saved some time by not having to do the research, the faculty committee can consider options, select one or several, and apply them to its own institutional culture. Without this last step, assessment quickly becomes an administratively driven process and loses meaning and trust. The beauty of this model is that it offers the opportunity for faculty to educate all and to be educated as well.

Comprehensive faculty and staff development initiatives are important in educating faculty about the assessment process and in supporting them throughout the process. Hampden-Sydney College organized extensive professional development programs when it prepared its faculty to deliver and evaluate writing proficiency. In attempting to overcome assessment barriers, you may often hear that faculty and staff need to become more involved in the intricacies of the process. Often, they are not involved because they do not understand the value of being involved. Answering the questions posed in chapter 5 and using the strategies in this chapter may help build faculty and staff understanding of OBPR, of what it takes to collaboratively showcase a learning organization, and, therefore, may increase faculty and staff involvement in assessment.

Sometimes, as New Jersey City University did, it means that you need to start by assessing institutional learning and development outcomes that the most interested group of faculty care about. From this organic process, in which faculty got to dive into institutionally supported inquiry, their mission, vision, and goals for each program grew. The initial inquiry processes to determine what their faculty were passionate about learning were successful, and more ideas arose for how to assess outcomes, as well as more requests for OBPR-related professional development. In addition, the position of assessment coordinator emerged: individuals who received assigned time to support faculty and staff to collaboratively dig even deeper by using multiple methods and looking at learning in and out of the classroom. The point is that New Jersey City University didn't get here overnight or even within one year; they allowed the passions of their faculty and staff to drive their learning organization discovery process.

In structuring an approach to OBPR in this manner, faculty receive support from their fellow program coordinators (constituting, in essence, a learning organization within a learning organization) and are encouraged to select from various outcomes that are particularly relevant to their respective programs.
The process also calls for phasing in a program assessment plan over a period determined by the particular faculty coordinator and colleagues. Following an assessment of the work of the first cohort, a second cohort of faculty (and programs) will be invited to participate in each following academic year. This is a manageable way to grow the reflective inquiry culture of your learning organization.

Faculty and staff need additional support to encourage them to become more involved in assessment, such as day-to-day reporting and consultation services. At Oregon State University, staff in the office of Student Affairs Research, Evaluation, and Planning review and provide feedback on assessment plans and reports. Additionally, colleagues across the division make connections in Assessment Council and are encouraged to communicate and collaborate with those who are doing similar work across the institution. At James Madison University and Truman State University, a central office provides faculty and staff with an easy-to-access portal to query institutional and survey data. This allows faculty and staff to get quick reads on program progress as measured by institutional performance indicators and strategic planning initiatives so that they can delve further into more meaningful and specific outcomes-based assessment focuses within their OBPR, as well as clarify which of their students are not performing at high levels.

At IUPUI, PAII staff are the principal providers of support for program review. Each unit is on a seven-year review schedule. Then the committee convenes a meeting of the responsible vice chancellor(s), dean, and unit head to discuss the issues to be addressed in the unit OBPR, during the campus visit by the reviewers, and in the reviewers' written report. Two decades ago, PAII staff created a step-by-step program review time line that begins 12 to 18 months before the reviewers' visit and extends 2 months beyond the date of the visit. The time line includes 44 steps, along with how much time before or after the reviewers' visit each step should take place and the person(s) responsible. Each responsible party receives a copy of this document to see what needs to be done and when. More recently, single-page summaries of responsibilities for deans/vice chancellors and unit heads have been developed to go along with the more detailed time line, which combines everyone's responsibilities in a single listing.

Soon after the planning session, PAII staff visit the unit to be reviewed to discuss the types of information to be provided from central sources to augment the unit's self-study. For over two decades, IUPUI Office of Information Management and Institutional Research staff have administered surveys to enrolled students and alumni. Unit data can be compared with schoolwide and campus averages.
Since 2000, the National Survey of Student Engagement (NSSE) has been given to IUPUI students every other year, and now students' responses are available for selected units. Numerous campus performance indicators, such as persistence rates and numbers of graduates in various racial and ethnic groups over time, are available. Since IUPUI requires that each responsibility center balance its own income and expenditures each year, activity-based costing data showing the differential costs of various initiatives have become an increasingly valued resource.

PAII staff work with each unit to the extent requested, providing unique data sets, furnishing examples of data displays and self-studies created by other IUPUI units, and commenting on early drafts of the self-study. Staff also arrange for reviewers' travel and accommodations. Up to two months before the review team visits, PAII staff work with all concerned to create and disseminate a schedule that will bring the reviewers in contact with virtually every office on campus that can provide an informed perspective on the unit under review. PAII staff furnish and/or monitor all logistics during the campus visit to ensure that the schedule proceeds smoothly.

Following the reviewers' visit, PAII staff follow up to ensure that the reviewers' report comes in on time. They disseminate the report to all who need to see it, monitor the unit's response to the review, and arrange the follow-up meeting with all who are responsible for the review to give everyone an opportunity to support the unit in taking appropriate responsive action. After three years, PAII staff schedule a presentation by the unit head to PRAC members concerning the long-term impact of the review and suggestions for improving the review process. (Submitted by Trudy Banta and Karen Black, IUPUI) (Bresciani, 2006, p. 131)
At Texas A&M University, the Center for Teaching Excellence leads faculty development initiatives, while the Student Life Studies office leads professional development for the Student Affairs Division, and the Office of Institutional Assessment leads assessment professional development efforts for the other administrative units. To make sure that all of the offices are teaching similar concepts, they collaborate on workshop designs and simultaneously host integrated workshops. New Jersey City University and Alverno College provide the majority of their faculty and staff development through their faculty and staff leadership committees.

In the previous examples, you saw the intermingling of educational initiatives with educational support. There are built-in feedback loops, or intentionally designed meta-assessment processes, to ensure that the experiences and perspectives of faculty and staff are intermingled with improvements made to the OBPR process. Implementation of OBPR in good practice institutions is iterative and cyclical.
To maintain meaning for faculty and staff and to increase their involvement, the program review process must be refined as often as the faculty and staff need it to be. No good practice institution has the program review process with which it began. In all cases, each institution has refined its process, and no institution will tell you that it has already "arrived." Each expects and aspires to refine the process further.

If you want to determine where your institution is regarding faculty and staff involvement in assessing student learning, it may be valuable to pose the following questions from Regional Accreditation and Student Learning: Improving Institutional Practice, published in 2004 by the Council of Regional Accrediting Commissions:

1. How much emphasis is given to commitment to student learning as part of faculty hiring?
2. In the evaluation of faculty, how much importance is given to their effectiveness in producing learning outcomes?
3. Where is a focus on learning evident in key institutional documents, such as the college catalog, program descriptions, and personnel policies?
4. In what ways does the institution encourage and disseminate innovations in teaching and learning, and discuss their implications for curriculum and pedagogy?
5. How congruent are instructional techniques and delivery systems with students' capabilities and learning needs?
6. In what ways are faculty members incorporating research on student learning in their teaching?
7. To what extent do educational offerings provide an opportunity for students to engage each other and their teachers in a free exchange of ideas and attitudes? [In] active student engagement in learning? [In] an opportunity for collaborative learning?
8. What is the congruence between learning goals and assessment practices? To what degree does the institution engage in "assessment of learning," that is, ongoing assessment with feedback to help students improve?
9. To what degree is the institution's assessment program marked by faculty ownership, and used in ways that lead to educational improvements? [By] feedback to faculty on a regular basis useful for the improvement of instruction and learning? [By] incentives, recognitions, and rewards for faculty efforts in assessment?
10. Does the institution award degrees and certificates based on student achievement of a program's stated learning outcomes? (pp. 24–27)
In asking these questions, you may continue to hear from certain faculty and staff that they are "confused" by what you are asking them to do. Even after attending workshops and hearing explanations of what OBPR is and why it is important, they may claim they still "don't get it." While their confusion may be real, investigate further to see if their confusion is a strategy. Some faculty and administrators have been known to claim they are confused simply to keep you finding varying ways to teach them how to do it, while they passively resist engaging in the self-reflection process.

Another reason that faculty use to avoid engaging in OBPR is that they fear it violates academic freedom. Academic freedom has never implied a lack of quality assurance; therefore, those who claim that OBPR is in conflict with academic freedom do not understand that their colleagues have every right to demand a certain level of excellence in their students' learning. Excellence in student learning can be demanded without telling a faculty member how to achieve or evaluate it. Such a demand to have all faculty members assess the quality of what they are expected to do is not a violation of academic freedom (American Association of University Professors, 1970). In addition, if groups of faculty determine that they want to use a specific assessment instrument or rubric in order to collect comparable learning data for all students, then they can decide to do that. As long as that choice is approved through the faculty governance processes, that choice is also not a violation of academic freedom.

It is interesting to note that many institutions are already using performance indicators and predictive analytics to make decisions about what kinds of interventions students should have, and when, in order to improve their time-to-degree. These kinds of decisions might be void of any learning and development data and therefore also do not violate academic freedom. It is intriguing to ask, however: if no evidence of learning and development is being used, does this kind of decision-making promote the devaluing of a higher education degree? Answering that question requires OBPR data to intersect with and inform the use of predictive analytics and performance indicators.
Lack of Resources to Engage in Meaningful and Manageable OBPR

Some institutions believe they cannot engage in OBPR because they do not have enough resources. To believe that engagement in meaningful OBPR does not require an investment of resources would be a farce. It does, and the first and most precious resource required, apart from the humans themselves, is time.
Time

Implementation and refinement of OBPR require time and energy from faculty and staff. Those engaged in OBPR must invest time for dialogue to create the process; time for professional development to educate faculty and staff about the process and how to engage in it meaningfully; time to reflect purposefully on the results and how they interact with predictive analytics, strategic planning initiatives, and performance indicators; time to document the findings, decisions made, and plans for improvement in the program; time for the review process; and time for follow-up after the evaluation on the newly made decisions and practices. Time is also needed to collaboratively engage in the kind of systems thinking (cross-divisional dialogue) that embodies a learning organization. Considerations such as how investment in this kind of outreach activity yields the kinds of students, with their multitude of identities, whom we want to see successfully graduate or transfer to four-year programs require collaborative conversations to answer intelligently. How does asking student affairs to provide this type of orientation and first-year experience contribute to our institutional learning outcomes of civic engagement and effective collaboration? Asking these kinds of questions and finding the answers requires time to dialogue. There is no replacement for that.

As we all know, additional time cannot be created; it can only be reallocated. Time to engage in OBPR must be reallocated from doing something else (Bresciani et al., 2004). Many good practice institutions have made this reallocation. In some cases, it was simply a matter of calling what faculty and staff were already doing "assessment." In many cases, faculty and staff were already engaged in outcomes-based assessment; they just did not know they were. Thus, the process just needed to be named, and the resulting dialogue, decisions, and planning for improvement needed to be documented. In other cases, faculty and staff had to be asked to reallocate time from their current work to reflection and planning for their work (Bresciani et al., 2004). In other words, some staff did not need to plan 25 programs but could plan 10 and use the time that had been allocated to planning the additional 15 to evaluate the 10. For some faculty, it meant not trying to cover so much content in their courses or finding more efficient ways of doing so (Suskie, 2009) and building means of assessing whether students could apply the content that was learned (Maki, 2004). For others, it meant documenting the already existing graduate competency review process and articulating the learning and research outcomes.
For others who could not reduce programming or services, it meant starting small by planning to evaluate one service component every year and using that time to build evaluation of the service component into day-to-day administration. Thus, the evaluation process becomes a habit (Bresciani et al., 2004), and the administrative unit can move on to the next year and the next service to evaluate, having already established some inherent reflection and assessment practices.

In many good practice institutions, the expectation was made clear that assessment is not an "add-on" and that program review is not a process that is set aside, to be thought of only once every five or seven years. It is expected to be a process of reflection that is built into day-to-day work. In this model, time is not taken away from teaching; it is invested in improving teaching. Time is not taken away from providing services; it is invested in improving services. Time is not taken away from discipline research; the research informs the design and assessment of student learning. And for some disciplines and administrative units, outcomes-based assessment is their discipline scholarship.

Given the necessity of reallocating time, the time needed to document OBPR processes and results is indeed daunting. As we all know, it is one thing to take four hours per term to process what we are learning about our program with our colleagues and plan for the improvements; it is another to then produce a written summary of that discussion. Such documentation is new to many, so it is often difficult to "find" the time to document. However, without documentation of OBPR, no one would know that your institution is engaged in it. There would be no way of knowing how the process has helped you improve student learning and development, teaching, services, and research. Without documentation, these good practice institutions would not have been able to share their lessons learned and examples of excellence for the rest of us to benefit from.
Documentation

Documentation of this process is necessary, so time must be allocated for it. If it does not occur, the memory of how an improvement was made may be lost. To illustrate, consider that if researchers did not write summaries of their findings, we would have nothing to pass on to the next generation to inform additional research and to build on past findings. The same is true for program delivery and teaching curriculum. If an institution does not document its OBPR, it cannot share this information to improve the future. Furthermore, cross-discipline and cross-division conversations are hindered, as there is no report to read to prepare for the conversations and no summary of findings to debate regarding the next course of action the learning organization could take. The institutional memory of excellence becomes lost with the passing of leaders if OBPR findings are not documented and used.
Dialogue and documentation take time; allocation of time for dialogue and documentation can be a significant barrier to engaging in systematic OBPR. The extent to which institutions can provide assistance in this area is important. Some institutions provide online documentation tools; others provide administrative support staff to assist with analyzing data and writing up findings and decisions. Again, these resources may be valuable; however, use them with caution, as many faculty and administrators can quickly turn such aid for documentation into a one-size-fits-all solution, rather than demonstrating purposeful reflection in their documentation practices.

We need to reiterate this point: collaborative dialogue and documentation take time. Something likely needs to come off people's plates in order for this to occur in a meaningful, sustainable way, and determining what that is also requires time. We don't expect faculty to teach a class unless it is in their workload. We don't expect faculty to publish peer-reviewed journal articles unless such assigned time is in their workload. If we really want to create sustainable learning organizations, these expectations must be built into the workload, particularly during the OBPR process, when workload is higher than normal; this work involves, for instance, dialoguing across systems to make meaning of data, determining the priority of decisions or resource reallocations, and documenting that work. Or perhaps faculty are tasked with determining how best to summarize program results from a multitude of areas to discern how well particular initiatives are advancing institutional performance indicators such as enhanced sense of belonging and overall well-being. If you don't allocate the time and make it a priority, it won't happen. We know this to be a solid truth.
Administrator, Faculty, and Staff Development

In addition to time, another valuable resource required for OBPR is investment in administrators, faculty, and staff so that they can learn how to engage meaningfully in this process. Given all the demands of their work and increasing crises on campuses, this becomes even more important. We have spent a great deal of time already discussing this necessary resource. We have also noted that professional development needs to occur at all levels of institutional leadership. Just as you wouldn't expect your students to be able to conduct a lab experiment effectively without first teaching them how, we cannot expect faculty, staff, and administrators to engage in this inquiry process without first teaching them how. To illustrate the value of providing faculty and staff development, we have shared examples from good practice institutions, along with questions to pose to your own institution about how well faculty and staff are prepared to embrace and implement OBPR.
All that said, without institutionalized efforts to embed OBPR education into faculty and staff development programs, an institution should not expect faculty and staff to understand what OBPR is, let alone know how to engage in it in practical and meaningful ways. Implementation of a systematic, institution-wide faculty and staff development program will empower you to address many barriers that arise when delivering this type of program review. Keep in mind the value of archiving recorded professional development opportunities for learning organization members to access on demand as well. Providing incentives for faculty to take the time to engage in these faculty development opportunities, or offering them during times when they already come together, such as their regularly scheduled departmental meetings, is also important.
Other Resources

In previous chapters, we have mentioned other resources of good practice institutions for engaging in OBPR. To reiterate, faculty and staff need access to the following readily available information:

• Institutional research data. In many instances where faculty and staff have to run their own analysis just to determine enrollment trends, first-year academic preparedness data, or performance indicators by the intersecting of multiple identities, faculty and staff are taking valuable time away from reflecting and dialoguing about what the data mean and how the data could be used to improve program outcomes as well as performance indicators. Thus, if faculty and staff readily have data query analysis tools available to apply to the already organized extracts of data from the institutional transactional systems (i.e., non-privacy-protected fields from the student information system, human resource system, student engagement system, and financial systems), they can quickly access the information they need to inform their program planning and evaluation needs.

• Institutional survey data. Similar to research data, providing faculty and staff with institutionally designed, administered, and analyzed surveys or national standardized inventories (see Appendix C) will give them much-needed information for OBPR that will allow them to focus on the meaning of the results and their application to program improvement. Furthermore, the ability of the programs to use the results of the surveys and national inventories, as well as institutional empirical data, will enable faculty and staff to improve the institutional data collection processes and inform meaningful
use of predictive analytics, strategic planning, and performance indicators.

• Electronic reflective student learning portfolios. These are rich data collection tools that tie achievement of specific learning and development outcomes, as well as some performance indicator achievement data, to individual students. If designed well, these portfolios provide a wealth of competency-based data that can be used in OBPR. Guttman College and IUPUI serve as exemplary models of how to design reflective portfolios in ways that inform competency-based assessment as well as OBPR. In addition, if these portfolios are designed well, they can systematically collect the kinds of learning and development outcomes (see Appendix C) that can enrich predictive analytics and meaningfully inform performance indicators, thus heightening the ability to close achievement gaps.

• Web resources and templates. Given that each program has the ability to place many of the components of OBPR on its website (see chapter 3), less time is required to find who has the last version of the mission statement, thus allowing more time in dialogue to determine whether the departmental mission statement is still representative of the program. Other documents historically collected in paper files—such as faculty curricula vitae and staff résumés, course syllabi and program agendas, utilization statistics, service and customer satisfaction policies, and rules and regulations—can also be found on websites or in faculty portfolio systems, student engagement systems, and other transactional systems. While many have now adjusted to this time-saving feature, time to read these materials and reflect on how they connect to OBPR is still required. However, providing access to these resources allows each member of the learning organization to benefit by being able to attend collaborative dialogue meetings fully prepared.

• Online resources. As mentioned earlier, the wealth of online resources from NILOA at www.learningoutcomeassessment.org/CaseStudiesInstitutions.html or Assessment Commons at assessmentcommons.org is incredibly useful and free.

• Survey and rubric development tools. Many technological tools aid in the design, administration, and analysis of surveys and rubrics. While survey methodology and the use of rubrics are only two of the many methodologies available for use within OBPR, providing faculty and staff with tools to help them quickly and easily design means to gather and analyze information allows them to reallocate time to reflect on
what they are finding and dialogue about potential solutions, rather than gathering much-needed information.

• Assessment plan and report consultation, including all aspects of the OBPR process. Faculty and staff development programs provide one-on-one consultation on all aspects of OBPR, including writing outcomes, selecting research methodologies and assessment tools, interpreting results, aligning results to performance indicators, and writing reports. In addition, faculty and staff anxieties about OBPR will be eased, thus allowing them to focus their energies on discerning what engaging in this process means for their program improvement and their ability to demonstrate how they are a learning organization. The same is true for the following one-on-one consultation services. While we have spent a great deal of time talking about offering professional development on establishing the process of OBPR and on understanding each recommended component (see chapter 3), it is important to note that professional development may also be needed in terms of teaching faculty and staff how to compassionately dialogue across disciplines as well as how to collaborate. That may sound funny; however, having spent over 12 years as a full-time faculty member being rewarded for my "individual" work, I can attest that collaboration loses its perceived value. Many organizational members are out of practice.

• Discipline-specific professional development. In addition to internal institutional faculty and staff development programs, faculty and staff are encouraged to engage in their own discipline's professional development opportunities. Doing so allows them to influence and be influenced by standards in their discipline and the way in which they are evaluated. They can bring these new understandings back to their own campus, apply them to their OBPR processes, and excel further in their discipline and professional areas.

• Opportunities to present OBPR findings. Similar to providing professional development opportunities within the faculty or staff discipline, providing the opportunity to present OBPR findings at conferences helps teach others about the value of engaging in such review. Further, such presentations encourage feedback from conference participants about how they can improve their inquiry process and the program itself. Publicly sharing findings and methodology can be mutually beneficial to everyone involved.

• Opportunities to publish OBPR findings. James Madison University publicly showcases its faculty and staff's published works on what they learned from engaging in the OBPR process. This is an exemplary
practice and one that reaffirms the scholarship of inquiry and the learning organization.

• Opportunities to engage in conversations about OBPR discoveries within the institution. Establishing a regular forum for information exchange or poster sessions can set the stage for such conversations. Inviting students and community partners to these events broadens the scope of the conversation as well as ideas to improve both the inquiry process and the program.

• Documentation tools. As we mentioned earlier, documentation is time-consuming. Providing electronic tools that aid in the documentation process may save faculty and staff time, while reinforcing that they can use purposeful and meaningful documentation of outcomes-based assessment for professional accreditation purposes, grant writing, institutional accreditation, and legislative and trustee reports as well as for program and institutional marketing and recruitment of students, faculty, and staff.

• Grants. Good practice institutions have found that providing institutional grants to aid programs in starting their OBPR processes is very valuable. However, the point at which programs are expected to become less reliant on institutional grants and embed the costs of the review process into their own departmental budgets is often a touchy transition that requires thoughtful planning, time, and patience to navigate.

• Release time/assigned time. Similar to providing institutional grants, providing release time or assigned time to faculty and staff to begin planning and/or coordinating their department's OBPR process may be extremely valuable, if not essential. The transition process from release time to no release time may be as challenging as the movement away from providing institutional grants. Consider offering release time during the most time-intensive parts of the process, such as high-volume documentation times and times when collaborative dialogue across systems is needed to bridge ePortfolio data with OBPR data, performance indicators, and predictive analytics.

• Faculty and staff fellows. Similar to providing release time or institutional grants, using faculty and staff fellows has proven beneficial. Salaries are fixed costs for many institutions. So, the intent is to "hire" or "buy out" a certain percentage of time for one faculty member per college or for a division administrator to provide the faculty and staff support for educating everyone within that division or college about OBPR. This practice is often a train-the-trainer
model and can be adopted by an organization as finances permit. For example, you might consider funding a faculty fellow for one college who is preparing to meet professional accreditation standards through OBPR. Then, as the other colleges institute programs preparing for accreditation, you can phase in the financing of their fellows as well. Another useful approach is for fellows to be trained in facilitating cross-disciplinary or division-level (i.e., systems thinking) dialogue so that multiple sources of data can be examined to determine how to improve experiences for specific groups of students or faculty or staff. This is a very powerful approach, the results of which can inform predictive analytics and performance indicator decisions as well as individual student competency-based improvements.

These resources do require investment of institutional or college and division funds. How your institution chooses to approach the investment planning is for your learning organization leadership to determine, with your guidance and recommendations. Not to acknowledge that resources are required, whether it be reallocation of centralized institutional resources or decentralized provision of the resources, is unwise and may hinder advancement of your own institution's excellence—your own institution's learning. Whether your institution is just beginning to implement OBPR or is in the process of refining a long-standing process, reviewing the resources committed to OBPR is a good practice. Each institution should have its own plan for developing its internal resources to support OBPR, and each should proceed within its own means and capabilities. Again, there is no one right way to approach this conversation. There is no one right way to provide the institutional resources that are recommended or that your institution recognizes as required to engage in meaningful and manageable OBPR. However, it is important that your institution engage in purposeful planning of what is needed to model a learning organization (Senge, 2006).

It is interesting to note that reported barriers to assessment don't always align with reported good practices and good practice criteria. We don't know why this is. Perhaps it is the way in which the questions are posed or perhaps it is where attention is placed when questions are asked. For example, confusion around use of results could lead to underlying concerns regarding program improvements that are shown to be needed but remain unfunded. This confusion then contributes to the decline of trust in the OBPR process, and, as a result, engagement dwindles. However, if the prioritization of program improvements aligns with strategic plan initiatives and that resource
reallocation process is transparent, then it is clear that not all evidence-based improvement funding requests will be funded. What we understand from this research is that every single articulated barrier, perceived or real, must be addressed by another human being at the institution. Not to do so is counter to organizational principles of cultivating human flourishing and that of a learning organization. And after all, that is the point of this entire process.
Questions to Consider

This book has focused on good practices for implementing OBPR in order to cultivate a learning organization that supports human flourishing for all. While we have described many barriers and proposed strategies to overcome them, several key concerns remain. The driving question is "How will we meaningfully provide to all stakeholders evidence that every student is reaching high achievement while cultivating the learning organization made up of faculty, staff, and administrative teams that serves those students?" There is research on how OBPR, if put into systematic and authentic practice, can transform higher education and improve the quality of student learning and development, research, and service as well as human flourishing (Banta et al., 2009; Kuh, Jankowski, Ikenberry, & Kinzie, 2014; Kuh et al., 2015; Maki, 2010; Suskie, in press). Yet, we need more evidence. Questions such as the following could be researched further:

• What are the questions that postsecondary educational learning organizations care about? How do they go about ensuring those questions are being answered with integrity?
• In what ways are postsecondary institutions committed to becoming learning organizations? How can those ways be leveraged into systematic reflective inquiry?
• How are postsecondary institutions funded to allow them to be learning centered?
• How do their organizational structures, supportive processes, practices, policies, and reward systems promote learning and development for all students and community members?
• How can leadership at all levels be positively and effectively engaged in reflective collaborative dialogue in order to assure learning organizational accountability?
• What kinds of performance indicators best illustrate achievement of the kinds of questions that postsecondary educational learning organizations care about?
• How will we know that learning organizations have achieved excellence? How is that excellence transparently communicated in a manner that honors learning organization systems?
• How can emerging research on how humans learn and develop (how human organizations learn and develop) best be integrated into existing predictive analytic processes?
• How can all postsecondary educational institutions transform into or refine their existing learning organizations as a result of engaging in well-designed OBPR?

From the outset, we have felt a responsibility to share with others what we have learned from many wonderful scholars and practitioners; some are cited in this book, and many others are emerging. My hope is that readers will come away with the understanding that OBPR must avoid focusing on process for process's sake; rather, it must emphasize the practices that cultivate habits of self-reflection and informed decision-making—all in an effort to transform how we talk about the quality of higher education and, ultimately, how we deliver and identify it. Many higher education scholars worry that our time has run out to demonstrate, through organized and systematic OBPR, that what we do in institutions of higher education is improving, that we are learning centered, and that what we do day-to-day is of quality. We understand what these scholars are saying. However, we are concerned that it may remain difficult to find means to appease external constituency demands for accountability data when the very nature of what is required to be reported could be disrupting institutions' ability to reflect and dialogue in meaningful ways on what does have value in conversations about quality student learning and development, as well as other learning organization development. For example, some mandatory reporting systems require several specific measurements from institutions of higher education that are not informed by evidence of student learning and development—evidence that must be generated through OBPR. Without that evidence, we won't know how to improve the very systems that are in place to cultivate that learning and development. Even if an institution is committed to competency-based assessment through student portfolios, how does the organization know how well what it is systematically providing is cultivating that learning and development without program review? Students might simply be able to learn without those systems; at least, that is what some critics of higher education argue. Might they be right? If so, it would still be valuable to affirm this with evidence so that resources could be reallocated to other types of learning and development or toward other
students who do require certain types of systems to support their growth. We won't know without the evidence that quality OBPR produces.

Other questions need to be raised. A question to ponder about performance indicators and predictive analytic usage, for example, is "How does the number of credit hours passed equate to improved learning?" The use of credit hours as a dosage of content equivalency, among other traditional modes of delivering and managing higher education (including workload unit assignments), is truly outdated. We have little evidence that the number of credit hours offered in a particular subject area contributes to expected quality of learning in that area. We also have no evidence that the workload assigned for preparing, teaching, and assessing a 3-unit undergraduate course for 100 students is the same as preparing, teaching, and assessing a no-credit, first-year learning and development experience for 100 students. These are management functions that are largely devoid of learning organization dialogue and reflection. Thanks to emerging neuroscience research, we understand that it takes at least 8 weeks of 27 minutes of daily focus on one task to see structural and functional changes in certain portions of the typically developing brain. While this finding could change tomorrow, if we understand this today, why are many of our in-class course requirements designed as 3 hours per week for 16 or 10 weeks, or as more hours of class time compressed into 4 or 5 weeks? A learning organization is all about paying attention to emerging research and adjusting what it does in response to that research in order to continue to cultivate high-quality outcomes.

In well-designed OBPR, we have opportunities to inform improvements in learning by inquiring into how students engage with in-class and out-of-class experiences and intersect that with how we have intentionally designed degree attainment pathways, pedagogical delivery, and the professional development we provide for faculty and staff. Well-documented OBPR processes allow learning organizations to answer all kinds of questions they care about. However, first we need to get clear on what we care about. How clear is your organization about communicating what it cares about and articulating its purpose(s)? How well are your reflective inquiry practices aligned to answer those questions?

There may be value in being able to benchmark or simply compare indicators of quality, but only if those indicators of quality are representative of what the learning organization is intending to cultivate or create. Here again, thoughtful reflective dialogue over evidence is required. For example, you could argue that we all value high marks earned by students. If this is true, then you could compare cumulative grade point averages (cGPA) across institutions. As you examine postsecondary organizational behavior, awards are given for the highest cGPA earned, particularly at graduation. However,
we understand that grades are not what an employer values. Yes, the employer may look at cGPA; however, if the graduate has been awarded a degree and is not one of the top scorers, the graduate may still be employed. Why? An employer wants to see evidence that graduates know and can do what the degree said they know and can do, in an ambiguous work environment, as soon as they are hired. So, what should be the comparable indicator of quality here? If cGPA is a relative indicator of embodied evidence of learning and development upon graduation, then cGPA could be used. However, the only way in which a learning organization would know that is through the use of OBPR. If the number of students placed into jobs is a relative indicator of embodied evidence of learning and development upon graduation, then job placement rates could be used. However, again, the only way in which a learning organization would know that is through the use of OBPR.

We could apply this same analogy to four-year graduation rates. When is the last time you heard that an employer chose not to hire someone because the graduate required eight years to demonstrate competency instead of four years? We also all know stories of CEOs of start-up companies who are successful without having completed their college degrees. These are success stories, but how did the college experience contribute to them? We don't know without OBPR.

There is also value in being able to internally compare interventions across various types of students with quasi-experimental design research to discover how to refine what is being systematically offered. Yet, in our efforts to benchmark and compare, we must first be anchored in offering that which aligns to the very nature of how students learn and develop. The response resides in how we frame the question that guides the analysis. Are we comparing for comparison's sake or are we trying to understand whether our expected levels of high achievement are where they should be for all students? Who determines what is compared and how? Without further organizational dialogue around these important questions and the evidence we are internally producing with OBPR to respond to them, comparative benchmark statistics that are easy to gather can quickly become a barrier to meaningful inquiry and improvement, because the focus will become improving the statistic as opposed to doing the granular work of understanding and then improving the actual human experience. All human lives are worthy of our attention and therefore should never be reduced to simple comparable statistics. We must remember this as we refine our processes. Many learning organization members already know this to be true. For example, why do we avoid comparing grade reports of Introduction to Microeconomics courses across the hundreds of higher education institutions in order to identify expected levels of quality learning? Because we
know that many of the factors that contribute to these grades are not the same. For example,

• Student learning outcomes for each course may not be the same.
• The framework for what is taught is not the same.
• The preparation of the students for each class is not the same, nor is the expertise of each of the faculty members.
• Faculty's ability to apply their research to the classroom curriculum is not the same, and their research is not the same.
• The pedagogy is different.
• The assignments, and the means to evaluate them, are different.
• How the scoring of assignments informs the grade given is different.
• The extent to which peer-to-peer learning is occurring is different.
• Faculty are likely spending different amounts of time with each student to ensure high achievement for all.

These instructors are doing everything they can (or not) to ensure that every student is meeting their intended learning and development course outcomes. When you weave together a number of in-class and out-of-class experiences such as this, a student earns a degree and his or her progress toward it is very individualized, even though the sequence in which he or she took the courses may be systematic. If we have engaged in OBPR well, we can assure that every graduate of each degree has achieved all they need to know and do to land a first job or go on to further college education. But readers who have worked with a wide variety of students know that if a student needs to repeat a class or stop out for a term, that does not necessarily mean the learning organization has failed that student. And yet sometimes, it does mean that the learning organization has failed that student. We won't know without OBPR.

If we know that comparisons of certain kinds of indicators are less meaningful on a micro level, where the human-to-human interaction is occurring, why are we still engaged with this comparative dialogue on the macro level? Yes, if faculty choose to do so, they can agree on a common exam or a common project and rubric to assess learning and compare it. The AAC&U LEAP project has been particularly useful in promoting this dialogue. Or faculty, as many good practice institutions already do, can invite community members and alumni to comment on the quality of the final projects of each class or on the culminating capstone project of the degree program to determine whether the expected level of achievement is "good enough." However, how do you improve the design of the course if it is not "good enough" without OBPR data?
We know that grades are not comparable indicators unless faculty members meet and agree on learning outcomes, delivery, means to evaluate, application of their research to the classroom, and so on, yet some states have set up articulation agreements espousing that learning will be the same in this three-credit-hour course at this institution as in similar courses at other institutions. While those agreements most certainly promote affordability of education and ease of transfer from institution to institution, which is important, they don't aid the assurance of quality learning and development evidence production unless that kind of dialogue among learning and development providers takes place first. But what if a group of students needs a different kind of learning experience than what was agreed upon? What then if we are to close the achievement gaps?

Would we rather hold each institution to a high level of scrutiny for its ability to demonstrate it is a learning organization and hold it accountable for how transparent its learning organization evidence is when shared with the public? For example, if we as a nation are genuinely serious about the integrity of our accreditation and quality assurance standards, might we make evident our investment in the kinds of processes that successful learning organizations use? "Isn't this already happening?" you ask. It is for good practice institutions, so how can other institutions be empowered to move forward in their processes? That is for their leadership at all levels to discuss, gather data about, discern, and then make choices based on what they discover. We recognize that making evidence from OBPR too transparent too early in the organization's learning curve can undermine the institutional integrity of objective review practices; however, in time, shouldn't we be able to expect that every institution would include on its website each academic program's student learning outcomes as well as how students, in the aggregate, are achieving each of those outcomes? Wouldn't we also want to be able to demonstrate what each program's administrators and faculty are doing to improve the performance for each outcome? And wouldn't it be interesting to know whether stakeholders are willing to finance such improvements?

Think of it this way: Rather than understanding whether your child or relative will get a good quality education based on an institution's faculty-to-student ratio (i.e., easy-to-identify indicators), wouldn't it be great to actually know what your student would be expected to learn and how they will be expected to develop—to transform—after having fully engaged in any specific degree program (which includes required in-class and out-of-class activities)? Wouldn't it be great to know what the level of expected learning and development within that degree pathway was for students who came before? And if you are wondering why the level of student learning and development is what it is, you could look deeper to see what the plans
for improving the learning and development are at each institution and how they will be financed. You may then realize that a progressive institution striving to achieve more may be better suited to your student than one that has already established itself and vice versa. What might you be willing to invest in the educational experience if you knew this kind of transformation is expected? This kind of data could transcend legacy reputations and allow institutions to compete for international and national students based on evidence of end-result learning and development, regardless of, or in light of, how well the student was prepared upon entry.

If you want to know before sending your child or relative to college whether he or she will get a job after graduation from the program, then we ask you if that is all you want from the higher education experience. If so, perhaps you want to select an institution that provides historic employability data for certain majors and the expected trend analysis for future graduates. And if it does, you might want to ask that organization how what it does assures the employability of your student. How does it know? Does it only admit those students who are already employable? If so, how much would that experience be worth investing in? You decide, but you can only decide if the organization makes the evidence transparent. Transparency of meaningful learning and developmental evidence gives those who invest in and finance higher education reforms more information to inform their decisions and provides the public with accountability to inform donors' and other supporters' decisions. Think of how helpful it would be to have an informed discussion where proposed reductions in funding correlate with a decrease in student performance based on an understanding of student learning and development as defined and identified by the faculty, administrators, and community partners responsible for designing and delivering that learning and development.
Challenges in Addressing Barriers

Here, we provide a few more challenges that emerged, not as common themes from good practice institution research, but as outlying comments we thought important enough to include for each institution to consider as it designs or refines its OBPR process and confronts the barriers it might face in creating its own learning organizational culture.

1. Key leadership at all levels within institutions of higher education must demonstrate that they genuinely care about becoming a learning organization, particularly about cultivating human flourishing for all and
particularly for the students who enroll. If not, sustainability of a learning organization is not possible. This is not to say that organizational leaders must demonstrate a concern for cultivating HAAS in isolation from other concerns regarding research productivity, community engagement, and even athletics. It is possible to prioritize these lines of quality inquiry and their resulting decisions for improvement. However, not all college/university leaders at all levels demonstrate a level of transparency for their priorities and, therefore, resulting accountability for how well the priorities are evidenced is lacking. A question of who holds college/university leaders at all levels accountable for providing evidence of a learning organization remains unanswered.

2. A culture of trust and integrity must be created at institutions through consistent actions of college/university leaders at all levels who demonstrate a commitment to ethical and quality evidence-based decision-making. For OBPR to take hold and become systematic, pervasive, and able to sustain a learning organization, college/university leaders at all levels must use the evidence gathered from OBPR in a manner that promotes organizational learning. Senge (2006) writes that an organization that links adaptive learning with generative learning increases its capacity to learn and, therefore, also increases its capacity to improve and reinvent itself. If colleges/universities demonstrate for themselves that they are the learning organizations that embody the qualities they desire to instill in their students, the institution will use objectively gathered information on how to improve itself and do so in a consistent manner that cultivates human flourishing by acknowledging human contributions and providing for ongoing professional development opportunities. In this manner, over time, the institution's culture can become one of trust and integrity for creating, rather than one where staff are concerned about their lack of accomplishments in their unit, potentially leading to worry about job loss.

3. We must reconstruct our mental models for delivering higher education and evaluating its effectiveness so that we can leverage OBPR results to overcome our administrative/managerial delivery limitations. If higher education organizations claim to be thought leaders, then we must role model what it means to evaluate the quality of the actions that emerge from those thoughts.

4. We must connect formative assessment1 with summative assessment2 and link discussion of improvement with discussions on accountability. We have to connect the somewhat competing worlds of expectations for
organizational learning, or at the very least connect the accountability of competency-based student learning and development with predictive analytics, performance indicators, institutional priorities, and strategic planning. Often, expectations for accountability of learning appear to compete with the ability of faculty and staff to collaboratively focus on improving student learning and development, because the requested indicators do not help inform specific decisions for improving student learning and development. How do we provide more meaningful evidence of learning and development so as to reframe dialogue around the use of easy-to-digest numbers, which may not provide the specific information needed for improving learning and development design?

5. We must examine how the organizational structure of postsecondary education, and the way in which it is funded, promotes or inhibits the emergence or sustainability of a learning organization.

6. We must emphasize the connection of curriculum design (in and out of the classroom), pedagogical approaches, and professional development to delivery and evaluation of learning and development. Some organizations still value individual expertise in a way that loses the connection of that expertise, regardless of whether it is embodied in research, teaching, or service, to what the learning organization intends to create and to the learning that is taking place in and out of the classroom. We must link individual experts and collections of experts to the overall plan for learning within programs, departments, and colleges/divisions. In addition, we must continue to develop individual experts so they know how best to apply their expertise for the optimum learning experience for all students, regardless of where that learning is expected to occur.

7. We need to rethink our internal and external reward systems. Research is emerging that informs how learning organizations can cultivate human flourishing for all. Some organizational leaders will always favor high expectations for research and grant productivity over high expectations for learning and development for all students. Having served as a full-time faculty member for 12 years, I can attest that there were times when I ignored my research to tend to students' learning and development needs and times when I ignored my students to tend to my research. Internally, I find both rewarding and invigorating; externally, however, there has never been a balance. It is a dance of where I place my attention. Perhaps I am lousy at both, which is why I notice when my attention is divided, or perhaps I am just now beginning to articulate the very real
challenge that may be undercutting an organization's ability to sustain OBPR. With increasing demands for research and grant productivity as well as increasing demands to tend to the diverse ways our students best learn and develop while also committing to human flourishing for all, we can't continue to pretend one person can really do all of this well all of the time, yet that is what we often expect. More difficult in this conversation is the challenge that institutions have historically celebrated and promoted faculty as individual commodities. We have historically promoted faculty for their individual contributions as researchers, for their individual service records, or for their individual teaching performances. Some institutions allocate assigned time for or reward the time-intensive group work required to team teach, develop multidisciplinary approaches to learning, integrate in- and out-of-classroom experiences into one seamless curriculum, use OBPR evidence to redesign programs, or embed inquiry practices or other effective pedagogies into day-to-day work. Yet, many institutions still do not have readily available avenues to assign time to or celebrate such collaborative work, and we have established that a learning organization must demonstrate collaboration across systems. Rather, we depend on the individual faculty or staff person's intrinsic concern with improving student learning and development until it is no longer sustainable because he or she is worn out. While many faculty and staff are concerned about an individual student's well-being and success as well as that of their colleagues, they often invest in improvements at risk to their own professional career or personal well-being. That is not in alignment with how a learning organization functions.

8. We must critique our predictive analytic models carefully. Some predictive analytic models do not recognize the emerging research that has been published about how students learn and develop and the kinds of work it takes to ensure that all students reach their goals, albeit not at the same time or in the same way. There are significant "cost" differentials for assuring high-quality student learning and development for all students, and the same is true for improving student learning or embedding research expertise into the design of in- and out-of-classroom curriculum. Without OBPR evidence, we risk reinforcing a design of an educational system that will ensure that only those students who learn and develop in the metric-prescribed way succeed. This could harm human lives.

9. Many good institutions hire faculty and administrators with the explicit expectation that they will engage in genuine OBPR. I have spoken to
many faculty who believe they were not hired to teach, let alone evaluate the performance of their students in the classroom, lab, or cocurricular setting. Thus, when they find out after the fact that they are expected to gather evidence of student learning and other program performance, they often react with great resistance. If faculty and administrative position descriptions included the expectation of such evaluation skills and behavior, faculty and administrators could possibly make many improvements immediately, and improvements in the programs that prepare those faculty and administrators for service in higher education could be made as well. Criteria for performance evaluation of faculty and administrators hired with the expectation that they will engage in OBPR and how they use the results to improve their programs could be included. Thus, rather than using OBPR results for personnel evaluation directly, the focus would be on the quality of the way in which OBPR results are used to inform program improvements, thus also revealing opportunities for further professional development both offered and taken up.

10. While it is challenging enough to prepare faculty, staff, and administrators to engage fully in a learning organization culture, it is also challenging to prepare prospective students to engage in the learning organization experience. While a series of input assessments and needs analyses may help institutions better understand the ways in which students are prepared to enter their institutions and demonstrate what is expected of them once they graduate, using this information in meaningful ways to design just the right kind of optimal learning and development experience is time-consuming. It requires collaboration and ongoing formative assessment to ensure that students are not being forced to develop and learn in the very specific way in which historical data predicted they should learn and develop. The learning organization is about transformation; the students are needed partners in an institution's emergence as a learning organization.
While there are many other concerns and questions to consider, implementation of meaningful and manageable OBPR can transform higher education. You can identify and improve quality in every aspect, thus demonstrating you are a learning organization. The first question to ask is “Do your leaders at all levels want to implement OBPR in an effective, efficient, and enduring manner that respects your institutional culture, provides evidence of a learning organization, and cultivates human flourishing for all?”
Key Learning Points

1. Barriers to OBPR reported by faculty and staff can be summarized into the following categories:
   a. There is a lack of understanding of the value of OBPR among those implementing it as well as among the top-level leaders who are being asked to support it and use it to make resource reallocation decisions or decisions that connect OBPR to accountability conversations.
   b. There is a lack of resources to engage in meaningful and manageable assessment, which includes time to learn about and engage in reflective dialogue about what assessment results mean and how they can be acted upon. In addition, there is a lack of institutional infrastructure to support reflective dialogue and the use of outcomes-based evidence to improve learning and development and have the resulting information inform decisions related to competency-based assessment, predictive analytics, strategic planning, and performance indicators.
2. Lack of time to reflect upon and collaboratively dialogue about prioritized improvements is a significant barrier and must be addressed either by reallocation of workload, allocation of assigned time/release time, or clearly articulated position descriptions and corresponding personnel evaluation processes.
3. Ensuring meaningful and transparent use of OBPR results is an effective way to reduce many barriers, particularly as use of results relates to informing strategic planning, predictive analytics, improvements in closing achievement gaps/HAAS, budgeting, resource reallocation, and performance indicators.
4. In clarifying use of results, it is imperative to clarify whether use of results will inform needed professional development opportunities for faculty and staff, or whether the consideration of how results are used to improve programs will be considered in review, promotion, and tenure of faculty and staff. If the quality of OBPR results is used to inform hiring, probation, reassignment, or release of faculty and staff, you will want to consider how that will influence the integrity of the entire process.
5. There are multiple barriers and multiple ways to address barriers. What we understand from this research is that every single articulated barrier, perceived or real, must be addressed by another human being
at the institution. Not to do so is counter to organizational principles of cultivating human flourishing and that of a learning organization. And after all, that is the point of this entire process.
More Questions to Consider

There are several important questions posed in this chapter. In addition, consider the following:

1. What are our organization's barriers to implementing meaningful OBPR?
2. How might we collaboratively and creatively address those barriers?
3. Which of these good practices for addressing barriers might we want to adopt and/or adapt?
4. How do we address the concerns related to "lack of time"?
5. How will we ensure that those who articulate their barriers know they are heard, while also demonstrating that we won't tolerate excuses or barriers that may be preventing meaningful engagement with OBPR?
6. How do we ensure our own human flourishing as we sustain our commitment to demonstrating we are a learning organization through the use of OBPR?
Notes

1. Formative assessment is a way of monitoring the process of learning and development as it is occurring in order to adjust the design and delivery of the learning and development to heighten end results.
2. Summative assessment is often completed for accountability purposes. It occurs at the end of the learning and development that was provided and is used to address whether levels of expected achievement occurred. It might also be used in a comparative manner.
APPENDIX A
DOCUMENTS USED TO DETERMINE GOOD PRACTICE CRITERIA FOR OUTCOMES-BASED ASSESSMENT PROGRAM REVIEW
Adelman, C. (2013). Searching for our lost associate's degree: Project Win-Win at the finish line. Washington, DC: Institute for Higher Education Policy. Retrieved from http://www.ihep.org/sites/default/files/uploads/docs/pubs/pww_at_the_finish_line-long_final_october_2013.pdf
American Association of Higher Education. (2012). Principles of good practice for assessing student learning. Retrieved from www2.indstate.edu/assessment/docs/ninePrinciples.pdf
Association of Accrediting Agencies of Canada. (2010). Guidelines for good practice. Retrieved from https://aaac.ca/pdfs-english/Guidelines-for-Good-Practice-eng.pdf
Association of American Colleges and Universities. (2008). Our students' best work: A framework for accountability worthy of our mission. Retrieved from https://www.aacu.org/sites/default/files/files/publications/StudentsBestreport.pdf
Astin, A. W. (1991). Assessment for excellence: The philosophy and practice of assessment and evaluation in higher education. New York, NY: Macmillan.
Baker, G. R., Jankowski, N. A., Provezis, S., & Kinzie, J. (2012). Using assessment results: Promising practices of institutions that do it well. Champaign, IL: National Institute for Learning Outcomes Assessment.
Ballard, P. J. (2013). Measuring performance excellence: Key performance indicators for institutions accepted into the Academic Quality Improvement Program (AQIP). Retrieved from http://scholarworks.wmich.edu/dissertations/196
Banta, T. W., & Associates. (2002). Building a scholarship of assessment. San Francisco, CA: Jossey-Bass.
Banta, T. W., Jones, E. A., & Black, K. E. (2009). Designing effective assessment: Principles and profiles of good practice. San Francisco, CA: Jossey-Bass.
Banta, T. W., & Palomba, C. A. (2014). Assessment essentials: Planning, implementing, and improving assessment in higher education (2nd ed.). San Francisco, CA: John Wiley & Sons.
Barnveld, A., Arnold, K. E., & Campbell, J. P. (2012). Analytics in higher education: Establishing a common language. Educause Learning Initiative, 1, 1–11.
Beckwith, S. (2016). Data analytics rising in higher education: A look at four campus "data czars" and how they're promoting predictive analytics. University Business. Retrieved from https://www.universitybusiness.com/article/data-analytics-rising-higher-education
Braskamp, L. A., & Engberg, M. E. (2014). Guidelines for judging the effectiveness of assessing student learning. Chicago, IL: Loyola University Chicago.
Bresciani, M. J. (2006). Outcomes-based academic and co-curricular program review: A compilation of institutional good practices. Sterling, VA: Stylus.
Bresciani, M. J. (Ed.). (2007). Good practice case studies for assessing student learning in general education. Bolton, MA: Anker Publishing.
Bresciani, M. J. (2011a). Challenges in the implementation of outcomes-based program review in a California community college district. Community College Journal of Research and Practice, 35(11), 855–876.
Bresciani, M. J. (2011b). Identifying barriers in implementing outcomes-based assessment program review: A grounded theory analysis [Electronic version]. Research & Practice in Assessment, 5(1), 5–26. Retrieved from https://files.eric.ed.gov/fulltext/EJ1062653.pdf
Bresciani, M. J. (2012). Recommendations for implementing an effective, efficient, and enduring outcomes-based assessment program review. Community College Journal of Research and Practice, 36(6), 411–421.
Bresciani, M. J., Gardner, M. M., & Hickmott, J. (Eds.). (2009). Case studies in assessing student success. New Directions for Student Services, 127. Boston, MA: Jossey-Bass.
Bresciani, M. J., Gardner, M. M., & Hickmott, J. (2010). Demonstrating student success in student affairs. Sterling, VA: Stylus.
Bresciani, M. J., Gillig, B., Tucker, M., Weiner, L., & McCully, L. (2014). Exploring the use of evidence in resource allocation: Towards a framework for practice. The Journal of Student Affairs, 23, 71–80.
Burke, M., Parnell, A., Wesaw, A., & Kruger, K. (2017). Predictive analysis of student data: A focus on engagement and behavior. Washington, DC: NASPA.
California Department of Education. (2017). Annual performance report measures: Short summaries of special education program and student outcome data for California LEAs. Retrieved from https://www.cde.ca.gov/sp/se/ds/leadatarpts.asp
Council for Higher Education Accreditation. (2016). Good practices and shared responsibility in the conduct of specialized and professional accreditation review. Retrieved from http://www.chea.org/userfiles/CHEA%20Advisory%20Statements/2000.01_good_practice.pdf
Council of Regional Accrediting Commissions. (2003). Regional accreditation and student learning: Principles for good practices. Retrieved from www.msche.org/publications/Regnlsl050208135331.pdf
Council of Regional Accrediting Commissions. (2011). Distance education programs: Interregional guidelines for the evaluation of distance education (online learning). Philadelphia, PA: Middle States Commission on Higher Education.
Eduventures. (2013). Predictive analytics in higher education: Data-driven decision-making for the student life cycle. Boston, MA: Eduventures. Retrieved from www.eduventures.com/wp.../Eduventures_Predictive_Analytics_White_Paper1.pdf
Ekowo, M., & Palmer, I. (2017). Predictive analytics in higher education: Five guiding practices for ethical use. Retrieved from www.newamerica.org/education-policy/policy-papers/predictive-analytics-higher-education/#introduction
Engelmann, D. (2005). Teaching students to practice philosophy. In T. Riordan & J. Roth (Eds.), Disciplines as frameworks for student learning: Teaching the practice of the disciplines (pp. 39–58). Sterling, VA: Stylus.
Engle, J., & Bill & Melinda Gates Foundation. (2016). Answering the call: Institutions and the states lead the way toward better measures of postsecondary performance. Retrieved from https://postsecondary.gatesfoundation.org/wp-content/uploads/.../aAnsweringtheCall.pdf
European Consortium for Accreditation. (n.d.). Code of good practice for the members of the European Consortium for Accreditation in Higher Education. Retrieved from http://ecahe.eu/w/images/4/4c/Eca-code-of-good-practice.pdf
Ewell, P. T. (1997a). From the states: Putting it all on the line—South Carolina's performance funding initiative. Assessment Update, 9(1), 9, 11.
Ewell, P. T. (1997b). Identifying indicators of curricular quality. In G. J. Gaff, L. J. Ratcliff, & Associates (Eds.), Handbook of the undergraduate curriculum: A comprehensive guide to purposes, structures, practices, and change. San Francisco, CA: Jossey-Bass.
Ewell, P. T. (2008). U.S. accreditation and the future of quality assurance. Ann Arbor, MI: Council for Higher Education Accreditation.
Ewell, P. T., & Jones, D. P. (1996). Indicators of "good practice" in undergraduate education: A handbook for development and implementation. Boulder, CO: National Center for Higher Education Management Systems.
Food and Agriculture Organization of the United Nations. (2014). Good practices template. Retrieved from https://www.google.com/url?q=http://www.fao.org/fileadmin/user_upload/goodpractices/docs/GoodPractices_TemplateEN-March2014.docx&sa=U&ved=0ahUKEwjApLqtiLvYAhXlz1QKHZC_BPsQFggIMAI&client=internal-uds-cse&cx=018170620143701104933:qq82jsfba7w&usg=AOvVaw2IKTZAfbniM3Cr5ihTTJJN
Griego, E. (2005). Promoting student success: What accreditation teams can do (Occasional Paper No. 15). Bloomington, IN: Indiana University Center for Postsecondary Research.
Kinzie, J., Jankowski, N., & Provezis, S. (2014). Do good assessment practices measure up to the principles of assessment? Assessment Update, 26(3), 1–15.
Kuh, G. D., Ikenberry, S. O., Jankowski, N., Cain, T. R., Ewell, P. T., Hutchings, P., & Kinzie, J. (2015). Using evidence of student learning to improve higher education. San Francisco, CA: Jossey-Bass.
Kuh, G. D., Jankowski, N., Ikenberry, S. O., & Kinzie, J. (2014). Knowing what students know and can do: The current state of student learning outcomes assessment in U.S. colleges and universities. Urbana, IL: University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment (NILOA).
Leveille, D. E. (2005). An emerging view on accountability in American higher education (CHSE 8.05). Research & Occasional Paper Series. Retrieved from https://cshe.berkeley.edu/publications/emerging-view-accountability-american-higher-education
Lopez, C. (1997). Opportunities for improvement: Advice from consultant-evaluators on assessing student learning. Retrieved from http://www.ncahigherlearningcommission.org/resources/assessment/index.html
Lopez, C. (2002). Assessment of student learning: Challenges and strategies. The Journal of Academic Librarianship, 28(6), 356–367.
Maki, P. (2004). Assessing for student learning: Building a sustainable commitment across the institution. Sterling, VA: Stylus.
Maki, P. L. (2010). Assessing for learning: Building a sustainable commitment across the institution (2nd ed.). Sterling, VA: Stylus.
McClarty, K. L., & Gaertner, M. N. (2015). Measuring mastery: Best practices for assessment in competency-based education. Washington, DC: American Enterprise Institute.
McCormick, A. C., & Kinzie, J. (2014). Refocusing the quality discourse: The United States National Survey of Student Engagement. In H. Coates & A. C. McCormick (Eds.), Engaging University Students (pp. 13–29). Singapore: Springer.
Mentkowski, M., & Associates. (2000). Learning that lasts: Integrating learning, development, and performance in college and beyond. San Francisco, CA: Jossey-Bass.
Moore-Gardner, M., Kline, K. A., & Bresciani, M. J. (Eds.). (2013). Assessing student learning in the two-year and community colleges: Successful strategies and tools developed by practitioners in student and academic affairs. Sterling, VA: Stylus.
National Association of Independent Schools. (n.d.). Criteria for effective independent school accreditation. Retrieved from https://www.nais.org/about/commission-on-accreditation/criteria-for-effective-independent-school-accredit/
National Association of Schools of Theatre. (1996). Code of good practice for the accreditation of NAST. Retrieved from https://nast.arts-accredit.org/wp-content/uploads/.../NAST-Code-of-Good-Practice.pdf
National Commission on Accountability in Higher Education. (2005). Accountability for better results. Boulder, CO: State Higher Education Officers Association.
National Council of Nonprofits. (n.d.). Principles and practices: Where can you find "best practices" for nonprofits? Retrieved from https://www.councilofnonprofits.org/tools-resources/principles-and-practices-where-can-you-find-best-practices-nonprofits
National Institute for Learning Outcomes Assessment. (2011). Transparency framework. Urbana, IL: University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment (NILOA). Retrieved from http://www.learningoutcomesassessment.org/TransparencyFramework.htm
National Institute for Learning Outcomes Assessment. (2012). Papers, articles, & presentations. Retrieved from www.learningoutcomeassessment.org/papers.htm
National Institute for Learning Outcomes Assessment. (2013). Principles for effective assessment of student achievement. Retrieved from www.learningoutcomeassessment.org/.../EndorsedAssessmentPrinciples_SUP.pdf
Office of Career, Technical, and Adult Education. (2008). Performance measures and accountability. Retrieved from https://www2.ed.gov/about/offices/list/ovae/pi/cte/perfmeas.html
Quality Assurance Commons. (2017). Draft criteria for EEQ Certification. Retrieved from https://drive.google.com/file/d/19fayxEhn2CuIOMimvH6bm0GnzBgMQkRs/view
Quality Assurance Commons. (2017). Piloting the “EEQ Certification.” Retrieved from https://theqacommons.files.wordpress.com/2017/04/theqacommons-pilotinformation-draft.pdf
Rogers, G., & Mentkowski, M. (2004). Abilities that distinguish the effectiveness of five-year alumna performance across work, family, and civic roles: A higher education validation. Higher Education Research & Development, 23(3), 347–374.
Scott, G. (2017). FLIPCurric. National Office for Learning and Teaching, Australian Government, Canberra. Retrieved from http://flipcurric.edu.au/
Senior College and University Commission. (2009). WASC resource guide for “good practices” in academic program review. Retrieved from www.csun.edu/sites/default/files/wasc_program_review_resource_guide_1.pdf
Senior College and University Commission. (2013). Program review resource guide: 2013 handbook of accreditation update. Retrieved from https://www.wscuc.org/content/program-review-resource-guide
Shulman, L. S. (2007). Counting and recounting: Assessment and the quest for accountability. The Magazine of Higher Learning, 39(1), 20–25.
Suskie, L. (2018). Assessing student learning: A common sense guide (3rd ed.). San Francisco, CA: Jossey-Bass.
U.S. Department of Education. (2007). Performance measurement initiative: Archived information. Retrieved from https://www2.ed.gov/about/offices/list/ovae/pi/hs/pmiindex.html
Zinshteyn, M. (2016). The colleges are watching: With access to predictive analytics and more data than ever before, how can universities avoid invading students’ privacy while promoting academic success? The Atlantic. Retrieved from https://www.theatlantic.com/education/archive/2016/11/the-colleges-are-watching/506129/
APPENDIX B
LIST OF GOOD PRACTICE INSTITUTIONS THAT WERE NOMINATED BY SCHOLARS AND PRACTITIONERS
1. Alverno College—http://lampout1.alverno.edu/saal/
2. Antioch University—https://www.antioch.edu/resources/faculty-staff/academic-assessment/
3. Azusa Pacific University—https://www.apu.edu/oira/
4. Baruch College—https://www.baruch.cuny.edu/assessment/resources.htm
5. Brandman University—https://www.brandman.edu/academic-programs/assessment
6. Brigham Young University—http://planningandassessment.byu.edu
7. California State University, Monterey Bay—https://csumb.edu/iar
8. Chapman University—https://www.chapman.edu/academics/learningat-chapman/program-review/index.aspx
9. Colorado State University—https://provost.colostate.edu/assessment-planning-effectiveness/
10. Community College of Baltimore County—http://www.ccbcmd.edu/About-CCBC/Accreditation/Learning-Outcomes-Assessment.aspx
11. Cornell University—http://irp.dpb.cornell.edu/academic-programreview/academic-program-review-process
12. Grinnell College—https://www.grinnell.edu/about/offices-services/student-affairs/evaluation-and-assessment
13. Guttman College—http://guttman.cuny.edu/about/mission-visiongoals-outcomes/learning-outcomes/
14. Hagerstown Community College—http://www.hagerstowncc.edu/academics/outcomes-assessment
15. Hampden Sydney College—http://www.hsc.edu/institutionaleffectiveness
16. Illinois State University—https://assessment.illinoisstate.edu
17. Indiana University–Purdue University Indianapolis—http://planning.iupui.edu/assessment/
18. Isothermal Community College—https://www.isothermal.edu/about/assessment/
19. James Madison University—http://www.jmu.edu/assessment/
20. John Carroll University—http://sites.jcu.edu/institutionaleffectiveness/
21. Kennesaw State University—http://www.pacific.edu/About-Pacific/AdministrationOffices/Office-of-the-Provost/Educational-Effectiveness/Assessment-of-Student-Learning/Assessment-Resources.html
22. Macalester College—https://www.macalester.edu/assessment/
23. Miami Dade College—https://www.mdc.edu/main/ie/default.aspx
24. Miami University—https://miamioh.edu/academic-affairs/teaching/assessment/index.html
25. National University—https://www.nu.edu/student-achievement/outcome.html
26. New Jersey City University—http://www.njcu.edu/ie
27. North Carolina State University—https://assessment.dasa.ncsu.edu
28. Northampton Community College—https://www.northampton.edu
29. Oregon State University Division of Student Affairs—http://oregonstate.edu/studentaffairs/assessment
30. Sacramento State University—http://www.csus.edu/programassessment/
31. Sinclair Community College—http://www.sinclair.edu/about/offices/provost/assessment-of-student-learning/
32. Tompkins Cortland Community College—https://www.tc3.edu/campus_info/institutional_planning.asp
33. Tri-County Community College—http://www.tricountycc.edu/faculty-staff/institutional-effectiveness/
34. Truman State University—http://assessment.truman.edu
35. United States Naval Academy—https://www.usna.edu/Academics/Academic-Dean/Assessment/index.php
36. University of California, Merced—https://studentaffairs.ucmerced.edu/stafffaculty/assessment-research-and-evaluation
37. University of Central Oklahoma—http://sites.uco.edu/academic-affairs/assessment/index.asp
38. University of Hawai‘i-Mānoa—https://manoa.hawaii.edu/assessment/
39. University of Maine, Farmington—http://www.umf.maine.edu
40. University of the Pacific—http://www.pacific.edu/About-Pacific/AdministrationOffices/Office-of-the-Provost/Educational-Effectiveness.html
41. University of Saint Thomas—https://www.stthomas.edu/accreditation-assessment/assessment-best-practices/curriculummapping/
42. University of San Diego—https://www.sandiego.edu/cas/assessment/academic-program-review.php
43. University of Wisconsin at Whitewater—https://www.uww.edu/assessment/student-learning-outcomes
44. Valencia College—http://valenciacollege.edu/academic-affairs/institutional-effectiveness-planning/institutional-assessment/loa/forms_reports.cfm
45. Waubonsee Community College—https://www.waubonsee.edu/learning/success/assessment/
Other Good Practice References
1. California State University, Dominguez Hills Male Success Alliance—https://www.csudh.edu/msa/
2. San Diego State University—http://go.sdsu.edu/strategicplan/studentsuccess-updates.aspx
3. University of West Florida—https://uwf.edu/offices/cutla/services-for/assessment/
4. Vincennes University—https://my.vinu.edu/web/institutional-effectiveness
APPENDIX C
NATIONAL INSTITUTE FOR LEARNING OUTCOMES ASSESSMENT HIGH PERFORMANCE FOR ALL STUDENTS
Comparable Learning and Development Outcome Measures and Performance Indicators
This is an example of how learning outcomes can be used as comparable performance indicators and/or used in predictive analytics when implemented consistently, ethically, and with integrity. There are many other measures that could be used. This table simply serves to provide some examples for your organization to discuss, consider, and then responsibly choose and implement. Note that these learning outcomes/performance indicators become more meaningful when the data are aggregated by groupings of student self-identifiers (e.g., race, ethnicity, gender, sexual orientation, religious affiliation, disability, veteran, first-generation, foster youth, commuter, Pell-eligible, number of hours/week working off campus, etc.). It is also useful to aggregate data by the intersections of these identifiers (e.g., comparing female Muslim first-generation commuters with African American and Black male commuters). Knowing which intersections to aggregate the data by is a topic for another conversation and may require a more sophisticated random forest tree analysis on your campus in order to determine which students need your attention most.
Adapted with permission from a National Institute for Learning Outcomes Assessment (NILOA) Occasional Paper (Kuh, Gambino, Bresciani Ludvik, & O’Donnell, 2018).
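For readers who want a concrete picture of what that kind of disaggregation and random forest screening might look like, the following is a minimal sketch, not drawn from the NILOA paper, assuming a campus analyst works in Python with pandas and scikit-learn. The column names (persisted, race_ethnicity, gender, first_generation, pell_eligible) are hypothetical placeholders for fields extracted from a student transactional system, and the handful of illustrative records stand in for a real student-level extract.

```python
# A minimal sketch, assuming a pandas DataFrame extracted from a student
# transactional system. All column names and records below are hypothetical
# placeholders, not actual institutional data.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

students = pd.DataFrame({
    "race_ethnicity":   ["Black", "Latina/o", "White", "Black", "Asian", "White"],
    "gender":           ["Male", "Female", "Female", "Female", "Male", "Male"],
    "first_generation": [True, True, False, False, True, False],
    "pell_eligible":    [True, False, False, True, True, False],
    "persisted":        [0, 1, 1, 1, 0, 1],   # term-to-term persistence flag
})

# Disaggregate the indicator by intersections of student self-identifiers.
gaps = (students
        .groupby(["race_ethnicity", "gender", "first_generation"])["persisted"]
        .agg(["mean", "count"])
        .rename(columns={"mean": "persistence_rate", "count": "n_students"}))
print(gaps)

# A simple random forest to suggest which characteristics (and, implicitly,
# their interactions) are most associated with non-persistence. This is a
# screening step to focus conversation, not a substitute for judgment.
X = pd.get_dummies(students.drop(columns="persisted"))
y = students["persisted"]
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
importances = pd.Series(model.feature_importances_, index=X.columns)
print(importances.sort_values(ascending=False))
```

In practice the disaggregated rates, not the model, drive the conversation; the feature importances simply suggest which intersections to examine first, consistent with the cautions about ethical use of predictive analytics cited earlier (Ekowo & Palmer, 2017).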
Learning Outcome/Performance Indicator: Term-to-Term Persistence Rates
Data Collection Instrument: IPEDS definition; data extracted from student transactional system
Purpose: To determine whether there are gaps among groups of students or types of institutional experiences among students who are persisting from term-to-term to be able to refine OBPR implementation and organizational decision-making

Learning Outcome/Performance Indicator: Graduation Rates
Data Collection Instrument: IPEDS definition; data extracted from student transactional system
Purpose: To determine whether there are gaps among groups of students or types of institutional experiences among students who are graduating to be able to refine OBPR implementation and organizational decision-making

Learning Outcome/Performance Indicator: Cumulative Grade Point Average (GPA)
Data Collection Instrument: IPEDS definition; data extracted from student transactional system
Purpose: To determine whether there are gaps among groups of students or types of institutional experiences among students who are earning below high achievement expectations to be able to refine OBPR implementation and organizational decision-making

Learning Outcome/Performance Indicator: Learning Outcome Rubrics Scores
Data Collection Instrument: American Association of Colleges & Universities (AAC&U) Liberal Education and America’s Promise (LEAP) rubric scores; data extracted from student transactional system
Purpose: To determine whether there are gaps among groups of students or types of institutional experiences among students who are earning below high achievement expectations of specific learning outcomes to refine OBPR implementation and organizational decision-making
Learning Outcome/Performance Indicator: Time to Degree
Data Collection Instrument: IPEDS definition; data extracted from student transactional system
Purpose: To determine whether there are gaps among groups of students or types of institutional experiences among students who are not achieving expected time-to-degree expectations for specific degrees in order to refine OBPR implementation and organizational decision-making

Learning Outcome/Performance Indicator: Pass Rates of Gate-Keeping Courses
Data Collection Instrument: Campus definition of gate-keeping courses; data extracted from student transactional system
Purpose: To determine whether there are gaps among groups of students or types of institutional experiences among students who are not achieving expected time-to-degree expectations for specific degrees in order to refine OBPR implementation and organizational decision-making

Learning Outcome/Performance Indicator: Job Placement Rates
Data Collection Instrument: Data collected at graduation or in a six-month alumni follow-up survey
Purpose: To determine whether there are gaps among groups of students or types of institutional experiences among students who are not securing meaningful or gainful employment for specific degree areas in order to refine OBPR implementation and organizational decision-making

Learning Outcome/Performance Indicator: Progress Toward Degree
Data Collection Instrument: Campus definition of data extracted from student transactional system
Purpose: To determine whether there are gaps among groups of students or types of institutional experiences among students who are not achieving expected progress-toward-degree expectations for specific degrees in order to refine OBPR implementation and organizational decision-making
Learning Outcome/Performance Indicator: Discipline Competency Exam Scores
Data Collection Instrument: Campus definition; data extracted from student transactional system
Purpose: To determine whether there are gaps among groups of students or types of institutional experiences among students who are earning below high achievement expectations of specific discipline competencies in order to refine OBPR implementation and organizational decision-making

Learning Outcome/Performance Indicator: Licensure and Certification Exam Pass Rates
Data Collection Instrument: Data extracted from student transactional system
Purpose: To determine whether there are gaps among groups of students or types of institutional experiences among students who are earning below “high achievement” expectations of specific discipline competencies in order to refine OBPR implementation and organizational decision-making

Learning Outcome/Performance Indicator: Number of Major Changes and Hours Accumulated When Change Was Made
Data Collection Instrument: Campus definition of student activities; data extracted from student transactional system
Purpose: To determine whether there are gaps among groups of students or types of institutional experiences among students who are not achieving expected progress-toward-degree expectations for specific degrees in order to refine OBPR implementation and organizational decision-making

Learning Outcome/Performance Indicator: Participation Rates in Campus Approved Student Activities and Organizations
Data Collection Instrument: Campus definition of student activities; data extracted from student transactional system
Purpose: To determine whether there are gaps among groups of students or types of institutional experiences among students who are engaging in college/university community life in order to refine OBPR implementation and organizational decision-making
Learning Outcome/Performance Indicator: Participation Rates in High-Impact Practices (HIPs)
Data Collection Instrument: AAC&U definition of HIPs; data extracted from student transactional system
Purpose: To determine whether there are gaps among groups of students who are engaging in HIPs or types of HIPs in order to refine OBPR implementation and organizational decision-making

Learning Outcome/Performance Indicator: Academic Self-Efficacy
Data Collection Instrument: Academic Self-Efficacy Scale (Chemers, Hu, & Garcia, 2001)
Purpose: To measure confidence in abilities

Learning Outcome/Performance Indicator: Attention and Emotion Regulation
Data Collection Instrument: Five Facet Mindfulness Questionnaire (FFMQ) (Baer et al., 2008)
Purpose: To measure five facets of mindfulness: observing, describing, acting with awareness, non-judging of inner experience, and non-reactivity to inner experience

Learning Outcome/Performance Indicator: Compassion/Pro-Social Behavior
Data Collection Instrument: Multidimensional Compassion Scale (MCS) (Jazaieri et al., 2014)
Purpose: To measure four components: awareness of suffering (cognitive component); sympathetic concern (empathy) triggered by suffering (affective component); desire to relieve suffering (intentional component); and readiness to help relieve suffering (action component)

Learning Outcome/Performance Indicator: Conscientiousness
Data Collection Instrument: Chernyshenko Conscientiousness Scales (CCS) (Green et al., 2015)
Purpose: To measure industriousness, order, self-control, traditionalism, virtue, and responsibility

Learning Outcome/Performance Indicator: Engagement
Data Collection Instrument: National Survey of Student Engagement (NSSE)
Purpose: To measure engagement of higher order learning, reflective and integrative learning, learning strategies, quantitative reasoning, collaborative learning, discussions with diverse others, student-faculty interactions, effective teaching practices, quality of interactions, and supportive environment
Learning Outcome/Performance Indicator: Grit
Data Collection Instrument: Grit Scale (Duckworth & Quinn, 2009)
Purpose: To measure perseverance in achieving goals and consistency of interests over time

Learning Outcome/Performance Indicator: Growth Mind-Set
Data Collection Instrument: Growth Mindset Intelligence Scale (Dweck, 1999)
Purpose: To measure self-perceptions of abilities

Learning Outcome/Performance Indicator: Mental Well-Being
Data Collection Instrument: Warwick-Edinburgh Mental Well-being Scale (WEMWBS) (2006)
Purpose: To measure overall mental well-being and the effects of participation in programs and projects on mental well-being

Learning Outcome/Performance Indicator: Personal and Social Responsibility
Data Collection Instrument: Personal and Social Responsibility Inventory (Reason, 2013)
Purpose: To measure five dimensions: striving for excellence; cultivating academic integrity; contributing to larger community; taking seriously the perspectives of others; and ethical and moral reasoning

Learning Outcome/Performance Indicator: Psychological Well-Being
Data Collection Instrument: Psychological Well-Being (Ryff & Keyes, 1995)
Purpose: To measure autonomy, environmental mastery, personal growth, positive relations with others, purpose in life, and self-acceptance

Learning Outcome/Performance Indicator: Resilience
Data Collection Instrument: Brief Resilience Scale (Smith et al., 2008)
Purpose: To measure ability to “bounce back” following an adverse experience

Learning Outcome/Performance Indicator: Self-Control
Data Collection Instrument: Self-Control Scale (Tsukayama, Duckworth, & Kim, 2013)
Purpose: To measure ability to regulate interpersonal and social impulsivity

Learning Outcome/Performance Indicator: Self-Regulation
Data Collection Instrument: Self-Regulation Scale (Schwarzer, Diehl, & Schmitz, 1999)
Purpose: To measure attentional control in goal pursuit

Learning Outcome/Performance Indicator: Sense of Belonging
Data Collection Instrument: Sense of Belonging Scale (Hoffman et al., 2002)
Purpose: To measure perceived peer support, faculty support/comfort, classroom comfort, isolation, and empathetic faculty understanding
References
Baer, R. A., Smith, G. T., Lykins, E., Button, D., Krietemeyer, J., Sauer, S., . . . Williams, J. M. G. (2008). Construct validity of the five facet mindfulness questionnaire in meditating and nonmeditating samples. Assessment, 15(3), 329–342.
Chemers, M. M., Hu, L., & Garcia, B. F. (2001). Academic self-efficacy and first-year college student performance and adjustment. Journal of Educational Psychology, 93, 55–64.
Duckworth, A. L., & Quinn, P. D. (2009). Development and validation of the short grit scale (Grit-S). Journal of Personality Assessment, 91, 166–174.
Dweck, C. S. (1999). Self-theories: Their role in motivation, personality and development. Philadelphia, PA: Taylor & Francis/Psychology Press.
Green, J. A., O’Connor, D. B., Gartland, N., & Roberts, B. W. (2015). The Chernyshenko Conscientiousness Scales: A new facet measure of conscientiousness. Assessment, 23(3), 374–385.
Hoffman, M., Richmond, J., Morrow, J., & Salomone, K. (2002). Investigating sense of belonging in first year college students. Journal of College Student Retention, 4(3), 227–256.
Jazaieri, H., McGonigal, K., Jinpa, T., Doty, J. R., Gross, J. J., & Goldin, P. R. (2014). A randomized controlled trial of compassion cultivation training: Effects on mindfulness, affect, and emotion regulation. Motivation and Emotion, 38(1), 23–35.
Kuh, G. D., Gambino, L. M., Bresciani Ludvik, M., & O’Donnell, K. (2018, February). Using ePortfolio to document and deepen the impact of HIPs on learning dispositions (Occasional Paper No. 32). Urbana, IL: University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment (NILOA). Retrieved from http://learningoutcomesassessment.org/occasionalpaperthirtytwo.html
National Survey of Student Engagement (NSSE). (n.d.). About NSSE. Retrieved from http://nsse.indiana.edu/html/about.cfm
Reason, R. (2013). Creating and assessing campus climates that support personal and social responsibility. Retrieved from https://www.aacu.org/publications-research/periodicals/creating-and-assessing-campus-climates-support-personal-and-social
Ryff, C. D., & Keyes, C. L. M. (1995). The structure of psychological well-being revisited. Journal of Personality and Social Psychology, 69(4), 719–727.
Schwarzer, R., Diehl, M., & Schmitz, G. S. (1999). Self-Regulation Scale. Retrieved from https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2442651/#R32
Smith, B. W., Dalen, J., Wiggins, K., Tooley, E., Christopher, P., & Bernard, J. (2008). The brief resilience scale: Assessing the ability to bounce back. International Journal of Behavioural Medicine, 15, 194–200.
Tsukayama, E., Duckworth, A. L., & Kim, B. E. (2013). Domain-specific impulsivity in school-age children. Developmental Science, 16(6), 879–893.
Warwick-Edinburgh Mental Well-being Scale (WEMWBS). (2006). NHS Health Scotland, University of Warwick and University of Edinburgh. Retrieved from https://warwick.ac.uk/fac/med/research/platform/wemwbs/
APPENDIX D
OREGON STATE UNIVERSITY STUDENT AFFAIRS LEARNING GOALS AND OUTCOMES ALIGNMENT GRID
Date: _____________ Person completing form: ___________________
Learning Goals
A. Effective Communication
B. Healthy Living
C. Active Citizenship and Responsibility
D. Interpersonal and Intrapersonal Competence
E. Critical Thinking and Analytical Skills
F. Attitude of Inquiry

Department Name:

Learning Outcomes | Learning Goals: A  B  C  D  E  F
1.
2.
3.
4.
5.

Additional lines can be added as needed. See Student Affairs Research and Evaluation Web page for electronic copy of this form. http://oregonstate.edu/admin/student_affairs/research/res_introduction.html
The contents of this appendix are reprinted with permission of Oregon State University.
APPENDIX E

HAGERSTOWN COMMUNITY COLLEGE MAP TEMPLATE

Program Name:

Outcome | Course name | Course name | Course name | Course name
Program Outcome #1: List program outcomes here | List course outcomes that support this program outcome here (one cell per course)
Program Outcome #2: List program outcomes here | List course outcomes that support this program outcome here (one cell per course)
Program Outcome #3: List program outcomes here | List course outcomes that support this program outcome here (one cell per course)
Program Outcome #4: List program outcomes here | List course outcomes that support this program outcome here (one cell per course)
Program Outcome #5: List program outcomes here | List course outcomes that support this program outcome here (one cell per course)

Reprinted with permission of Hagerstown Community College.
APPENDIX F
CALIFORNIA STATE UNIVERSITY, SACRAMENTO, EXAMPLE OF ANNUAL OUTCOMES-BASED DATA COLLECTION
The Assessment Plan for the Department of Design
February 2017

The Department’s Assessment Strategy
The Department of Design consists of three programs: Graphic Design, Interior Design, and Photography. Although the programs prepare students for different career paths, their shared core courses reflect a common foundation in art and design. For this reason, the faculty has elected to develop a single assessment plan for the department. The learning goals and expectations outlined as follows are introduced in the core program and subsequently developed in each program’s upper division required courses.
The Department’s Program Goals and Learning Expectations
Program Goal #1: The Graphic Design, Interior Design, or Photography major at CSU Sacramento will be expected to study, review, and reflect upon the history of their discipline and its role in shaping the physical and social environment in which we live.
Student Expectations of Goal #1: The student should be able to . . .
(a) recognize major works from each of the historical periods that comprise their discipline.
(b) demonstrate an understanding of the major trends and styles that have shaped their discipline.
(c) describe how developments in their discipline reflect cultural, political, and economic factors.
(d) apply critical techniques from art and design history to his/her own work and to work of his/her contemporaries.

Program Goal #2: The Graphic Design, Interior Design, or Photography major at Sacramento State will be expected to create professional-quality work that responds creatively to project requirements.
Student Expectations of Goal #2: The student should be able to . . .
(a) analyze the basic elements of functional and aesthetic problems and synthesize these elements into an overall program statement.
(b) create work that responds to functional and aesthetic requirements and that develops a clearly articulated concept.
(c) draw inspiration from the history of their discipline, and understand their own work in relationship to that history.
(d) work confidently with a wide range of professional tools, including both traditional and digital technologies.

Program Goal #3: The Graphic Design, Interior Design, or Photography major at Sacramento State will be expected to explain their work graphically, verbally, and in writing.
Student Expectations of Goal #3: The student should be able to . . .
(a) communicate his/her ideas through models, photographs, and drawings. Visual material should illustrate the student’s solution to functional and aesthetic problems and also express the work’s organizing concept.
(b) present his/her work orally to a group comprised of teachers, clients, and colleagues. The student should be able to move comfortably back and forth between words and visual material in order to express his/her ideas.
(c) express in writing the functional and aesthetic concepts that underlay his/her work.
The Department’s Assessment Tools
Program Goal #1: The Graphic Design, Interior Design, or Photography major at Sacramento State will be expected to study, review, and reflect upon the history of their discipline and its role in shaping the physical and social environment in which we live.
Assessment Tools for Goal #1: The student . . .
(a) takes general art and design history courses as well as specialized courses in the history of their respective disciplines. These classes use a combination of in-class exams and term papers to assess the student’s learning.
(b) takes studio classes in which he/she must demonstrate, through written and verbal presentation of his/her work, an understanding of the discipline’s history and how his/her work relates to this history.

Program Goal #2: The Graphic Design, Interior Design, or Photography major at Sacramento State will be expected to create professional-quality work that responds creatively to project requirements.
Assessment Tools for Goal #2: The student . . .
(a) takes a series of studio classes in which he/she must produce work that responds to specific functional and aesthetic problems. In each class, the student presents his/her work orally and with drawings, models, and/or photographs. The presentation provides an opportunity for the student to have his/her work evaluated by visiting professors and outside professionals.
(b) takes a series of technical classes in which he/she learns the technology, traditional and digital, required to produce professional-level work in their field. Assessment in these classes is based on a series of exercises that tests the student’s knowledge.
(c) takes a senior portfolio class in which he/she produces a professional portfolio that is reviewed by professors and potential employers at a department-wide exhibition at the end of the spring term.

Program Goal #3: The Graphic Design, Interior Design, or Photography major at Sacramento State will be expected to explain their work graphically, verbally, and in writing.
See (a) and (c) under “Assessment Tools for Goal #2” in the previous section.

Reprinted with permission of California State University, Sacramento.
APPENDIX G
JAMES MADISON UNIVERSITY EXAMPLE OF AN ANNUAL ASSESSMENT REPORT TEMPLATE

2013–2014 EXAMPLE ANNUAL ASSESSMENT REPORT TEMPLATE

This template intends to make our annual assessment and its reports simple, clear, and of high quality not only for this academic year but also for the years to come. Thus, it explicitly specifies some of the best assessment practices and/or expectations implied in the four [Western Association of Schools and Colleges] WASC assessment rubrics we have used in the last few years (see the information* that has appeared in Appendices 1, 2a, 2b, and 7 in the Feedback for the 2011–2012 Assessment Report, Appendix 2 in the Feedback for the 2012–2013 Assessment Report, and Appendices 5 to 8 in the 2013–2014 Annual Assessment Guideline).

We understand some of our programs/departments have not used and/or adopted these best practices this year, and that is okay. You do not need to do anything extra this year, and ALL YOU NEED TO DO is to report what you have done this academic year. However, we hope our programs will use many of these best practices in the annual assessment in the future. We also hope to use the information from this template to build a digital database that is simple, clear, and of high quality.

If you find it necessary to modify or refine the wording or the content of some of the questions to address the specific needs of your program, please make the changes and highlight them in red. We will consider your suggestion(s). Thank you! If you have any questions or need any help, please send an e-mail to Dr. Amy Liu ([email protected]), Director of University Assessment. We are looking forward to working with you.

*The four WASC rubrics refer to: 1. WASC “Rubric for Assessing the Quality of Academic Program Learning Outcomes”; 2. WASC “Rubric for Assessing the Use of Capstone Experience for Assessing Program Learning Outcomes”; 3. WASC “Rubric for Assessing the Use of Portfolio for Assessing Program Learning Outcomes”; and 4. WASC “Rubric for Assessing the Integration of Student Learning Assessment into Program Reviews.”

The Learning And Development Outcomes Report (LADOR) was developed by James Madison University and is available at https://www.jmu.edu/studentaffairs/staff-resources/saac/templates.shtm. It is reprinted here with permission of James Madison University.
Part 1: Background Information
B1. Program name: [_____]
B2. Report author(s): [_____]
B3. Fall 2012 enrollment: [_____]
Use the Department Fact Book 2013 by OIR (Office of Institutional Research) to get the fall 2012 enrollment (http://www.csus.edu/oir/Data%20Center/Department%20Fact%20Book/Departmental%20Fact%20Book.html).
B4. Program type: [SELECT ONLY ONE]
Undergraduate baccalaureate major
Credential
Master’s degree
Doctorate: PhD/EdD
Other, specify:
Part 2: Six Questions for the 2013–2014 Annual Assessment
Question 1 (Q1): Program Learning Outcomes (PLOs) Assessed in 2013–2014.
Q1.1. Which of the following program learning outcomes (PLOs) or Sac State Baccalaureate Learning Goals did you assess in 2013–2014? (See 2013–2014 Annual Assessment Report Guidelines for more details). [CHECK ALL THAT APPLY]
Critical thinking (WASC 1)*
Information literacy (WASC 2)
Written communication (WASC 3)
Oral communication (WASC 4)
Quantitative literacy (WASC 5)
Inquiry and analysis
Creative thinking
Reading
Teamwork
Problem-solving
Civic knowledge and engagement—local and global
Intercultural knowledge and competency
Ethical reasoning
Foundations and skills for lifelong learning
Global learning
Integrative and applied learning
Overall competencies for General Education Knowledge
Overall competencies in the major/discipline
Others. Specify any PLOs that were assessed in 2013–2014 but not included:
a.
b.
c.
* One of the WASC’s new requirements is that colleges and universities report on the level of student performance at graduation in five core areas: critical thinking, information literacy, written communication, oral communication, and quantitative literacy.
Q1.1.1. Please provide more detailed information about the PLO(s) you checked:
Q1.2. Are your PLOs closely aligned with the mission of the university?
Yes
No
Don’t know

Q1.3. Is your program externally accredited (except for WASC)?
Yes
No (If no, go to Q1.4)
Don’t know (Go to Q1.4)

Q1.3.1. If yes, are your PLOs closely aligned with the mission/goals/outcomes of the accreditation agency?
Yes
No
Don’t know
Q1.4. Have you used the Degree Qualification Profile (DQP)* to develop your PLO(s)?
Yes
No, but I know what DQP is.
No. I don’t know what DQP is.
Don’t know
* Degree Qualifications Profile (DQP)—a framework funded by the Lumina Foundation that describes the kinds of learning and levels of performance that may be expected of students who have earned an associate, baccalaureate, or master’s degree. Please see the links for more details: http://www.luminafoundation.org/publications/The_Degree_Qualifications_Profile.pdf and http://www.learningoutcomeassessment.org/DQPNew.html.
Question 2 (Q2): Standards of Performance/Expectations for EACH PLO.
Q2.1. Has the program developed/adopted EXPLICIT standards of performance/expectations for the PLO(s) you assessed in the 2013–2014 Academic Year? (For example: We expect 70% of our students to achieve at least a score of 3 on the Written Communication VALUE rubric.)
Yes, we have developed standards/expectations for ALL PLOs assessed in 2013–2014.
Yes, we have developed standards/expectations for SOME PLOs assessed in 2013–2014.
No (If no, go to Q2.2)
Don’t know (Go to Q2.2)
Not applicable (Go to Q2.2)
Q2.1.1. If yes, what are the desired levels of learning, including the criteria and standards of performance/expectations, especially at or near graduation, for EACH PLO assessed in 2013–2014 Academic Year? (For example: what will tell you if students have achieved your expected level of performance for the learning outcome.) Please provide the rubric and/or the expectations that you have developed for EACH PLO one at a time in the space provided. [WORD LIMIT: 300 WORDS FOR EACH PLO]
Q2.2. Have you published the PLO(s)/expectations/rubric(s) you assessed in 2013–2014?
Yes
No (If no, go to Q3.1)
Q2.2.1. If yes, where were the PLOs/expectations/rubrics published? [CHECK ALL THAT APPLY]
In SOME course syllabi/assignments in the program that claim to introduce/develop/master the PLO(s)
In ALL course syllabi/assignments in the program that claim to introduce/develop/master the PLO(s)
In the student handbook/advising handbook
In the university catalogue
On the academic unit website or in the newsletters
In the assessment or program review reports/plans/resources/activities
In the new course proposal forms in the department/college/university
In the department/college/university’s strategic plans and other planning documents
In the department/college/university’s budget plans and other resource allocation documents
In other places, specify:
Question 3 (Q3): Data, Results, and Conclusions for EACH PLO
Q3.1. Was assessment data/evidence collected for 2013–2014?
Yes
No (If no, go to Part 3: Additional Information)
Don’t know (Go to Part 3)
Not Applicable (Go to Part 3)
Q3.2. If yes, was the data scored/evaluated for 2013–2014?
Yes
No (If no, go to Part 3: Additional Information)
Don’t know (Go to Part 3)
Not Applicable (Go to Part 3)
Q3.3. If yes, what DATA have you collected? What are the results, findings, and CONCLUSION(s) for EACH PLO assessed in 2013–2014? In what areas are students doing well and achieving the expectations? In what areas do students need improvement? Please provide a simple and clear summary of the key data and findings, including tables and graphs if applicable for EACH PLO one at a time. [WORD LIMIT: 600 WORDS FOR EACH PLO]
Q3.4. Do students meet the expectations/standards of performance as determined by the program and achieve the learning outcomes? [PLEASE MAKE SURE THE PLO YOU SPECIFY HERE IS THE SAME ONE YOU CHECKED/SPECIFIED IN Q1.1].
Q3.4.1. First PLO: [Critical Thinking]
Exceed expectation/standard
Meet expectation/standard
Do not meet expectation/standard
No expectation/standard set
Don’t know
[NOTE: IF YOU HAVE MORE THAN ONE PLO, YOU NEED TO REPEAT THE TABLE IN Q3.4.1 UNTIL YOU INCLUDE ALL THE PLO(S) YOU ASSESSED IN 2013–2014.]

Q3.4.2. Second PLO: [_____]
Exceed expectation/standard
Meet expectation/standard
Do not meet expectation/standard
No expectation/standard set
Don’t know
Question 4 (Q4): Evaluation of Data Quality: Reliability and Validity.
Q4.1. How many PLOs in total did your program assess in the 2013–2014 academic year? [_____]
Q4.2. Please choose ONE ASSESSED PLO as an example to illustrate how you use direct, indirect, and/or other methods/measures to collect data. If you only assessed one PLO in 2013–2014, YOU CAN SKIP this question. If you assessed MORE THAN ONE PLO, please check ONLY ONE PLO EVEN IF YOU ASSESSED MORE THAN ONE PLO IN 2013–2014.
Critical thinking (WASC 1)
Information literacy (WASC 2)
Written communication (WASC 3)
Oral communication (WASC 4)
Quantitative literacy (WASC 5)
Inquiry and analysis
Creative thinking
Reading
Teamwork
Problem-solving
Civic knowledge and engagement—local and global
Intercultural knowledge and competency
Ethical reasoning
Foundations and skills for lifelong learning
Global learning
Integrative and applied learning
Overall competencies for GE Knowledge
Overall competencies in the major/discipline
Other PLOs. Specify:
Direct Measures
Q4.3. Were direct measures used to assess this PLO?
Yes
No (If no, go to Q4.4)
Don’t know (Go to Q4.4)
Q4.3.1. Which of the following DIRECT measures were used? [Check all that apply]
Capstone projects (including theses, senior theses), courses, or experiences
Key assignments from other CORE classes
Key assignments from other classes
Classroom based performance assessments such as simulations, comprehensive exams, critiques
External performance assessments such as internships or other community based projects
ePortfolios
Other portfolios
Other measures. Specify:
Q4.3.2. Please provide the direct measure(s) [key assignment(s)/project(s)/portfolio(s)] that you used to collect the data. [WORD LIMIT: 300 WORDS]
Q4.3.2.1. Was the direct measure(s) [key assignment(s)/project(s)/portfolio(s)] aligned directly with the rubric/criterion?
Yes
No
Don’t know
Q4.3.3. Was the direct measure(s) [key assignment(s)/project(s)/portfolio(s)] aligned directly with the PLO?
Yes
No
Don’t know
Q4.3.4. How was the evidence scored/evaluated? [Select one only]
No rubric was used to interpret the evidence (If checked, go to Q4.3.7)
Used rubric developed/modified by the faculty who teaches the class
Used rubric developed/modified by a group of faculty
Used rubric pilot-tested and refined by a group of faculty
Used other means. Specify:
Q4.3.5. What rubric/criterion was adopted to score/evaluate the key assignments/projects/portfolio? [Select one only]
The VALUE rubric(s)
Modified VALUE rubric(s)
A rubric that is totally developed by local faculty
Used other means. Specify:
Q4.3.6. Was the rubric/criterion aligned directly with the PLO?
Yes
No
Don’t know
Q4.3.7. Were the evaluators (e.g., faculty or advising board members) who reviewed student work calibrated to apply assessment criteria in the same way?
Yes
No
Don’t know
Q4.3.8. Were there checks for interrater reliability?
Yes
No
Don’t know
Q4.3.9. Were the sample sizes for the direct measure adequate?
Yes
No
Don’t know
Q4.3.10. How did you select the sample of student work (papers, projects, portfolios, etc.)? Please briefly specify here:

Indirect Measures
Q4.4. Were indirect measures used to assess the PLO?
Yes
No (If no, go to Q4.5)
Q4.4.1. Which of the following indirect measures were used?
National student surveys (e.g., NSSE, etc.)
University conducted student surveys (OIR [Office of Institutional Research] surveys)
College/Department/program conducted student surveys
Alumni surveys, focus groups, or interviews
Employer surveys, focus groups, or interviews
Advisory board surveys, focus groups, or interviews
Others, specify:
Q4.4.2. If surveys were used, were the sample sizes adequate?
Yes
No
Don’t know
Q4.4.3. If surveys were used, please briefly specify how you select your samples. What is the response rate?

Other Measures
Q4.5. Were external benchmarking data used to assess the PLO?
Yes
No (If no, go to Q4.6)
Q4.5.1. Which of the following measures was used?
National disciplinary exams or state/professional licensure exams
General knowledge and skills measures (e.g., CLA, CAAP, ETS PP, etc.)
Other standardized knowledge and skill exams (e.g., ETS, GRE, etc.)
Others, specify:
Q4.6. Were other measures used to assess the PLO?
Yes
No (Go to Q4.7)
Don’t know (Go to Q4.7)
Q4.6.1. If yes, please specify: [_____]

Alignment and Quality
Q4.7. Please describe how you collected the data? For example, in what course(s) (or by what means) were data collected? How reliable and valid is the data? [WORD LIMIT: 300 WORDS]
Q4.8. How many assessment tools/methods/measures in total did you use to assess this PLO? [_____] NOTE: IF IT IS ONLY ONE, GO TO Q5.1.
Q4.8.1. Did the data (including all the assignments/projects/portfolios) from all the different assessment tools/measures/methods directly align with the PLO?
Yes
No
Don’t know
Q4.8.2. Were ALL of the assessment tools/measures/methods that were used good measures for the PLO?
Yes
No
Don’t know
Question 5 (Q5): Use of Assessment Data.
Q5.1. To what extent have the assessment results from 2012–2013 been used? [CHECK ALL THAT APPLY]
For each item, rate the extent of use: Very Much (1), Quite a Bit (2), Some (3), Not at all (4), Not Applicable (9).
1. Improving specific courses
2. Modifying curriculum
3. Improving advising and mentoring
4. Revising learning outcomes/goals
5. Revising rubrics and/or expectations
6. Developing/updating assessment plan
7. Annual assessment reports
8. Program review
9. Prospective student and family information
10. Alumni communication
11. WASC accreditation (regional accreditation)
12. Program accreditation
13. External accountability reporting requirement
14. Trustee/Governing Board deliberations
15. Strategic planning
16. Institutional benchmarking
17. Academic policy development or modification
18. Institutional improvement
19. Resource allocation and budgeting
20. New faculty hiring
21. Professional development for faculty and staff
22. Other Specify:
Q5.1.1. Please provide one or two best examples to show how you have used the assessment data.
Q5.2. As a result of the assessment effort in 2013–2014 and based on the prior feedback from OAPA, do you anticipate making any changes for your program (e.g., course structure, course content, or modification of program learning outcomes)?
Yes
No (If no, go to Q5.3)
Don’t know (Go to Q5.3)
Q5.2.2. Is there a follow-up assessment on these areas that need improvement? Yes No Don’t know Q5.3. Many academic units have collected assessment data on aspects of a program that are not related to program learning outcomes (i.e., impacts of an advising center, etc.). If your program/academic unit has collected assessment data in this way, please briefly report your results here. [WORD LIMIT: 300 WORDS]
Question 6 (Q6). Which program learning outcome(s) do you plan to assess next year?
Critical thinking (WASC 1)
Information literacy (WASC 2)
Written communication (WASC 3)
Oral communication (WASC 4)
Quantitative literacy (WASC 5)
Inquiry and analysis
Creative thinking
Reading
Teamwork
Problem-solving
Civic knowledge and engagement—local and global
Intercultural knowledge and competency
Ethical reasoning
Foundations and skills for lifelong learning
Global learning
Integrative and applied learning
Overall competencies for GE Knowledge
Overall competencies in the major/discipline
Others. Specify any PLOs that the program is going to assess but not included:
a.
b.
c.
Part 3: Additional Information
A1. In which academic year did you develop the current assessment plan?
Before 2007
2007–2008
2008–2009
2009–2010
2010–2011
2011–2012
2012–2013
2013–2014
Have not yet developed a formal assessment plan
A2. In which academic year did you last update your assessment plan?
Before 2007–2008
2007–2008
2008–2009
2009–2010
2010–2011
2011–2012
2012–2013
2013–2014
Have not yet updated the assessment plan
A3. Have you developed a curriculum map for this program?
Yes
No
Don’t know
A4. Has the program indicated explicitly where the assessment of student learning occurs in the curriculum?
Yes
No
Don’t know
A5. Does the program have any capstone class?
Yes
No
Don’t know
A5.1. If yes, please list the course number for each capstone class: [_____]
A6. Does the program have ANY capstone project?
Yes
No
Don’t know
A7. Name of the academic unit: [_____]
A8. Department in which the academic unit is located: [_____]
A9. Department Chair’s Name: [_____]
A10. Total number of annual assessment reports submitted by your academic unit for 2013–2014: [_____]
A11. College in which the academic unit is located:
Arts and Letters
Business Administration
Education
Engineering and Computer Science
Health and Human Services
Natural Science and Mathematics
Social Sciences and Interdisciplinary Studies
Continuing Education (CCE)
Other, specify:
Undergraduate Degree Program(s):
A12. Number of undergraduate degree programs the academic unit has: [_____]
A12.1. List all the name(s): [_____]
A12.2. How many concentrations appear on the diploma for this undergraduate program? [_____]
Master’s Degree Program(s):
A13. Number of master’s degree programs the academic unit has: [_____]
A13.1. List all the name(s): [_____]
A13.2. How many concentrations appear on the diploma for this master’s program? [_____]
Credential Program(s):
A14. Number of credential degree programs the academic unit has: [_____]
A14.1. List all the names: [_____]
Doctorate Program(s):
A15. Number of doctorate degree programs the academic unit has: [_____]
A15.1. List the name(s): [_____]
A16. Would this assessment report apply to other program(s) and/or diploma concentration(s) in your academic unit?*
Yes
No
*If the assessment conducted for this program (including the PLO(s), the criteria and standards of performance/expectations you established, the data you collected and analyzed, the conclusions of the assessment) is the same as the assessment conducted for other programs within the academic unit, you need to submit only one assessment report.
A16.1. If yes, please specify the name of each program:
A16.2. If yes, please specify the name of each diploma concentration:
APPENDIX H
UNIVERSITY OF HAWAI‘I-MĀNOA EXAMPLE OF ASSESSMENT RESULTS AND IMPROVEMENT PLAN TEMPLATE
Modify this template to meet your needs.
Department/Program and Degree:
Assessment Project Name:
Semester/Year Evidence Collected:
Program Assessment Coordinator:
Department Chair:
Person submitting results:
Date submitted:
Executive Summary:
1. State the SLO(s) That Was Assessed, Targeted, or Studied —or— State the Assessment Question(s) and/or Goal(s) of Assessment Activity
What did the program want to find out?
2. State the Type(s) of Evidence Gathered
To assess the outcome or answer the assessment question, what evidence was collected?
3. State How the Evidence Was Interpreted, Evaluated, or Analyzed
What process was used to interpret, evaluate, or analyze the evidence? Who was involved?
4. State How Many Pieces of Evidence Were Collected
If applicable, please include the sampling technique used.
5. Summarize the Actual Results
6. In Addition to the Actual Results, Were There Additional Conclusions or Discoveries?
7. Briefly Describe the Distribution and Discussion of Results
Who received the results? In what forum did the discussion of the results take place?
8. Use of Results/Program Modifications: State How the Program Used the Results --or-- Explain Planned Use of Results
Please be specific. Include a timeline and key people as appropriate.
9. Reflect on the Assessment Process
What went well? What didn’t go well? Is there anything related to assessment procedures your program would do differently next time?
10. Other Important Information

The contents of this appendix were reprinted with permission of the University of Hawai‘i at Mānoa.
APPENDIX I
AZUSA PACIFIC UNIVERSITY FIVE-YEAR ACTION PLAN TEMPLATE
Five-Year Action Plan:
Program Name:
After the external review has been completed, program faculty, chair(s), and the dean collaborate to develop a proposed action plan. The action plan serves as a five-year planning document for the program. The proposed action plan should respond to all the recommendations made in the report by the external reviewer. If the program does not plan to implement an action recommended in the external reviewer report, the recommendation should be listed with an explanation as to the rationale for not implementing the recommendation. The proposed action plan may include recommendations based on information from other sources that has come to light during the program review process. Proposed action plans should be as concise as possible.

Once the proposed action plan has been completed and signed by the program chair and dean, it is forwarded to Ann Rose ([email protected]) in the Office for Strategy and Educational Effectiveness (SEE). SEE will schedule a meeting between the provost, the dean, the chair, and the vice provost for strategy and educational effectiveness to discuss the proposed action plan. After this meeting the action plan is revised as necessary to reflect recommendations made during the meeting. After revision, the final action plan should be signed by the chair and dean and submitted to Ann Rose in SEE who will obtain the provost’s signature. At this point, your program review is complete!

Final action plans are posted on the Program Review Sharepoint site. Programs will be asked to submit a brief progress report on action plans at the mid-point of the seven-year program review cycle. Action plans and mid-cycle progress reports are forwarded to the Institutional Effectiveness Committee which reviews action plans and mid-cycle review progress reports from programs.
Please use this template to complete your action plan. Add rows as needed.

Recommendation from External Review (verbatim) | Optional: Recommendation from other source (include supporting evidence) | Proposed Actions/Responses (include rationale, if needed) | Responsible Person; Start and Completion Dates | Measurements of Success
(add rows as needed)

Approvals:
Program / Department Chair: ______________________________ Date: ________________
Dean: ______________________________ Date: ________________
Provost: ______________________________ Date: ________________
APPENDIX J
OUTCOMES-BASED ASSESSMENT PLAN AND REPORT FOR PROGRAM REVIEW PURPOSES CHECKLIST
This checklist is designed to accompany the assessment plan and report for program review purposes. All questions should be answered as either (a) yes—present in the proposed plan or (b) no—not present in the proposed plan. If no, an explanation needs to be provided for why that component is missing. The intention of this checklist is to simply guide institutions in selecting which components to include in their outcomes-based assessment program review (OBPR) process. Furthermore, if applicable and if it is helpful to the reviewer and the one being reviewed, the reviewer can rate the quality of the component as 5 = excellent, 4 = very good, 3 = good, 2 = average, 1 = below average, 0 = not present.
Overall/General
1. Is the plan and/or report written to conform to American Psychological Association (APA) formatting guidelines (6th edition)?
2. Is the plan and/or report void of spelling errors?
3. Does the plan and/or report use proper grammar?
4. Was the plan and/or report submitted by the posted due date?
5. If the plan and/or report includes appendices, are they properly and accurately referred to within the plan?
6. Does the plan and/or report include a properly formatted APA list of references, if applicable?
Program Name
1. Does the plan and/or report provide the program/project/service area name?
2. Does the program name provide an indication of the scope of the OBPR project?
3. Does the plan and/or report list the primary contact information of the person who can answer questions about the plan and/or report?
Program Mission or Purpose
1. Does the plan and/or report provide the program/project/service area mission or purpose statement?
2. Does the plan and/or report provide an explanation of how this program mission or purpose aligns with the mission of the department, college, division, or university wherein it is organized?
3. Does the plan and/or report explain how the program aligns with institutional values and priorities?
High Achievement for All Students (HAAS) Statement
1. Does the plan and/or report indicate how this program has been designed to advance HAAS?
2. Does the plan and/or report list performance indicators that will demonstrate the closing of achievement gaps and the demonstration of high achievement expectations for all students?
3. Are there related HAAS goals for each performance indicator?
4. Are there related outcomes for each HAAS goal and corresponding performance indicator?
5. Is there indication of how the identity characteristics (e.g., race, ethnicity, gender, sexual orientation, disability) and intersection of identity characteristics of students, faculty, and staff will be aggregated for each outcome, as appropriate?
Descriptive Overview
1. Does the plan and/or report describe the program that is being assessed in a general manner that would be understood by people outside of the program?
2. Does the plan and/or report introduce any learning, development, and engagement theories that undergird the program goals and outcomes?
3. Does the plan and/or report describe a brief history of the program?
4. Does the plan and/or report introduce other relevant literature, such as professional standards or accreditation requirements, indicating why the program exists and what it is intended to accomplish?
5. Does the plan and/or report include a vision statement, market research, and/or community needs assessment about why the program came into being or explain the importance of the program’s existence?
6. Does the plan and/or report indicate how the program mission, purpose, goals, and outcomes were derived?
7. Where literature is not obtainable or accessible, does the plan and/or report list assumptions about the program?
Program Goals
1. Does the plan and/or report provide goals that are broad, general statements of what the program expects participants to be able to do or to know?
2. Does the plan and/or report align each program goal to department, college, division, and university goals or strategic initiatives?
3. Does the plan and/or report align each program goal to each HAAS goal and/or performance indicator?
4. Does the plan and/or report describe the alignment of program goals to the program mission?
5. Does the plan and/or report assist in your understanding of how meeting program goals may mean meeting higher-level organization goals and strategic planning initiatives, such as HAAS?
Outcomes
1. Does the plan and/or report include outcomes that are detailed and specific statements derived from the goals?
2. Do the outcomes describe what programs expect the end result of their efforts to be?
3. Can you identify participant learning and development outcomes?
4. Can you identify other program outcomes that address student services, program processes, enrollment management, research, development, alumni outreach, and other practices (if applicable)?
5. Is each outcome aligned with a program goal?
6. Is each outcome aligned with a relevant HAAS goal and/or performance indicator?
Planning for Delivery of Outcomes/Outcomes-Alignment Matrix
1. Is an easy-to-read outcome delivery map or a curriculum alignment matrix included?
2. Is it clear that there is an opportunity provided to participants of the program that enables each participant to achieve each listed outcome?
Evaluation Methods and Tools
1. Does the plan and/or report describe a detailed inquiry methodology?
2. Does the plan and/or report describe the assessment tools and methods (e.g., observation with a criteria checklist, survey with specific questions identified, essay with a rubric, role-playing with a criteria checklist) that will be used to evaluate EACH outcome?
3. Does the plan and/or report identify the sample or population that will be evaluated for each outcome? (This can go here or in the Implementation of Assessment Process section.)
4. Does the plan and/or report provide a description of how the sample size was selected? (This can go here or in the Implementation of Assessment Process section.)
5. Does the plan describe the sample by race, ethnicity, gender identity, socioeconomic status, and other relevant identifiers? (This can go here or in the Implementation of Assessment Process section.)
6. Does the plan and/or report identify one or more evaluation methods or tools for each outcome?
7. Does the plan and/or report include the criteria that will be used with the tool for each outcome to determine whether the outcome has been met?
8. Does the plan and/or report provide a rationale for the measurements used to assess each outcome (e.g., why certain outcomes were measured quantitatively, while others were measured qualitatively, or using mixed-methods)?
9. Does the plan and/or report provide definitions of variables?
10. Does the plan and/or report provide a description of how the analyses will be conducted or were conducted? (This can go here or in the Implementation of Assessment Process section.)
11. Does the plan and/or report provide any other relevant discussion of methodological questions important to the context of the program being assessed, such as questions raised by previous or current accreditation, state, or federal standards?
12. Does the plan and/or report indicate (if applicable) the limitations of the evaluation methods or tools? (This can go here or in the Limitations and Assumptions section.)
13. Does the plan and/or report include the actual assessment and evaluation tools in the appendices?
Level of Achievement Expected
1. Does the plan and/or report indicate a particular expected level of achievement for each outcome?
2. Does the plan and/or report indicate the level of expected achievement for all program participants?
3. Does the plan and/or report indicate the expected level of achievement for each performance indicator?
4. Does the plan and/or report indicate who determined that expected level of achievement (either for the outcome or for the performance indicator)?
5. Does the plan and/or report indicate how the expected level of achievement was determined (either for the outcome or for the performance indicator)?
Limitations and Assumptions
1. Does the plan and/or report include a list of limitations?
2. Does the plan and/or report include a list of assumptions?
3. Does the plan and/or report detail how race, gender, ethnicity, and other identity characteristics may have been categorized together along with the assumptions and limitations that were made as a result?
Implementation of Assessment Process
1. Does this section describe the plan for the implementation of the assessment process? (In the case of the report, does it indicate what was completed?)
2. Does the implementation plan identify the individuals responsible for conducting each step of the evaluation process? (In the case of the report, does it indicate what was completed?)
3. Does it provide a time line for implementation and include the points in time when each outcome will be evaluated? (In the case of the report, does it indicate what was completed?)
4. Does the plan identify the individuals who will be participating in interpreting the data and making recommendations? (In the case of the report, does it indicate who participated and how?)
5. Does the plan and/or report provide a timeline for implementing the decisions and recommendations?
6. Does the plan describe how the assessment results will be communicated to stakeholders, including who will see the results, when they will see the results, and who will be involved in making decisions about the program based upon the assessment results? (In the case of the report, does it indicate what was completed?)
7. Does the plan describe who will be connecting the outcomes to the program goals and other performance indicators, including HAAS indicators and goals? (In the case of the report, does it indicate what was completed?)
8. Does the plan include a list of resources (e.g., time, professional development, specific assessment or benchmarking tools that must be purchased, consultants, data entry professionals or analysts that must be hired, etc.) and corresponding budget, if applicable, that need to be provided in order to assure a quality OBPR process? (In the case of the report, does it indicate what resources were used and how much was spent?)
9. Does the plan describe how results will be communicated to all of the stakeholders? (In the case of the report, does it indicate how this was completed?)
Results
1. Are the results summarized for EACH outcome that was evaluated?
2. Are the results summarized for EACH HAAS goal and other performance indicators or benchmarks that were used in the evaluation?
3. In the summary of the results, is there a brief narrative that indicates whether the results met the expected level, particularly relating to the various ways that participant results (e.g., faculty, staff, and students) were disaggregated by characteristic identity and intersection of identities?
4. Are detailed results, if applicable, contained as tables, charts, or narrative in the appendix?
5. Is there a narrative about the process to verify/validate/authenticate the results for each outcome that was evaluated?
6. Is there a brief narrative that illustrates whether results were discussed with students, alumni, other program faculty and/or administrators, or external reviewers?
7. Are the results generated from this OBPR linked to any other program, college, or institutional performance indicators?
   a. And if so, is there a brief narrative describing the linkage?
   b. Is there a narrative for the rationale of linking the results to those performance indicators?
8. Have the limitations and assumptions and the data analysis section of the plan been updated based on the process and the data analysis that was conducted?
9. Has everything else in the plan that may have changed during actual assessment tool dissemination, data collection, and analysis been updated?
Reflection, Interpretation, Decisions, and Recommendations
1. Are the decisions and recommendations summarized for each outcome?
2. Are the decisions and recommendations summarized for each HAAS goal and other performance indicators or benchmarks that were used in the evaluation?
3. Is the process described for how to determine whether the results were satisfactory for ALL participants? In other words, be sure to describe the process used to inform how the level of acceptable performance was determined and why it was determined as such, particularly for disaggregated results.
4. If applicable, is the benchmark data that informed your decision of whether your results were "good enough" included?
5. Is there a reminder of what the expectations are for a certain level of learning as well as why that level was expected?
6. Are the decisions and recommendations that may contribute to the improvement of higher-level goals and strategic initiatives, including HAAS, identified as such?
7. Are the people identified who participated in the reflection, interpretation, and discussion of the evidence that led to the recommendations and decisions?
8. Is there a summary of suggestions that arose for improving the assessment process, tools, criteria, outcomes, and goals?
9. Is there an indication of when each outcome will be evaluated again in the future (if the outcome is to be retained)?
10. Are those responsible for implementing the recommended changes identified?
11. If applicable, are the additional resources required to implement the required changes listed? If so, is there a description of what those are or might be?
12. Have you indicated whether a member of the organization at a higher organizational level needs to approve the new resources requested? If so, have you indicated who that is and how the results and recommendations will be communicated to that individual?
13. If making a recommendation for a change that resides outside of the program leadership's locus of control, have the individuals and the process for forwarding the recommendation and the action required/requested been indicated?
14. Are there recommendations for use of or change of use of institutional performance indicators?
15. Are there recommendations for use of or change of use of institutional predictive analytics?
Action Plan, Memorandum of Understanding, and/or Documentation of Higher-Level Organizational Feedback
1. Is there an action plan to indicate how results will be used?
2. Are the specific tasks that need to be completed included?
3. Is the primary responsible party for task completion listed?
4. Does the action plan include the time frame for implementing the decisions and who will be responsible for that implementation?
5. Does the action plan refer to an assessment plan or performance indicators for how the action plan will be determined successful? Or will the assessment of this action plan be included in the next OBPR cycle?
6. How have the decisions that inform this action plan been disseminated throughout the organization?
7. Have the appropriate people approved the action plan?
8. Have you included the plan and/or budget for the new resources, policy changes, or other information that is required to improve the program learning outcomes that were assessed?
9. Have you noted any changes that will be made to the program goals, outcomes, evaluative criteria, planning processes, and budgeting processes as a result of higher-level organizational feedback, if feedback was already obtained?
External Review Report (If Applicable)
1. Have the members of the external review committee been named and their roles and responsibilities listed?
2. Is there a narrative included describing how they were selected and approved by the appropriate authorizing agent?
3. Is the charge that was given to the external review committee as well as the time frame for completion indicated in the report?
4. Are the guiding questions that the external review members were given clearly articulated in the report?
5. Is the comparative analysis or benchmarking report included, if applicable or required?
6. Is there evidence that the recommendations made by the external reviewers were considered by program leaders and high-level organizational leaders prior to the action plan being determined?
Program Viability (If Applicable)
1. Has a decision been rendered to continue with action plan improvements or phase out the program?
2. Have capacity data (e.g., inputs, market research, community needs data, etc.) been considered prior to the program viability decision being made?
3. Has evidence of human flourishing been considered prior to the program viability decision being made?
4. Is there evidence that the OBPR process, which may or may not include an external reviewer report, has been used to make this decision?
Be Sure to Include Any Additional Appendices Generated From Completing Your OBPR Report
1. Have you included any detailed level results, assessment instruments, rubrics, and/or meeting minutes that identify where accepted levels of learning and development were identified and how?
2. Have you included any program syllabi, faculty curricula vitae, enrollment data, admission yield data, outreach data, budget data, market analysis, needs assessment, or any other pertinent data used in interpreting OBPR results?
3. Have you included information that illustrates how the summary of the learning from engaging in the OBPR process has been made public/transparent?
4. Have you included anything else that may be pertinent to understanding the context of this plan and/or report?
REFERENCES
Refer to Appendix A for Additional References
American Association of University Professors. (1970). 1940 statement of principles on academic freedom and tenure with 1970 interpretive comments. Retrieved from https://www.aaup.org/file/1940%20Statement.pdf
American College Personnel Association. (1996). The student learning imperative: Implications for student affairs. Washington, DC: ACPA.
Armerding, T. (2013, August 26). Big data without good analytics can lead to bad decisions. CSO from IDG. Retrieved from https://www.csoonline.com/article/2133888/metrics-budgets/big-data-without-good-analytics-can-lead-to-bad-decisions.html
Association for Institutional Research. (2017). IR and key performance indicators: An essential relationship. Retrieved from https://www.airweb.org/eAIR/specialfeatures/Pages/IR-KPI-Essential-Relationship.aspx
Astin, A. W. (1991). Assessment for excellence: The philosophy and practice of assessment and evaluation in higher education. New York, NY: Macmillan.
Astin, A. W. (1993). What matters in college? San Francisco, CA: Jossey-Bass.
Bach, C. (2010). Learning analytics: Targeting instruction, curricula and student support. Office of the Provost, Drexel University. Retrieved from https://pdfs.semanticscholar.org/c7cc/b6477c52435f049b82c9ba320a0717911474.pdf
Balanced Scorecard Institute. (2017). What is strategic planning? Retrieved from http://www.balancedscorecard.org/BSC-Basics/Strategic-Planning-Basics
Banta, T. W., Jones, E. A., & Black, K. E. (2009). Designing effective assessment: Principles and profiles of good practice. San Francisco, CA: Jossey-Bass.
Banta, T. W., & Palomba, C. A. (2014). Assessment essentials: Planning, implementing, and improving assessment in higher education (2nd ed.). San Francisco, CA: Wiley.
Blaich, C. F., & Wise, K. S. (2011, January). From gathering to using assessment results: Lessons from the Wabash National Study (NILOA Occasional Paper No. 8). Urbana, IL: University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment.
Blumenstyk, G. (2014). American higher education in crisis: What everyone needs to know. New York, NY: Oxford University Press.
Bok, D. C. (2013). Higher education in America. Princeton, NJ: Princeton University Press.
Bresciani, M. J. (2006). Outcomes-based academic and co-curricular program review: A compilation of institutional good practices. Sterling, VA: Stylus.
Bresciani, M. J., Gardner, M. M., & Hickmott, J. (2009). Demonstrating student success in student affairs. Sterling, VA: Stylus Publishing.
Bresciani, M. J., Gillig, B., Tucker, M., Weiner, L., & McCully, L. (2014). Exploring the use of evidence in resource allocation: Towards a framework for practice. Journal of Student Affairs, 22, 71–80.
Bresciani, M. J., Zelna, C. L., & Anderson, J. A. (2004). Techniques for assessing student learning and development: A handbook for practitioners. Washington, DC: NASPA.
Bresciani Ludvik, M. J. (Ed.). (2016). The neuroscience of learning and development: Enhancing creativity, compassion, critical thinking, and peace in higher education. Sterling, VA: Stylus.
Bresciani Ludvik, M. J. (2018). Positively transforming minds within educational systems: An inner-directed inquiry process for educators and the students they serve. Retrieved from http://rushingtoyoga.org/?page_id=286
Bright, K. (2016, September 28). Why bad data always makes for bad decisions. Marketing Tech. Retrieved from https://www.marketingtechnews.net/news/2016/sep/28/why-bad-data-always-makes-bad-decisions/
Business Dictionary. (2017). Action planning. Retrieved from http://www.businessdictionary.com/definition/action-plan.html
Calaprice, A. (Ed.). (2010). The ultimate quotable Einstein. Princeton, NJ: Princeton University Press. Retrieved from https://quoteinvestigator.com/2014/05/22/solve/#note-8929–1
Carr, S. (2008). Editor's notebook: A quotation with a life of its own. Retrieved from https://www.psqh.com/analysis/editor-s-notebook-a-quotation-with-a-life-of-its-own/
Coelho, P. (2014). The alchemist. New York, NY: HarperCollins.
Cohen, A. M., & Kisker, C. B. (2010). The shaping of American higher education: Emergence and growth of the contemporary system (2nd ed.). San Francisco, CA: Wiley.
Council of Regional Accrediting Commissions. (2004). Regional accreditation and student learning: Improving institutional practice. Washington, DC: Pew Charitable Trusts.
Cuseo, J. (2017). Student success: Definition, outcomes, principles and practices. Retrieved from https://www2.indstate.edu/studentsuccess/pdf/Defining%20Student%20Success.pdf
Dolence, M. G., & Norris, D. M. (1995). Transforming higher education: A vision for learning in the 21st century. Ann Arbor, MI: Society for College and University Planning.
Drucker, P. (2008). The five most important questions you will ever ask about your organization. San Francisco, CA: Jossey-Bass.
Duckworth, A. L., Peterson, C., Matthews, M. D., & Kelly, D. R. (2007). Grit: Perseverance and passion for long-term goals. Journal of Personality and Social Psychology, 92(6), 1087–1101.
Dweck, C. S. (2006). Mindset. New York, NY: Random House.
Ekowo, M., & Palmer, I. (2016). The promise and peril of predictive analytics in higher education: A landscape analysis. New America. Retrieved from https://na-production.s3.amazonaws.com/documents/Promise-and-Peril_4.pdf
Ewell, P. (2003, November). Specific roles of assessment within this larger vision. Presentation at the Indiana University–Purdue University Indianapolis Assessment Institute, Indianapolis, IN.
Fitzpatrick, F., Sanders, J., & Worthen, B. (2011). Program evaluation: Alternative approaches and practical guidelines (4th ed.). Boston, MA: Pearson.
Fletcher, J. M., & Tienda, M. (2010). Race and ethnic differences in college achievement: Does high school attended matter? Annals of the American Academy of Political and Social Science, 627(1), 144–166.
Food and Agriculture Organization of the United Nations. (2014). Good practice template. Retrieved from http://www.fao.org/capacity-development/en/
Gardner, H. (1999). Intelligence reframed: Multiple intelligences for the 21st century. New York, NY: Basic Books.
Goleman, D., & Davidson, R. J. (2017). Altered traits: Science reveals how meditation changes your mind, brain, and body. New York, NY: Random House.
Goleman, D., & Senge, P. (2014). The triple focus: A new approach to education. Florence, MA: More Than Sound.
Good practice. (n.d.). In Merriam-Webster's online dictionary. Retrieved from https://www.merriam-webster.com/dictionary/good%20practice
Greer, P., & Horst, C. (2014). Entrepreneurship for human flourishing (Values and Capitalism). Washington, DC: AEI Press.
Herman, J., & Hilton, M. (Eds.). (2017). Supporting students' college success: The role of assessment of intrapersonal and interpersonal competencies. Washington, DC: National Academies Press.
Hernon, P., & Dugan, R. E. (2004). Outcomes assessment in higher education: Views and perspectives. Westport, CT: Greenwood Publishing Group.
Hoffman, M., Richmond, J., Morrow, J., & Salomone, K. (2002). Investigating "sense of belonging" in first-year college students. Journal of College Student Retention, 4, 227–256.
Huba, M. E., & Freed, J. E. (2000). Learner-centered assessment on college campuses: Shifting the focus from teaching to learning. Boston, MA: Allyn & Bacon.
Jankowski, N. A., Timmer, J. D., Kinzie, J., & Kuh, G. D. (2018). Assessment that matters: Trending toward practices that document authentic student learning. Urbana, IL: University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment (NILOA).
Jazaieri, H., McGonigal, K., Jinpa, T., Doty, J. R., Gross, J. J., & Goldin, P. R. (2014). A randomized controlled trial of compassion cultivation training: Effects on mindfulness, affect, and emotion regulation. Motivation and Emotion, 38(1), 23–35.
Kissane, E. (2010). The elements of content strategy. New York, NY: A Book Apart.
Kline, P., & Saunders, B. (2010). Ten steps to a learning organization (2nd ed.). Salt Lake City, UT: Great River Books.
Kramer, G. L., & Swing, R. L. (Eds.). (2010). Higher education assessments: Leadership matters. Lanham, MD: Rowman & Littlefield.
Kuh, G. D., Gambino, L. M., Bresciani Ludvik, M. J., & O'Donnell, K. (2018). Using ePortfolio to document and deepen the dispositional learning impact of HIPs. National Institute for Learning Outcomes Assessment (NILOA) Occasional Paper.
Kuh, G. D., Ikenberry, S. O., Jankowski, N., Cain, T. R., Ewell, P. T., Hutchings, P., & Kinzie, J. (2015). Using evidence of student learning to improve higher education. San Francisco, CA: Jossey-Bass.
Kuh, G. D., Jankowski, N., Ikenberry, S. O., & Kinzie, J. (2014). Knowing what students know and can do: The current state of student learning outcomes assessment in U.S. colleges and universities. National Institute for Learning Outcomes Assessment (NILOA) Occasional Paper.
Kuh, G. D., Kinzie, J., Schuh, J. H., Whitt, E. J., & Associates. (2005). Student success in college: Creating conditions that matter. San Francisco, CA: Jossey-Bass.
Lindert, P. H., & Williamson, J. G. (2016). Unequal gains: American growth and inequality since 1700. Princeton, NJ: Princeton University Press.
Maki, P. (2004). Assessing for student learning: Building a sustainable commitment across the institution. Sterling, VA: Stylus.
Maki, P. L. (2010). Assessing for learning: Building a sustainable commitment across the institution (2nd ed.). Sterling, VA: Stylus.
Mandela, M. N. (2007). Quote retrieved from http://db.nelsonmandela.org/speeches/pub_view.asp?pg=results
Marquardt, M. J. (2011). Building the learning organization: Achieving strategic advantage through a commitment to learning (3rd ed.). Boston, MA: Nicholas Brealey.
McClarty, K. L., & Gaertner, M. N. (2015). Measuring mastery: Best practices for assessment in competency-based education. Center on Higher Education Reform, American Enterprise Institute. Retrieved from https://www.luminafoundation.org/files/resources/measuring-mastery.pdf
Mettler, S. (2014). Degrees of inequality: How the politics of higher education sabotaged the American dream. Philadelphia, PA: Basic Books.
Musu-Gillette, L., Robinson, J., McFarland, J., Kewal Ramani, A., Zhang, A., & Wilkinson-Flicker, S. (2016). Status and trends in the education of racial and ethnic groups 2016 (NCES 2016–007). Washington, DC: U.S. Department of Education, National Center for Education Statistics. Retrieved from http://nces.ed.gov/pubsearch
National League for Nursing. (2014). Practical/vocational program outcome: Human flourishing. Retrieved from http://www.nln.org/docs/default-source/default-document-library/human-flourishing-final.pdf?sfvrsn=0
National Research Council. (2000). How people learn: Brain, mind, experience, and school. Washington, DC: National Academies Press.
Palomba, C. A., & Banta, T. W. (1999). Assessment essentials: Planning, implementing, and improving assessment in higher education. San Francisco, CA: Wiley.
Pascarella, E. T., & Terenzini, P. T. (2005). How college affects students: Volume 2. A third decade of research. San Francisco, CA: Jossey-Bass.
Quality in Action. (2018). Quality in Action: A review of our year. Retrieved from http://www.qaa.ac.uk/news-events/news/quality-in-action-a-review-of-our-year
Rendon, L. I. (1994). Validating culturally diverse students: Toward a new model of learning and student development. Innovative Higher Education, 19(1), 23–32.
Ruiz, D. M. (1997). The four agreements: A practical guide to personal freedom. New York, NY: Amber Allen.
Scharmer, O., & Kaufer, K. (2013). Leading from the emerging future: From ego-system to eco-system economies. San Francisco, CA: Berrett-Koehler.
Scheerens, J., Luyten, H., & van Ravens, J. (Eds.). (2011). Perspectives on educational quality: Illustrative outcomes on primary and secondary schooling in the Netherlands. Dordrecht, The Netherlands: Springer.
Schuh, J. H., Biddix, J. P., Dean, L. A., & Kinzie, J. (2016). Assessment in student affairs. San Francisco, CA: Wiley.
Scott, G. (2017). FLIPCurric. Canberra, Australia: National Office for Learning and Teaching, Australian Government. Retrieved from http://flipcurric.edu.au
Senge, P. M. (2006). The fifth discipline: The art and practice of the learning organization. New York, NY: Random House.
Senge, P. M., Cambron-McCabe, N., Lucas, T., Smith, B., Dutton, J., & Kleiner, A. (2012). Schools that learn: A fifth discipline fieldbook for educators, parents, and everyone who cares about education (Rev. ed.). New York, NY: Crown.
Stevens, M., & Kirst, M. (2015). Remaking college: Ecology of higher education. Stanford, CA: Stanford University Press.
Suskie, L. (2009). Assessing student learning: A common sense guide (2nd ed.). San Francisco, CA: Jossey-Bass.
Suskie, L. (2014). What is institutional effectiveness? Retrieved from http://www.lindasuskie.com/apps/blog/show/41357706-what-is-institutional-effectiveness
Suskie, L. (In press). Assessing student learning: A common sense guide (3rd ed.). San Francisco, CA: Jossey-Bass.
Terenzini, P. T., Rendon, L., Upcraft, L., Millar, S., Allison, K., Gregg, P., & Jalomo, R. (1994). The transition to college: Diverse students, diverse stories. Research in Higher Education, 35(1), 57–73.
Tierney, W. G. (2000). Power, identity, and dilemma of college student departure. In J. M. Braxton (Ed.), Reworking the student departure puzzle (pp. 213–234). Nashville, TN: Vanderbilt University Press.
Tinto, V. (1993). Leaving college: Rethinking the causes and cures of student attrition (2nd ed.). Chicago, IL: University of Chicago Press.
Tu, Weiming. (1998). Joining east and west. Harvard International Review, 20(3), 44–49.
Upcraft, M. L., & Gardner, J. N. (1989). The freshman year experience. San Francisco, CA: Jossey-Bass.
Zelazo, P. D., Blair, C. B., & Willoughby, M. T. (2016). Executive function: Implications for education (NCER 2017–2000). Washington, DC: National Center for Education Research, Institute of Education Sciences, U.S. Department of Education.
Zohar, D., & Marshall, I. (2000). SQ: Connecting with your spiritual intelligence. New York, NY: Bloomsbury.
Zull, J. E. (2011). From brain to mind: Using neuroscience to guide change in education. Sterling, VA: Stylus.
ABOUT THE AUTHOR
Marilee J. Bresciani Ludvik is a professor of postsecondary educational leadership at San Diego State University. As a faculty volunteer, Bresciani Ludvik serves on San Diego State University's Student Learning Outcomes and Programmatic Assessment Committee and the university's Student Success Working Group. She has assigned time to work within the recently established Office of Educational Effectiveness at San Diego State University where she assists in connecting student learning and development outcomes to equity performance indicators in a manner that can inform improvements in course and program design and delivery. Bresciani Ludvik also volunteers her time on the California State University 2025 Data-Driven Decision Making Task Force. Prior to that, Bresciani Ludvik served as assistant vice president of institutional assessment at Texas A&M University; the director of undergraduate assessment at North Carolina State University where she was responsible for coordinating outcomes-based assessment program review (OBPR); and in a variety of student affairs, academic affairs, and alumni relations leadership roles at various types of institutions. Bresciani Ludvik has consulted with over 200 institutions on assessment and accountability matters. In addition, Bresciani Ludvik assists organizational leaders in identifying and leveraging opportunities to collaborate across division lines, using mindfulness-based inquiry practices, nonviolent communication, difficult conversation processes, compassion practices, restorative justice, and design thinking. Bresciani Ludvik's research focuses on using translational neuroscience and mindful compassion practices to inform the design and evaluation of workshops and curriculum to decrease student, faculty, and administrator stress and anxiety and increase their attention, emotion, and cognitive flexibility, as well as enhance compassion, inquiry, creativity, and overall wellbeing. Bresciani Ludvik can be reached at [email protected].
INDEX
AAC&U. See American Association of Colleges & Universities academic freedom, 139 Academic Self-Efficacy Scale, 175 accountability, 84 accreditation demands of, xv, xviii–xix, 42 documentation for, 146–47 as external agency, 75 integrity of, 153 leaders, 6–7 learning organization and, 5 outcomes-based program review for, 1, 28 as primary driver, xvii process of, 8 professionals, xx, 76 of program learning outcomes, 189 regional, xv reporting needs of, 48–49 review, 95 review preparation for, 80 review time lines of, 62 scrutiny of, 92 staff support for, 111 standards of, 52–53, 103, 109, 128 action planning as criteria, 83 documentation for, 68–70 elements of, 39n5 iterative systematic outcomes-based program review, 26–27 alignment table for curriculum, 71, 210 goals and indicators of, 53
of Hagerstown Community College, 55, 181–82 of University of Hawai‘i-Mānoa, 55, 57 of UST, 53–56 Allen, Jo, xv Alverno College faculty development and, 137 good practices of, 104–5 Armerding, Taylor, xvi, 16 American Association of Colleges & Universities (AAC&U) High Impact Practices and, 175 learning outcome of, 172 rubrics of, 45 American higher education, 41–42 Anderson, James A., xv annual assessment report template, 200 annual assessment questions for, 188–99 background information of, 187–88 artificial intelligence, xv assessment, 109–10 of Community College of Baltimore County, 20 current plan for, 199 days for, 45 definition of, 19–28 faculty and, 28, 61 frequency of, 60 good practice institutions and, 61–62 implementation of process, 60–64 individuals responsible for, 60–61 of learning outcomes, 20 North Carolina State University and, 62 225
opinions on, 21 outcomes-based, 44 plan and report consultation, 145 of program learning outcomes, 193 program review process for, 62 stakeholders communication to, 64 of student learning, 200 Suzkie on, 20 time line for, 61–62, 68 understanding lack of, 130–39 Assessment Commons documentation of, 47 online resources at, 144 Astin, A. W., 21, 23, 26 Australian National Office of Teaching and Learning, 80 Azusa Pacific University, 66, 205 Banta, Trudy W. on direct evidence, 57 on outcomes-based assessment, 19, 24 barriers, 119 categories of, 126–27, 159 challenges of, 154–58 for faculty, 126 fear as, 120, 125, 130–31, 133 flexibility and, 130 of FLIPCurric, 126 grants compared to, 146 institutional leadership and, 129 questions for, 148–49 Texas A&M University and, 130 typical, 125–26 benchmark indicators, 43, 73 Big Data, xvi, 16 Bill & Melinda Gates Foundation, xviii Black, Karen, 19 Blackboard, 47 Blair, C. B., 10 Brief Resilience Scale, 176 Bright, Karyn, xvi budget, 146 allocation process for, 8
assessment results applied to, 197 balanced, 41–42 committee and, 100, 102–3 fixed, 4 for implementation, 117–19 for outcomes-based program review, 71 outcomes-based program review and, 63–64, 70–71, 106–9, 111 performance indicators for, 26–27 strategic planning for, 48 California State University, Monterey Bay, 65 Campus Labs, 48 Capstone class, 200 course, 64 projects, 152, 193, 201 CBE. See competency-based education CCS. See Chernyshenko Conscientiousness Scales CCSSE/NSSE, 59 cGPA. See cumulative grade point average Chapman University, 70, 167 Chernyshenko Conscientiousness Scales (CCS), 175 cocurricular faculty as, 158 as separate educational experience, 11, 13 student affairs staff as, 7 cognitive flexibility, 10–13 collaborative dialogue communication plan and, 114 leadership and, 148 meetings for, 144 practices of, 61 as release time, 146 resources and, 78 time for, 142 values of, 105 committee
budget and, 100, 102–3 for outcomes-based performance review, 94–95 predictive analysis and, 95, 97–100 questions for, 99–100 roles and responsibilities of, 97–100 website for, 98–99 communication plan collaborative dialogue and, 114 questions for, 115–16 Community College of Baltimore County on assessment, 20 as good practice institution, 167 competency-based assessment, xv competency-based education (CBE), xxin1 CORE classes, 193 Cornell University, 70, 167 Council of Regional Accrediting Commissions, 138 creation, areas of communication of, 87 prioritization of, 85–86 credit hours quality of learning and, 150 across universities, 153 crisis management, 114 criteria for good practice outcomes-based program review, 81–84 guiding questions for, 84–89 usefulness of, 80–81 values of, 85 cross-system dialogue, 114 cumulative grade point average (cGPA) high expectations of, 172 San Diego State University and, 50, 52, 54 across universities, 150–51 curricular faculty as, 7 as separate educational experience, 11, 13 curriculum
alignment matrix for, 55–57, 71, 210 design of, 11–12, 83, 156 documentation for, 141 faculty research and, 152 good practice institutions and, 53–55 in-class, 13 as input, 29 integration of, 157 map template of, 181–82 resources for, 38n1 teaching innovations and, 138 Cuseo, J., 21 dashboard indicators, 43, 73 data availability of, 110 big, xvi, 16 champions, 112 collection for program learning outcomes, 191 collection method for, 192–93 collection of, 83 collection tools of, 59 good practices institutions collection of, 59 investigation of, 81 data champions, 112 Degree Qualification—Profile (DQP), 190 description, overview of, 50–51 direct evidence, Banta and Palomba on, 57 direct measures, 193–94 documentation, 47–73, 147 for curriculum, 141 for outcome-based program review, 141–42, 146 tools for, 146 Dolence, M. G., 43, 73n3 DQP. See Degree Qualification— Profile Duckworth, A. L., 14 Dugan, R. E., 20 Dweck, C. S., 14
27-10-2018 00:23:38
228
INDEX
educational learning organization, 72 Einstein, Albert, 22 electronic reflective student learning portfolios, 144 England, 9 Engle, Jennifer, xviii enrollment, 188 ePortfolios as competency-based practice, 127 as DIRECT measure, 193–94 Guttman Community College and, 36 portfolio learning outcomes and, 193–94 evaluation methods, tools and, 57–59 Ewell, Peter, 57 executive functions, 10–11 interpersonal skills and, 13 intrapersonal skills and, 13 semantic map of, 12 existing resources, new and, 109–14 expectations announcement time of, 101 questions for, 101–4 external review, 27, 39n6 external review report, 70
faculty assessment and, 28, 61 barriers to outcome-based program review and, 126 as cocurricular, 158 as curricular, 7 decisions and, 67 development for, 36, 134–35, 137, 142–43 fellows for outcome-based program review, 112, 146–47 identification of, 110 as individual commodities, 157 intellectual curiosity of, 121 learning organization and, 158 learning outcomes assessment and, 20
Ludvik.indb 228
model of outcomes-based program review, 135–36 New Jersey City University and, 135 outcomes-based program review and, 24–26 outcomes-based program review resistance by, 158 research by, 156–57 tenure and, 5, 20, 80, 103, 132 time for outcomes-based program review, 140–41 faculty fellows, 112 fear as barriers, 120, 125, 130–31, 133 overcoming by trust, 133 fellows for outcome-based program review, 146–47 FFMQ. See Five Facet Mindfulness Questionnaire The Fifth Discipline (Senge), 2 Fitzpatrick, F. influence of, 26 logic model of, 23 on program evaluation, 22 Five Facet Mindfulness Questionnaire (FFMQ), 175 Fletcher, J. M., 46 flexibility, 120–23, 130 FLIPCurric of Australian National Office of Teaching and Learning, 80 barriers to outcome-based program review and, 126 fluid intelligence, 10–11, 16 Food and Agriculture Organization of the United Nations, 5–6 formative assessment, 155, 160n1 foundational questions, 116–17 funding for learning organization, 156 for students, 132 Gather Data cycle, 27 goal planning, 104
27-10-2018 00:23:38
INDEX
of long-range goals, 107 of mid-range goals, 106–7 questions for, 108–9 of short-range goals, 105–6 good practice institutions assessment and, 61–62 data availability of, 110 data collection by, 59 graduation rates and, 66 as learning organization, 48 list of, 167–69 outcome-alignment matrix and, 53, 55 predictive analysis and, 26–27, 81 primary point of contact of, 92 resources and, 63–64 time reallocation of, 140–41 good practices, 16 at Alverno College, 104–5 criteria for, 5–7, 79, 89 definition of, 5 institutional effectiveness and, 42–43 institutions for, 7, 47, 167–69 North Carolina State University and, 29 questions for, 33–34, 81 templates for, 91 time frame for establishment of, 108 grades, 153 graduation rates appendices for, 71 assessment of, 29 demographics of, 46 four-year, 77–78, 151 good practice institutions and, 66 by identity group, 59 key performance indicators of, 38n2 as outcome indicator, 21 as performance indicator, 9 PIED definition of, 172 grants for outcome-based program review, 128, 130 for assessment conferences, 113
Ludvik.indb 229
229
departmental budgets and, 146 as performance indicators, 31 for young scholars, 85 Gretzky, Wayne, 22 Grit Scale, 14, 176 Growth Mindset Scale, 13, 176 Guttman Community College definition of outcomes-based program review and, 25 electronic reflective learning portfolios and, 144 e-portfolios and, 36 as good practice institution, 167 student learning funds for, 132
HAAS. See High Achievement of student learning and development for All Students HAAS statement, 49–50, 74n3 Hagerstown Community College alignment table of, 55, 181–82 curriculum map template of, 181 as good learning institution, 167 Hampden-Sydney College as good learning institution, 167 professional development at, 135 Herman, J., 12–13 Hernon, P., 20 High Achievement of student learning and development for All Students (HAAS), 83, 103, 106, 114, 117, 121, 123, 155, 159 descriptive overview and, 50–51 human flourishing and, 46 institutional effectiveness and, 28 learning organization and, 37 outcomes-based program review and, 73n2, 74n3 path to, xx statement of, 49–50, 74n3 higher education, 76 high-impact practices (HIP), 175 high-quality student learning, 3, 81–82 Hilton, M., 12–13
27-10-2018 00:23:38
230
INDEX
HIP. See high-impact practices Hoffman, 14, 50, 54, 176 human flourishing for all, 26–28, 29–30, 158 definition of, 30–31 High Achievement of student learning and development for All Students and, 46 leadership and, 155 learning organization and, 81, 148 measurement of, 33 positive cultivation of, 4 predictive analysis and, 44 rewards and, 156–57 Weiming on, 32 IEO framework, 21 IES. See Institute of Education Sciences ILP. See ineffective leadership proof implementation, 68 barriers to, 119 budget for, 117–19 implementation questions, 117–19 Improving Institutional Practice, 138–39 Indiana-Purdue University Indianapolis (IUPUI) assessment days of, 45 electronic reflective student learning portfolios and, 144 external reviewers at, 132 glossary of terms and, 27, 43 outcomes-based program review definition of, 25 indirect evidence, 57 indirect measures, 195 individual experts, 156 ineffective leadership proof (ILP), 3, 80 inhibitory control, 10–13 initiative fatigue, 42 Institute of Education Sciences (IES), 10 institutional culture, 95–96 institutional effectiveness, 24
Ludvik.indb 230
definition of, 38n1 good practices and, 42–43 institutional leadership barriers and, 129 compassion of, 84, 87–88, 121 compliance reporting and, 96–97, 123n1 institutional research data, 143 institutional survey data, 143–44 Integrated Postsecondary Education Data Systems (IPEDS) performance indicators of, 172–73 students definition by, 45 interpersonal competencies, 13 intrapersonal competencies, 13 introduction to economics, 151–52 IPEDS. See Integrated Postsecondary Education Data Systems Isothermal Community College assessment vocabulary list of, 66 as good practice institution, 168 program review portfolio of, 65 terms definition of, 43 website of, 27 IUPUI. See Indiana-Purdue University Indianapolis
James Madison University, 136 assessment days of, 45 assessment process at, 25 data collection tools of, 59 Department Fact Book of, 188 findings publication of, 146–47 Learning and Development Outcomes Report of, 187 results on website, 67 templates for, 23 Jazaieri, H., 14, 176 job loss, 131, 155 key performance indicator (KPI), 38n2 von Knorring, John, xv
27-10-2018 00:23:38
231
INDEX
KPI. See key performance indicator Kuh, G. D., 19 LADOR. See Learning And Development Outcomes Report Latinx students, 47 leadership, 84. See also institutional leadership accreditation, 6–7 collaborative dialogue and, 148 development of, 80 expectations informed by, 101 goal setting and, 105 human flourishing and, 155 ILP and, 3, 80 predictive analysis and, 131 as thought leaders, 155 LEAP. See Liberal Education & America’s Promise learning and development, 10, 12–15 complexity of, 9 as neurocognitive skills, 11 Learning And Development Outcomes Report (LADOR), 187 learning organization, xviii, 1, 15, 31, 75–76, 95, 129 accreditation and, 5 cultivation process of, 88 faculty engagement in, 158 funding for, 156 good practice institutions as, 48 High Achievement of student learning and development for All Students and, 37 high-quality student learning and development for, 3–4 human flourishing and, 81, 148 leadership for, 5 mission of, 84–85 personnel processes for, 4 predictive analysis and, 2 process improvement of, 89 resource allocation for, 4–5 self-examination of, 30
Ludvik.indb 231
self-management by, 7 students and, 158 sustainability of, 155 learning outcomes assessment, 20 level of achievement, 59–60 Liberal Education & America’s Promise (LEAP), 45, 152, 172 limitations and assumptions, 60 Liu, Amy, 187 Lumina Foundation, 190 Maki, P., 19–20, 79 mandatory reporting systems, 149 MCS. See Multidimensional Compassion Scale meetings, questions about, 96 memorandum of understanding (MOU), 68, 104 meta-assessment, 81, 89 Miami-Dade College assessment process at, 25 Fitzpatrick logic model and, 23 outcomes-based program review development of, 25 MOU. See memorandum of understanding moving forward, 120–23 Multidimensional Compassion Scale (MCS), 14, 175
National Academies of Sciences (NAS), 12–13 National Center for Educational Statistics (NCES), 46 National Institute for Learning Outcomes Assessment (NILOA), xvii Occasional Paper of, 10 online resources of, 144 provost survey of, xviii, 1, 7 reporting templates from, 47 National League for Nursing, 30–31 National Science Foundation (NSF), 130
27-10-2018 00:23:38
232
INDEX
National Survey of Student Engagement (NSSE), 136, 175 National University, 70 NCES. See National Center for Educational Statistics neurocognitive skills crystallized intelligence as, 10–11, 16 executive functions as, 10–13 fluid intelligence as, 10–11, 16 learning and development as, 11 neuroscience research, 150 New Jersey City University faculty and, 135 as good practice institution, 168 guiding questions of, 67 professional development of, 134–35 staff leadership committees and, 137 WASC questions and, 66 NILOA. See National Institute for Learning Outcomes Assessment Norris, D. M., 43 North Carolina State University, 25, 132 good practices and, 29 as good practices institution, 168 outcomes-based program committee of, 97 outcomes-based program review assessment of, 62–63 outcomes-based program review purpose statement of, 28–29 NSF. See National Science Foundation NSSE. See National Survey of Student Engagement
OBPR. See outcomes-based program review Office of Institutional Assessment, 137 Office of Institutional Research (OIR), 188 online FAQ, 104 online resources, 144 Oregon State University Division of Student Affairs at, 63
Ludvik.indb 232
learning goals of, 49 outcomes alignment grid of, 179 outcomes-based program review evaluation by, 136 Student Affairs Division at, 131 organization success, 30–31 outcomes, 52–53 outcomes-alignment matrix, 53–57 outcomes-based assessment, xv, 20 definition of, 19 recognition of, 29 outcomes-based program review (OBPR), xvi for accreditation, 1, 28 action plan for, 68–70 administrative support for, 61 appendices of, 71–72 budget and, 63–64, 70–71, 106–9, 111 committee for, 94–95 criteria for, 81–84 definition of, 19–28 documentation for, 47–73 evaluation of, 136 as evidence-based decision making, 130 faculty-dependent model of, 134–35 faculty resistance of, 158 financial budget for, 63–64, 70–71, 106–9, 111 findings presentation of, 145–46 Gather Data cycle of, 27 High Achievement of student learning and development for All Students and, 73n2, 74n3 implementation plans for, 68 implementation reminders for, 121–23 infrastructure support for, 134 job loss fear of, 131 meetings for, 95–96 meta-assessment of, 81, 89 multiple-level implementation of, 30, 34–35, 37
27-10-2018 00:23:38
INDEX
versus predictive analysis, 41–47 primary point of contact for, 92–94 purpose of, 28–35 reflective inquiry process for, 3, 81 resources lack of, 139–48 review of, 133–34 systematic reflective inquiry for, 32, 36–37, 120 time line for, 61–62, 68, 71, 159 time running out on, 149 transparency of, 3, 7, 15, 30, 47–48, 67, 69, 73, 84, 87, 153–55 updated definition of, 26 website for, 67 PAII. See Planning and Institutional Improvement Palomba, C. A., 19, 24, 57 performance indicators, xv 9, 150 in alignment table, 54 for budget, 26–27 Dolence and Norris on, 43, 73 of graduation rates, 77 Personal and Social Responsibility Inventory, 176 Planning and Institutional Improvement (PAII), 133–34, 136–37 PLO. See program learning outcomes predictive analysis, 139, 150 beginning with, 42 compliance reporting and, 79 criteria for good practice and, 81 decisions of, 37 definition of, 43, 73 degree pathways and, 78 goal alignment and, 53 good practice institutions and, 26–27 of how students learn, 157 human flourishing and, 44 leadership in favor of, 131 learning goals and, 38n3 learning organization and, 2 metrics of, 2–3
Ludvik.indb 233
233
multiple levels of, 30 outcomes and, 22 outcomes-based assessment and, 35–36 versus outcomes-based program review, 41–47 performance indicators and, 17 planning committee and, 95, 97–100 primary point of contact and, 92–94 students and, 8 primary point of contact, 92–94 professional development discipline, 145 for faculty, 36, 135, 137, 142–43 by Office of Institutional Assessment, 137 by Student Life Studies, 137 program definition of, 35–36 evaluation of, 22–23 goals of, 51–55 improvement of, 83 mission of, 49 name of, 49, 74n5 review portfolio of, 65 viability of, 70 program evaluation, Fitzpatrick on, 22–23 program learning outcomes (PLO) of annual assessment, 193 assessment of, 188–90 assessment tools for, 197 budget results applied to, 197 capstone class for, 200 credential programs for, 201 critical thinking for, 192 curriculum map for, 199 data collection for, 191 for degree pathways, 82 direct measures for, 193–94 direct method for, 193 doctorate programs for, 201 evaluators for, 195 expectations for, 190–91
27-10-2018 00:23:39
234
INDEX
explicit standards of performance for, 190 external accreditation of, 189–90 indirect measures for, 195–96 indirect method for, 193 inter-rater reliability checks for, 194–95 masters degree programs for, 201 publication of, 191 results of, 192 results use of, 197–99 sample size of, 195 student learning assessment of, 200 students expectations for, 192 undergraduate programs for, 200 university mission alignment with, 189 VALUE rubric and, 190, 194, 197 program review time-line, 136 Psychological Well-Being, 176 quality assurance, 9 Quality Assurance Commons, xix quality questions, xvi reflections, recommendations and, 65–72 reflective inquiry, 75, 120 report authors, 188 resources acquisition of, 112 identification of, 109–14 lack of, 139–48 list of, 63–64 prioritization of, 116 review process, 120 rewards, 156–57 rock climbing, 120, 133 Sacramento State University as good practice institution, 168 SharePoint use of, 48 Sac State Baccalaureate Learning Goals, 188–89
Ludvik.indb 234
San Diego State University (SDSU), 128 cGPA and, 50 high achievement goals of, 54 Latinx students and, 47 program goals and, 52 Scheerens, J., 21–23, 26 Schuh, J. H., 19 scorecard indicators, 43, 73 scuba diving, 120, 133 SDSU. See San Diego State University Self-Control Scale, 176 Self-Regulation Scale, 176 Senge, Peter, xviii, 2, 29 on adaptive learning, 155 on learning organization, 75–76 Sense of Belonging Scale, 14, 50, 59, 176 SharePoint Sacramento State University use of, 48 Sinclair Community College common learning assessment of, 45 outcomes-based program review committee of, 97 process description of, 66 stakeholders achievement expectations of, 82 assessment communicated to, 64 conflict among, 89 strategic planning, 26, 27, 38n4 Student Life Studies, 137 students changing of, 79 competency-based learning of, 156 degree pathways and, 78 development needs of, 156–57 electronic reflective portfolios and, 144 funds allocation for, 132 grade point average of, 151–52 grants for, 85 learning assessment of, 200 learning measurements of, 1–2 learning methods of, 157 learning organization and, 158
27-10-2018 00:23:39
INDEX
loans of, xviii, 9, 76 predictive analysis and, 8 program learning outcomes and, 192 quality education for, 153–54 success definition of, 82 work evaluation of, 195 subject, evaluation of, 63 summative assessment, 155, 160n2 survey development tools, 144–47 Suzkie, L., 19 on assessment, 20 on outcomes-based program review, 28 systematic inquiry process, 3, 15 systematic reflective inquiry, for OBPR, 32, 36–37, 120 TaskStream, 48 tenure, 5, 20, 80, 103, 132 Texas A&M University, xv, 35, 46, 63 barriers and, 130 Center for Teaching Excellence of, 137 outcomes-based program review committee of, 97 thought leaders, 155 Tienda, M., 46 time for outcome-based program review, 140–41 time line, 61, 68 for accreditation review, 62 for establishing good practices, 108 TrakDat, 48 Truman State University data portal of, 136 as good practice institution, 168 results on website, 67 trust culture of, 155 fear overcome by, 133 University of Hawai‘i-Mānoa alignment table of, 55, 57
assessment results template, 203–4 as good practice institution, 168 report by, 65 University of St. Thomas (UST), 55–56 University of the Pacific, 66 University of Wisconsin at Whitewater, 45 USEM course, 50, 53 U. S. Naval Academy, 98 UST. See University of St. Thomas Vincennes University, 94 Wabash College Center for Inquiry at, 9 Wabash National Study, 9 Warwick-Edinburgh Mental Well-being Scale (WEMWBS), 50, 176 WASC. See Western Association of Schools and Colleges websites of Community College of Baltimore County, 20 of Isothermal Community College, 27 outcomes-based program review on, 67, 98–99, 144 of Truman State University, 67 Weiming, Tue, 32 WEMWBS. See Warwick-Edinburgh Mental Well-being Scale Western Association of Schools and Colleges (WASC) questions from, 66–67 rubrics of, 187–89, 192, 197–98 Willoughby, M. T., 10 working memory, 10–13 Written Communication VALUE rubric, 190, 194, 197 Zelazo, P. D., 10, 13
27-10-2018 00:23:39