Advance Praise for Generally Speaking: The Impact of General Education on Student Learning in the 21st Century
“Generally Speaking: The Impact of General Education on Student Learning in the 21st Century, edited by Madeline J. Smith and Kristen L. Tarantino, is an excellent text that provides an overview of the changes in general education curricula during the 21st Century, and how those changes have positively affected student learning. As someone who spent the last half of her career working to make general education relevant and valuable to all students, I find this text to be an exceptional resource for faculty, administrators, and assessment professionals, as well as graduate students looking to become university professors.” —Gail G. Evans, Ph.D., Senior Fellow, Association of American Colleges and Universities; Retired Dean of Undergraduate Studies, San Francisco State University; Faculty Emerita, San José State University
“Although the book is titled Generally Speaking, its well-qualified contributors actually speak in highly specific terms about the most important purpose of general education: preparing students for satisfying careers and for rewarding lives as contributors to society. Informed by a sense of history, the volume is for the most part forward looking. The criteria for building and assessing effective programs are both principled and pragmatic, and the case studies offer compelling examples of how genuine reform can occur. As we leave behind ‘boxes to check’ in favor of coherent programs that motivate and transform students, we may for the first time enable all students to enjoy the benefits of a genuinely liberal education.” —Paul L. Gaston, Ph.D., Trustees Professor Emeritus, Kent State University; Consultant to Lumina Foundation
“General education may be the most potent weapon in educators’ centuries-old fight against ignorance. This book, Generally Speaking, is a welcome and necessary addition to the arsenal. I believe the book helps to shore up general education and put it on firm ground in an age when many have general education in the crosshairs.” —Angelo Letizia, Ph.D., Assistant Professor, Notre Dame of Maryland University
Generally Speaking
Culture and Society in Higher Education
Pietro A. Sasso and Joseph L. DeVitis, Editors

Culture and Society in Higher Education is a book series that analyzes the role of higher education as an incubator, transmitter, and transformer of culture. While examining the larger social, economic, and political connections that shape the academy, it seeks to revivify American colleges and universities and to re-explore their core purposes. In so doing, the series reaffirms our social contract and the common public good that should ideally drive the policies and practices of contemporary post-secondary education. Prospective book topics include, but are not limited to, such themes as the purposes of higher education, the worth of college, student learning, new forms of liberal education, race matters, feminist perspectives, LGBTQ issues, inclusion and social justice, student mental health and disabilities, drug-related topics, fraternity and sorority life, student activism, campus religious questions, significant legal challenges, problems of governance, the changing role of faculty, academic freedom and tenure, political correctness and free speech, testing dilemmas, the amenities “arms race,” student entitlement, intercollegiate athletics, technology and social media, and distance instruction.
Books in the Series:
Student Activism in the Academy: Its Struggles and Promise (2019)
Generally Speaking: The Impact of General Education on Student Learning in the 21st Century (2019)
Supporting Fraternities and Sororities in the Contemporary Era: Advancements in Practice (2020)
Foundations, Research, and Assessment of Fraternities and Sororities: Retrospective and Future Considerations (2020)

Joseph L. DeVitis is a retired professor of educational foundations and higher education. He is a past president of the American Educational Studies Association (AESA), the Council of Learned Societies in Education, and the Society of Professors of Education. He lives with his wife, Linda, in Palm Springs, California.

Pietro Sasso is faculty program director of the College Student Personnel Administration at Southern Illinois University Edwardsville. He is the recipient of the Dr. Charles Eberly Research Award from AFA and is the ACPA Men and Masculinities Emerging Scholar-In-Residence for 2017 to 2019. He serves on the board of the Center for Fraternity/Sorority Research at Indiana University.
Generally Speaking The Impact of General Education on Student Learning in the 21st Century
edited by
Madeline J. Smith and
Kristen L. Tarantino
Copyright © 2019 | Myers Education Press, LLC

Published by Myers Education Press, LLC
P.O. Box 424
Gorham, ME 04038

All rights reserved. No part of this book may be reprinted or reproduced in any form or by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying, recording, and information storage and retrieval, without permission in writing from the publisher.

Myers Education Press is an academic publisher specializing in books, e-books, and digital content in the field of education. All of our books are subjected to a rigorous peer review process and produced in compliance with the standards of the Council on Library and Information Resources.

Library of Congress Cataloging-in-Publication Data available from Library of Congress.

13-digit ISBN 978-1-9755-0123-5 (paperback)
13-digit ISBN 978-1-9755-0122-8 (hardcover)
13-digit ISBN 978-1-9755-0124-2 (library networkable e-edition)
13-digit ISBN 978-1-9755-0125-9 (consumer e-edition)

Printed in the United States of America.

All first editions printed on acid-free paper that meets the American National Standards Institute Z39-48 standard.

Books published by Myers Education Press may be purchased at special quantity discount rates for groups, workshops, training organizations, and classroom usage. Please call our customer service department at 1-800-232-0223 for details.

Cover design by Sophie Appel

Visit us on the web at www.myersedpress.com to browse our complete list of titles.
Contents

Acknowledgments  ix

1. General Education for the 21st Century and Beyond
Madeline J. Smith and Kristen L. Tarantino  1

2. Assessing the Impact of General Education on Student Learning
Lisa K. Bonneau, Ryan Zerr, Anne Kelsch, and Joan Hawthorne  7

3. Closing the Assessment Loop in General Education
Nhung Pham and Doug Koch  23

4. The Impact: Two-Year Institutions
Angie Adams and Devon Hall  33

5. The Impact: Four-Year Institutions
Kristen L. Tarantino and Yue Adam Shen  49

6. The Larger Impact: Culture and Society
Mary Kay Jordan-Fleming and Madeline J. Smith  61

7. Case Studies in General Education: Engaging Through Faculty Learning Communities
Su Swarat and Alison M. Wrynn  71

8. Case Studies in General Education: Design Thinking for Faculty-Driven Assessment
Tim Howard and Kimberly McElveen  83

9. Case Studies in General Education: Critical Timing for Critical Reading
Bridget Lepore  95

10. Case Studies in General Education: Integrating General Education and the Majors
Henriette M. Pranger  107

11. Guiding Generation Z’s Future: Transforming Student Learning Opportunities to Career Outcomes
Jeremy Ashton Houska and Kris Gunawan  119

12. The Future Relevance of the General Education Curriculum
Kristen L. Tarantino and Madeline J. Smith  131

Index  137
Contributors  141
Acknowledgments
The development of a volume like Generally Speaking is a labor of many individuals, whose hard work, flexibility, and dedication have made the completion of this text a success. We would like to extend our heartfelt gratitude to our series editors, Pietro Sasso and Joseph DeVitis, as well as Chris Myers and the entire team at Myers Education Press, for allowing us the opportunity to explore the significance of general education programming for higher education students and for society. We appreciate all of the guidance and feedback provided throughout the conception and development of this project.

To our chapter authors, who responded to our calls for drafts and edits, we express our warmest thanks. It was a privilege to work with professionals who have hands-on experience with the topics in this volume. The contributions presented in this text have expanded the audience for this book and have opened new conversations about the future of general education curriculum and assessment.

To our graduate faculty, James Barber and Pamela Eddy, we credit our dedication to quality research and writing, critical reflection, and working toward positive change. Their encouragement and leadership during our doctoral programs helped to support our academic and professional collaboration and molded our passion for supporting student learning.

To our respective institutions and organizations that encourage academic engagement, we appreciate the support and resources offered in order to complete this text. We would like to extend a special thanks to Sara Henry of Heartful Editor for providing feedback on random formatting questions and for the reminder that all of our work is important for students, regardless of organizational affiliation.

To our families and friends who sacrificed their time to let us coordinate, write, and edit this volume, we truly cannot give enough credit to their role in the success of this book. Through holidays, crazy pets, infants, late-night Skype calls, position changes, home selling and buying, and interstate moves, the steadfast support and encouragement have allowed us to put forth a text that not only reflects our proposed intent but showcases our dedication to academic quality and integrity. Generally speaking, we are very grateful.
Chapter 1
General Education for the 21st Century and Beyond Madeline J. Smith and Kristen L. Tarantino
Millions. This is the current magnitude of college students in the United States. Each academic year, these millions of students from across the country enroll in courses that do not necessarily relate to their majors. They collectively spend hundreds of millions of hours completing this coursework, which faculty in turn spend countless hours evaluating. We have structured higher education in this way with the hopes that exposure to general education curricula will yield well-rounded, contributing members of society. By completing core course requirements, we envision the development of scientists with a deep appreciation for literature and the arts, as well as lawyers with an understanding of climate change. However, such hopes and visions are empty if made on the basis of assumptions alone. In other words, we must regularly and systematically test our assumptions to ensure that the structure of postsecondary curricula evolves with the cohorts of students who engage with it. To that end, this text seeks to further the discussion on how we can measure the impact of general education on student learning specifically, as well as on culture and society more broadly. With the first quarter of the current century rapidly coming to a close, the time is now to reflect on where we are, where we have been, and where we are going as a field with regard to general education designed for 21st-century students.

In 1996, the Penn Commission on Society, Culture, and Community (PCSCC) formed at the University of Pennsylvania to examine the impact of cultural, moral, and political issues on public discourse. Two members of the PCSCC later convened to focus specifically on the state
of general education, and this work ultimately led to the inception of the Commission on General Education (CGE) in the United States (The University of California Commission on General Education [CGE], 2007). The work of the CGE took place from 2004 to 2006 and resulted in a report entitled General Education in the 21st Century. In this report, the CGE made several recommendations for strengthening general education among public and private universities across the nation. However, in the more than a decade since the publication of this report, minimal comprehensive work has been done to determine how these recommendations have shaped general education curricula, or how general education has impacted student learning. This work is essential to the future development of general education curricula as we continue to navigate the current century and look ahead to the 22nd century.
Defining General Education and Impact

Presumably, higher education stakeholders share a broad understanding of general education. We view this curriculum as a compilation of various common core courses from across the disciplines that students typically complete in the early years of their undergraduate education. The specific requirements may differ depending on institutional mission and other priorities, yet the basic philosophy remains the same. We educate the entire undergraduate population, regardless of major, with a curriculum that seeks to contribute to their development as students and citizens alike. However, the nuances of various general education curricula tend to be as unique as the institutions that offer them. The chapters of this text reflect the wide spectrum of practices in the design, delivery, and assessment of general education that exist across institutional types.

In order to understand how these nuances have developed over time, we must first consider the origins of general education. During the 19th century, a common belief among higher education stakeholders was that all learning could be considered general and/or liberal education (Schneider, 2016). By the mid-20th century, however, a common core curriculum emerged across institutional types and took the form of a specific subset of requirements. Community colleges in particular began to adopt general education as a required component of innovative degree programs designed for all populations (O’Banion, 2016). The trend of stakeholders embracing general education began to reverse by the 1980s, though, as the number of course requirements added to the curriculum continued to increase. This increase resulted in a “cafeteria-style” curriculum that ultimately became a barrier to completion (O’Banion, 2016).
With the transition into the 21st century, the general education curriculum continues to expand to foster outcomes that reflect our priorities in an increasingly globalizing world, including global interdependence and civic engagement (Schneider, 2016). We must take these trends into consideration as we continue to
develop and measure the impact of general education curricula. In turn, we can better ensure that the relevancy of general education keeps pace with our ever-changing world.

Turning our attention to the impact of general education on student learning, how do we define and measure such impact? We can define impact in a variety of ways, but for the purposes of this text, we define it as college students meeting learning outcomes tied to core curricula. As evidenced throughout the chapters of this text, one of the primary means of measuring impact is through formal academic assessment. In other words, we can use research-based, valid assessment instruments and practices to determine how and to what extent general education curricula make a difference in 21st-century student learning. More informally, we can also gauge impact through the various anecdotes shared and recommendations made throughout the text regarding experiences with general education at the institutional level. The wide array of institutional representation uniquely positions the text to serve as a guide for institutions of all missions and types.
Utility and Overview of Text

We initially conceived this text to be an opportunity to start a broader discussion about the impact of general education on college student learning in the 21st century. Thanks in part to the spectrum of voices needed to have such a discussion, the chapters in this book reflect the past, present, and future of general education programming. More specifically, this text features contributions from 20 authors representing public, private, 2-year, 4-year, research, and liberal arts institutions across the country. These experts have backgrounds across the disciplines, and many of them have research as well as practical experience in the field of student learning assessment. They have convened to produce a text with content that benefits general education stakeholders of all kinds, including faculty, graduate students, administrators, policy makers, think tanks, and professional associations. Specifically, readers may find this text useful as a guide for the design and assessment of general education curricula or as a conversation starter on the challenges and opportunities that general education provides for student learning as well as for civic and career outcomes.

In this chapter, we set the stage for discussion about the need to examine the impact of general education on student learning. We also provide definitions that guide this discussion throughout the text. Further, we explore the historical origins of general education as well as the confluence of forces that have shaped this curriculum in recent centuries. Chapter 2 introduces assessment as a means to measure the impact of general education on student learning. This chapter provides an overview of the most frequently used assessment measures in higher education, highlights potential challenges when developing assessment plans, and provides a common language for assessment practices. The final piece of the assessment cycle, closing the loop, is the focus of chapter 3.
Emphasizing the importance of using assessment data to inform both pedagogical decision
making and revisions to assessment plans, this chapter closes our introductory segment on general education and assessment and highlights how closing the loop can be most effective in higher education.

Chapters 4 and 5 discuss the impact of general education on student learning within both 2- and 4-year institutional contexts. Beginning with the historical underpinnings of general education, chapter 4 provides an overview of the educational purpose for community colleges in the United States and uses real-time assessment data from the North Carolina Community College system to showcase the complexity of determining the impact of general education programming. Relatedly, chapter 5 also explores the historical roots of general education, this time from the perspective of 4-year institutions of all types. This chapter also explores the drivers of general education in the 4-year context, as well as the demonstrated impact of general education on student learning according to recent survey and other data.

Chapter 6 extends the discussion of the impact of general education beyond student learning to culture and society. Specifically, we focus on what the CGE (2007) refers to as the civic dimension of general education curricula. We apply the CGE’s four-prong framework for the civic dimension in conjunction with Washington Monthly’s (2018) rankings of the top institutions contributing to the greater public good in order to explore the structure of the general education curricula offered by these institutions. Through this methodology, we seek to inform our discussion of future considerations for the development of impactful general education curricula. As alluded to previously, we define impact in this context as the extent to which institutions yield students and graduates who contribute to the greater public good, as guided by the Washington Monthly (2018) rankings.
This chapter also discusses areas for future research based on current gaps in the field’s knowledge about the impact of general education on culture and society.

Chapters 7, 8, 9, and 10 share specific case studies that highlight innovative strategies for approaching general education at the institutional level. More specifically, chapter 7 provides recommendations for engaging faculty to participate in the development and assessment of general education through faculty learning communities. Chapter 8 takes a deeper dive into increasing faculty participation in general education assessment using a design thinking approach (Brown, 2008). The case study presented in chapter 9 reveals the opportunities that general education presents for fostering the development of critical skills in the early years of undergraduate education, with a particular focus on critical reading. The case studies close with chapter 10, which discusses integrating general education and major requirements in order to better align these curricula with students’ interests.

The final chapters offer readers a look into the future of general education. Chapter 11 takes a broader view of the role of general education, as the focus of the discussion is on transforming learning opportunities into career outcomes—particularly among Generation Z. We close the text with chapter 12, which provides research-based recommendations for the future development of general education curricula and assessment.
Although we are still in the early years of the 21st century, this chapter serves as a reminder that we must remain vigilant in ensuring that our curricula keep pace with the rapidly changing world and thus remain relevant in the 22nd century and beyond. Simply stated, Generally Speaking surveys the past, reflects on the present, and provides insight into the future of general education.
References

Brown, T. (2008, June). Design thinking. Harvard Business Review, 84–92.

O’Banion, T. (2016). A brief history of general education. Community College Journal of Research and Practice, 40(4), 327–334.

Schneider, C. G. (2016). General education for the 21st century: Mapping the pathways; Supporting students’ signature work [PowerPoint slides]. Retrieved from https://www.aacu.org/sites/default/files/files/SchneiderPlenary.pdf

The University of California Commission on General Education. (2007). General education in the 21st century: A report of the University of California commission on general education in the 21st century. Retrieved from University of California, Berkeley, Center for Studies in Higher Education website: https://cshe.berkeley.edu/publications/general-education-21st-century-report-university-california-commission-general

Washington Monthly. (2018). 2018 college rankings: What can colleges do for you? Retrieved from http://wmf.washingtonmonthly.com/college_guide/2018/WM_2018_Embargoed_Rankings.pdf
Chapter 2
Assessing the Impact of General Education on Student Learning Lisa K. Bonneau, Ryan Zerr, Anne Kelsch, and Joan Hawthorne
Given the centrality of general education (GE) in the undergraduate curricula of institutions across the United States, stakeholders, both on and off campus, may reasonably expect institutions to describe ways in which they achieve stated GE student learning outcomes and provide evidence of that achievement. In addition, institutions not yet at a mature GE assessment stage may be expected to explain how they plan to get to a point where they can offer such explanations and evidence. While evidence of learning may be a fair expectation, particularly in this era of accountability, those responsible for overseeing GE programs understand that this is not a simple or straightforward expectation to satisfy. Unique in size and span, GE is organizationally idiosyncratic in many ways. Programs typically lie outside primary academic units such as departments and colleges, creating significant challenges to producing meaningful data about learning across the program. The sometimes incohesive nature of GE results in few well-defined ways for faculty to engage with the program beyond their own classes or to have improvement of quality recognized as part of formal evaluation processes. For program assessment data to be actionable, faculty must think beyond their individual courses (i.e., at the program level), and they must see assessment data as useful. The GE assessment process needs to make sense in the local environment while speaking to broader contexts. It needs to
be sustainable, practical, and efficient. Ideally, it should also capture students’ best work in a way that is authentic to their learning and speak meaningfully to administrators, parents, and members of the public. All of these aims are challenging to meet under the pressure of limited resources. Furthermore, an assessment strategy that answers current questions and addresses the information needs of today’s stakeholders may not be the strategy that was most useful in previous years or will be most appropriate in the future. Institutions change, programs change, students change, and information needs or desires change. The result is that any given assessment solution is likely to be both temporary and contextual. GE assessment may be most productively conceived by analogy with fields such as ecology, sociology, and systems science, where the challenge is to better understand the nature and complexity of organizations while considering interactions between the system and its environment.

Peggy Maki summed up the GE assessment situation well as early as 2004: Institutions should develop a cycle of inquiry that includes all academic programs, including GE. “Rather than prescribing an absolute model” for colleges and universities to follow, Maki (2004) recommended an approach in which institutions commit to assessment that is supported by appropriate “structures, processes, decisions, and . . . communication,” has necessary “resources and support,” and results in “campus practices” that demonstrate the sincerity of commitment to assessment that supports learning (p. 172). Guiding principles articulated more recently in Maki’s (2017) approach include ensuring that assessments are “internally driven,” “inclusive of internal stakeholders,” “bolstered by collaboration,” “anchored in continuous reporting and interrogation of assessment results,” “responsive to student needs,” and “valued by the institution” (pp. 81–92).
This chapter unpacks some of the structures, processes, decisions, and practices that can be used in support of GE assessment, describing for each what is required in terms of resources and support. Beginning with an articulation of key factors to consider in light of Maki’s (2017) guiding principles when designing assessment strategies, this chapter will summarize some of the pros and cons of common GE assessment tools. Ultimately, wise decision making requires analysis in terms of the campus context, as the principles make clear. In that light, this chapter will also describe two campus process models as examples of how institutional factors and changing needs impact assessment strategies and processes.
Considerations When Selecting an Assessment Strategy

Awareness of multiple stakeholders and their competing interests is a major factor to consider in the selection of an assessment strategy. To ensure actionability, assessment must speak first and foremost to the faculty who develop and deliver GE to students. Faculty interest and trust in findings are a vital step toward using assessment data for
improvement in student learning around outcomes that the institution has identified as essential. At the same time, assessment is typically expected to address the information needs, such as data for benchmarking, of senior administrators or boards. Accreditors are another stakeholder group in the assessment process. Accreditation has been helpful in moving institutions forward on assessment efforts but has sometimes reinforced a pro forma approach to assessment activities. Perhaps other audiences would be interested in GE assessment findings if they were readily available. The Voluntary System of Accountability (VSA) was developed on the premise that prospective students and their parents would benefit from seeing and comparing assessment outcomes as part of the process of selecting a college or university. Meeting information needs of multiple stakeholders with competing interests and motivations can make GE assessment challenging.

A second consideration is faculty engagement. Virtually all assessment strategies require some level of faculty involvement, with varying degrees of investment in terms of faculty time. A faculty committee may be tasked with selecting a standardized tool or developing materials to be used on their campus. Individual faculty members may be asked to set aside class time to administer a survey or test or be expected to build in an assignment intended for program assessment. Faculty also participate in norming and scoring of assessments. At the least, faculty interest and faith in results are essential if findings are to inform decision making for the GE program and courses within it. At the same time, faculty ownership of and engagement in assessment can raise questions, for some stakeholders, about the quality of the findings. Are assignments across courses equal in rigor? Are the artifacts used for assessment valid measures of the outcome in question and are they scored reliably? Has a process of calibrating and norming been included?
Can local faculty be trusted to score students fairly, particularly in cases where contract renewals or merit raises are at stake? Are faculty compensated for their participation in assessment activities or is it considered part of their contractual obligation? How are contingent faculty or teaching assistants included in the process? Despite such questions, faculty engagement also presents a significant opportunity. Faculty who are involved in selecting, developing, proctoring, norming, scoring, and/or decision making as part of GE program assessment gain skills that can be useful in future work in GE or academic departments and gain knowledge about student learning that can increase their effectiveness as teachers. Assessment activities can unite faculty in a common purpose related to learning and encourage communal commitment to GE goals and ownership of the GE program. Faculty who participate in well-planned assessment activities may feel a greater connection to the institution beyond their home department, satisfaction in meaningful collaborative work with colleagues, and commitment to improving student learning. This engagement can encourage a culture of shared institutional stewardship.

Participating in assessment, in whatever fashion, is undeniably one more thing asked of faculty who are already stretched thin. Resource considerations, including faculty time, faculty expertise, and actual dollar costs, are a third factor driving assessment
10
gener ally speaking
strategy selection. Institutions deal with resource questions in various ways, including building assessment into faculty load, finding funding for stipends to reward faculty who dedicate time for assessment, incorporating assessment expectations into tenure and promotion processes, and relying on faculty volunteers. Colleges can choose to participate in nationally standardized surveys, tests, and scorings to reduce demands on faculty time and expertise, but such an approach will not diminish the costs and often, at least in out-of-pocket dollars, will increase them. Even with such approaches, however, local faculty involvement in data review and program decision making remains essential. Resource implications, visible or not, are associated with every assessment strategy. A fourth factor to consider when choosing an assessment strategy is the data collection process itself. Institutions may seek to collect work products or conduct surveys at the point of graduation in order to document GE outcomes that are achieved across an entire program. This strategy considers the effect of GE courses, major courses, and cocurricular experiences. Other institutions may focus on GE outcomes at the course level, which in many cases will occur early in the curriculum. Either approach has implications for data collection. Student work products may be collected from individual courses, whether GE courses themselves, capstones, or other courses identified as appropriate. Alternatively, work may be produced out of class, often during designated assessment days. Authenticity of the work assessed is another important aspect of data collection. Although in-class work is undeniably authentic in relation to the course where the work is produced, it may not be an authentic demonstration of the GE outcome expected by the institution. 
For example, a college’s quantitative reasoning outcome for GE may be defined in terms of drawing reasonable conclusions based on quantitative data found in everyday personal and civic life. Success on a worksheet of math problems or a calculus final exam would say little about quantitative reasoning so defined. Furthermore, the means of data collection depends on the outcome being assessed. Many faculty members include assignments in their courses that require students to demonstrate outcomes such as critical thinking, information literacy, or written communication. Outcomes such as quantitative reasoning or intercultural knowledge are less likely to be addressed in classes across the curriculum, meaning that careful attention will be required to ensure a representative sample. Other outcomes, such as oral communication or teamwork, may not yield artifacts that are readily scorable outside of the class setting. Students may also be encouraged or required to participate in GE assessment processes. In either case, student motivation to produce their best work is a consideration. Work produced as part of a course grade is highly likely to reflect genuine effort, making that approach appealing. On the other hand, work produced for a grade often varies considerably from course to course (e.g., work generated for writing assignments in capstone courses and used to assess information literacy may vary considerably depending on the discipline). If work is to be produced in class without counting toward a grade or is generated in an out-of-class setting, whether through a standardized test or a locally developed task, ensuring that students will be motivated to produce their best work requires special consideration.

Additional complicating factors abound in the current higher education landscape. It is common for courses taken across locations, in different formats (e.g., dual credit, concurrent enrollment, and advanced placement courses taken in high school), or tied to articulation agreements to be accepted for GE credit. Courses with the same catalog number and description may be offered through various modalities, including online, with inconsistent results, yet all fulfill GE requirements equally. State systems or other overarching organizational structures may mandate that commonly numbered courses offered at multiple institutions are de facto the same, with little or no data to back up that assumption. Since accreditors expect institutions to provide evidence that students are meeting the GE learning outcomes regardless of course modality or location, GE assessment strategies must be selected to address this complexity. Articulation agreements, whether between institutions or tied to state systems or other consortia, allow transfer students to bring in entire blocks of GE credits, sometimes with minimal attention given to alignment of GE program learning goals between institutions. When this occurs, institutions assessing GE at the course level find that transfer students may be, by default, exempt from the assessment process. The simple fact is that fewer students are taking all, or any, of their GE credits at the institution from which they graduate. However, the graduating institution is responsible for ensuring those students achieve the outcomes identified as institutional GE outcomes. This state of persistent flux characterizes faculty as well as students.
Higher education instruction is increasingly in the hands of part-time, non–tenure-track faculty, many of whom work at multiple institutions. Leaner budgets often lead to larger classes, particularly for the GE service courses, which typically occupy the lower echelons of the curriculum. A growing number of faculty involved in online instruction are not physically present on campuses and thus may be less integrated into academic units and less likely to be engaged with assessment beyond their own courses. These faculty often have limited access to faculty development opportunities yet are responsible for the learning of large numbers of students, and they are less likely to be involved in the norming, calibration, or assessment workshop days that provide ownership of and professional development for assessment activities.

It is in this environment, with these and potentially additional complicating factors and competing expectations, that GE assessment occurs. Finding the right strategy may be an unrealistic way to frame GE assessment. Finding useful strategies that address the most pressing information needs may be a better approach. And in many cases, that will mean using multiple strategies, perhaps in tandem or sequentially.
Assessment Strategies

The good news is that, depending on the factors most important to a campus, there are plausible strategies that can be used to assess GE programs. In many cases, an institution may want to use more than a single approach as part of a strategy designed to respond to stakeholder information needs, to engage faculty in ways that acknowledge their program ownership and make use of their expertise, and to triangulate findings.
Indirect Assessments

The easiest strategy, requiring little of faculty and staff but generating potentially useful information about GE outcomes, is to conduct indirect assessments using nationally standardized tools. The National Survey of Student Engagement (NSSE), for example, provides student perception information about behaviors known to correlate with learning. Furthermore, it allows an institution to compare itself with peers. How often do your students give presentations compared with students at other institutions of your sort? Do they see diverse perspectives included in course discussions or assignments? Data from questions addressing such issues can help faculty understand how students are experiencing learning around GE outcomes. In addition, data from nationally standardized tools may be especially meaningful for stakeholders in administration or on a state board.

As valuable as standardized assessments can be, like any tool, they have their shortcomings. Use of such data may raise questions about the representativeness of a survey sample (e.g., were respondents a good match demographically and academically with the student population at large? Were respondents more likely to be either the most or least satisfied students?). Faculty may question whether the respondents accurately reflect their own students (e.g., how can an English faculty member know whether the findings are relevant to her own teaching if few respondents were enrolled in English courses?). Faculty and other stakeholders may wonder if students are accurate judges of either the curriculum or learning effectiveness. Students may believe they are writing frequently and well because they are comparing their college experiences to what they remember from high school (i.e., students’ expectations regarding the frequency and length of college writing assignments may be unrealistic, and, thus, their responses less meaningful than desired).
Indirect assessments can also be developed locally, with the advantage of focusing on questions meaningful to the institution’s faculty. If faculty develop survey questions, test them with a small sample of students, and then administer them widely (e.g., in a GE capstone), questions about the quality of the sample can be virtually eliminated. With an in-class administration, responses may be more thorough than what can be achieved using a national tool, which students take individually. On the other hand, greater faculty involvement and expertise will be needed to develop the tool; poor results are inevitable with a badly designed instrument. In addition, local surveys provide no opportunity to compare findings with those from students at similar institutions, a key benefit of a standardized approach to assessment.
Direct Assessments

Nationally standardized tools are available for direct assessment as well as indirect, and they have many of the same strengths and weaknesses. The Collegiate Learning Assessment (CLA+), developed by the Council for Aid to Education; the ACT Collegiate Assessment of Academic Proficiency (CAAP); and the Educational Testing Service (ETS) Proficiency Profile, among others, aim to provide institutions with reliable, validated, comparative information regarding the learning outcomes of their students. However, recruiting students to take the test, and to do their best work on it, proves surprisingly difficult at many institutions, leaving faculty and GE program directors wondering about the value of the results. Furthermore, it is impossible to know how well the test developers’ and scorers’ definition of, for example, analytical thinking aligns with faculty expectations on a given campus. If there are questions about the meaningfulness of test results, faculty are unlikely to use the data in decisions about GE program needs. Despite those drawbacks, faculty may be grateful to have the assessment work taken out of their hands, as test development, administration, and scoring are managed by someone else. Outsourcing the work of assessment, however, is less likely to be perceived as a virtue by program directors or faculty whose interest is in genuinely understanding student learning.
Course-Embedded Approaches

The many questions about student motivation and participation in assessments disconnected from their coursework have led many institutions to consider course-embedded approaches to GE assessment. If the student work to be assessed is created in a class for a portion of the course grade, it is reasonable to assume that students’ best work is being scored, which is a key consideration for understanding the outcomes of their learning. In addition, the work created is rooted in a real context, using information relevant to courses students are taking. If the assignment is created for a course, completed in conjunction with a course, and scored by the teacher of the course as part of normal grading, no extra work beyond submitting scores is required on the part of either faculty or students. The only requirement on the part of the faculty member is to score student work using a rubric aligned with the GE outcome of interest, ideally one chosen for across-the-institution use, and then to enter rubric scores into a database accessible to GE program administrators.

Despite the appealing nature of this low-effort assessment strategy, drawbacks remain. Faculty must agree to follow the rules by using the right rubric and entering scores into the approved database. Assessment may need to occur in a GE course in order to ensure such cooperation. For GE outcomes not typically taught in all majors (e.g., diversity, quantitative reasoning), this may mean collecting outcome assessment data from students in their first or second year. If faculty intend for GE outcomes to be achieved across the curriculum and through a combination of in-class and cocurricular experiences, assessing the achievement of outcomes at this early point may not be ideal. Finally, there is often skepticism about the degree to which faculty hold their own students to the highest institutional standards. Will the scores a chemistry professor gives for her students’ written communication reflect high institutional standards? Or will student scores be influenced by concern about how individual faculty teaching will be viewed (e.g., through student evaluations or faculty contract renewal)? Will student scores be influenced by faculty expectations regarding the importance of written communication as an outcome for chemistry students?

Concerns about scoring quality and consistency can be alleviated if the work products generated in classes undergo an institution-wide, collaborative scoring process. Depending on institution size, such a strategy may mean faculty who score work are unlikely to know which teacher made the assignment or which student generated the response. However, other problems are introduced. Work products from classes in math, music, and history may all demonstrate written communication skills, but the assignments could be quite different. Can they be scored fairly through a single process, with a common rubric, by a cross-disciplinary group of faculty members? Who will do the scoring, and what will be their incentive to participate? What kind of norming process will be used to ensure all scorers apply standards consistently?
Concerns about the consistency of student work products and the applicability of rubrics can be resolved using an institution-wide, locally developed tool, intentionally aligned with outcomes for the GE program, to be scored collaboratively by faculty from across the institution using a shared rubric. Work products can be produced in classes, with faculty agreeing to administer the test that they have designed or agreed to and with students experiencing those assessments as course requirements meriting their best work. Alternatively, assessments can be administered outside of class, with faculty incentivizing or requiring their students to participate thoughtfully in the assessment process. Incentives can include bonus points, incorporation of assessment participation into the course grade, or other possibilities appropriate for the faculty member, subject, and student population.

There are several benefits to this kind of approach. Faculty involvement in every phase of the process, from creating or accepting the tool to ensuring participation to scoring, means that faculty will have genuine ownership of the process and, therefore, the results. Faculty scorers can participate together in norming as a first step in the scoring process. Debriefing after scoring allows those involved in scoring to also participate in analysis and conclusion drawing. On the other hand, this kind of process requires a motivated faculty community. Who will be able to provide that motivation? Will incentives encourage faculty to participate in the necessary work? Will faculty with useful expertise be willing to help, and will faculty across the institution be willing to assist with assessment of GE outcomes that are not directly aligned with their own intellectual interests? Quantitative reasoning, for example, should be assessed by faculty beyond the math department, and written communication as an outcome is not the sole purview of English faculty.
Innovative Approaches

Finally, it is worth noting that innovative approaches to GE assessment continue to be developed, refined, and implemented at institutions across the United States. E-portfolios have long been viewed as offering a tantalizing solution to the problem of data collection: all students can simply be required to upload evidence documenting their achievement of identified goals. Unfortunately, the integration of e-portfolios into learning assessment is more complicated, necessitating decisions about a unified understanding of institutional GE outcomes, technical needs such as determining the most appropriate platform, and faculty training and course pilot testing (Andrade, 2013).

The Association of American Colleges and Universities’ (AAC&U) VALUE Institute presents a new and highly promising GE assessment strategy. The institute has moved beyond its pilot phase and is now being used by institutions to provide benefits associated with both standardized direct assessments and course-embedded assessments (AAC&U, 2018). Using rubrics created and calibrated by teams of cross-disciplinary faculty from across the nation, trained faculty raters provide standardized scores for work products that subscribing institutions submit from classes on their campuses (AAC&U, 2018). Each work product is double-scored, and faculty whose students’ work is submitted can implement the VALUE rubrics in advance, ensuring they have a clear and accurate understanding of how the outcome is being defined (AAC&U, 2018). Faculty can also choose to complete training and participate in scoring for other institutions (AAC&U, 2018). These factors help improve alignment and increase the likelihood that local faculty will see assessment findings as meaningful and useful.
Case Studies: Examples of GE Assessment in Practice

Not every methodology will work for every institution, and not every institution has the same stakeholder needs or GE design. The following two case studies show how different the process can look and elucidate how methodologies may change over time as institutions respond to change.
University of North Dakota

Over the last 10 years, GE assessment at the University of North Dakota (UND) has incorporated each of the assessment techniques outlined earlier in this chapter. The evolution to the current approach represents closing the loop on the assessment process itself (i.e., trying something, analyzing the outcome given the local context, and making decisions about future directions based on that analysis). This process has been largely organic and grassroots, both in the sense of examining the pros and cons of each approach and of collaboratively determining new directions. The longest-running approach used during this 10-year period, however, has been collaborative institutional assessment. Prior to developing this process, most GE assessment at UND was course based.

At the beginning, a newly developed GE program, predicated on learning outcomes rather than the previous focus on distribution, was just getting started. Naturally, the campus wanted to know how well the new program was helping students achieve each of its learning outcome areas. Although course-based approaches initially seemed to be an obvious assessment choice, given that the courses were the means of addressing the program’s learning outcomes, significant drawbacks quickly became apparent. Most of the program courses were at the 100 or 200 level. Assessing outcomes by course meant that assessment information would miss learning effects attributable to the program in its entirety, which extended across the 4-year curriculum by including a capstone requirement. It also meant cocurricular or extracurricular learning, learning in the major, and learning from transferred courses were usually excluded from consideration. A course-based approach would fail to see that GE outcomes were achieved through a combination of all these learning opportunities.
Because the new program included a GE capstone, the prospect of obtaining outcome-level (i.e., at the point of graduation) information through course-based assessment initially seemed promising. However, a critical problem became evident. GE capstones were often taught by disciplinary faculty for the department’s seniors and thus had disciplinary content as their focus. As a result, assessing student work for certain learning outcomes was limited by students’ disciplinary major. For example, the fine arts capstone assignments had little likelihood of generating artifacts that could be used to assess seniors’ quantitative reasoning. The assessment’s representativeness, therefore, became a critical shortcoming that necessitated finding an alternative approach.

Such an alternative emerged, in part, through the institution’s previous experience with the Collegiate Learning Assessment (CLA+). The CLA+ uses scenarios intended to be realistic and engaging in order to elicit student work that speaks to the learning outcome each scenario is designed to match (Council for Aid to Education, 2018). The approach at UND came to rely on scenarios like these, usually referred to as performance tasks (Chun, 2010). In addition to addressing many of the issues presented by the course-based approaches, additional benefits related to faculty development and campus buy-in emerged. The initial, ongoing, and significant involvement of faculty was key to the ultimate success of this approach at UND.

Beginning at the scenario-development stage, a team of faculty and staff comes together to brainstorm a scenario that matches both a specific learning outcome and the corresponding rubric that articulates the meaning of that outcome. The scenario, and the type of student work for which it calls, must also draw students into the role the scenario asks them to assume. These dual challenges of alignment and engagement require careful thought. Because these scenarios are usually intended to place students in real-life roles, the work required often takes the form of a product they may want, or be required, to produce in a personal or civic setting after graduation, regardless of major or intended profession. Examples include memos, letters to the editor, and responses to a call for applicants, among others. Identifying a suitable scenario with an associated artifact for the student to produce, aligned with the outcome and rubric for the assessment, is a critical first step for a team. The scenario must be highly engaging so that students will produce their best work. Developing such a scenario requires time and thought. The experience at UND suggests this is work that faculty embrace in the same way they embrace the challenge of developing high-quality and engaging assignments for their classes. Since assignments of this kind represent good pedagogical practice, the task-development process has secondary benefits, namely the exposure faculty gain to an engaging teaching strategy that may be useful in their classes and the experience with an effective assessment tool that may ultimately be applicable in their disciplinary programs (Hutchings, Jankowski, & Schultz, 2016).
Additionally, tasks developed by UND teams have been accepted into the National Institute for Learning Outcomes Assessment (NILOA) Degree Qualifications Profile (DQP) assignment library, a peer-reviewed repository. This public sharing of collaborative assignment and assessment design allows faculty to document their scholarly approach to teaching and their contribution to institutional assessment (Borysewicz et al., 2018; Carmichael, Kelsch, Kubatova, Smart, & Zerr, 2015).

With a performance task developed, attention turns to recruiting students to produce work responding to the scenario, role, and task. At UND, students are recruited from GE capstone courses, ensuring a sample of students who are near graduation and have experienced the cumulative effect of both GE and the rest of their college curriculum and experience. The next consideration is whether students will complete the scenario within the designated capstone class time or outside of class, as well as whether capstone course credit or another incentive will be given for the work. These are local implementation decisions requiring varying degrees of faculty involvement and buy-in. At UND, students complete the task outside of class time during 2-hour blocks set aside specifically for the purpose. This requires faculty buy-in to recruit student participants as well as a significant level of coordination and administrative support. It does, however, avoid challenges related to different class meeting durations. In addition, some students may be taking asynchronous online capstones; completing the assessment outside of class provides opportunities to involve students for whom there is no designated class time during which the assessment would normally occur.

The third and final aspect of the UND GE assessment process is translating student responses to the scenario into information about achievement relative to the scenario’s associated learning outcome. Here, again, the implementation at UND has a beneficial outcome transcending its explicit purpose. By gathering faculty and staff from across campus for a half-day scoring session, it becomes possible to efficiently score hundreds of student work products in a meaningful way. This scoring process begins with a careful norming discussion, during which substantive conversations about the faculty’s expectations are juxtaposed with actual student work. Scoring then proceeds, and the session ends with a debriefing focused on overall impressions of student work and implications for course-level or GE program–level changes. These loop-closing conversations continue over subsequent months. Because a significant number of faculty are part of this process from beginning to end, the endeavor has a feel of collective ownership. An administrative office oversees the logistical details and organizes the various components, but the work itself is hands-on for a sizable segment of campus. The resulting buy-in has led to meaningful conversations about GE program outcomes, served as a faculty development opportunity, and generated crucial information for making program improvements.
University of South Dakota

The University of South Dakota (USD) has taken a different approach to GE assessment, having begun with a standardized tool and moved more recently to an embedded, collaborative process similar to that of UND. USD is governed by a statewide board of regents (BOR) that oversees GE assessment policy. In the late 1990s, the BOR adopted the Collegiate Assessment of Academic Proficiency (CAAP) exam as the mechanism through which to assess GE goals and learning outcomes, alongside annual reporting of course grades for courses that meet those goals. USD administered the CAAP exam to all students after completion of 40 credit hours, including transfer credits. Passage of the CAAP at designated cut scores for each CAAP area was required for a student to be eligible to graduate. Students not reaching the required score on the first attempt were provided remediation and allowed to retest up to two additional times. If students failed to reach the cut score on the third attempt, they would be unable to enroll in subsequent coursework at any of the state BOR institutions, essentially eliminating their opportunity to earn a degree from a state university. This method of assessment not only provided information on how well each campus was achieving GE outcomes but also served as a direct requirement for students to meet the tested GE outcomes in order to graduate.
After 10 years of implementation, an internal study of data from all BOR institutions found that success on the CAAP was directly related to student preparedness as determined by college readiness scores. Essentially, if students met the ACT college readiness benchmark upon entry, they were 99.9% likely to pass the CAAP at the appropriate cut score. Additional review found that some students passing the math or English GE course were not passing the corresponding subsection of the CAAP exam. Based on the data review, the increase in the number of students transferring GE credit, and the growing interest in competency-based courses and programs, the BOR initiated a review and update of the GE assessment process at the system level.

In 2016, the BOR eliminated the CAAP exam for GE assessment and revised the process into a two-pronged approach involving course-level assessment of six GE outcomes using an embedded artifact methodology modeled after the multi-state collaborative (Crosson & Orcutt, 2014; Pike, 2014) and program-level assessment of cross-curricular skills. At the course level, the new plan calls for the six outcomes to be assessed on a 3-year cycle, with two assessed each year. Faculty who teach approved GE courses assess course-embedded artifacts with rubrics developed by system-level, discipline-specific faculty councils. A sample of artifacts is then submitted to a system-level assessment summit for norming and calibration sessions with disciplinary faculty. At the program level, GE is assessed via cross-curricular skills: each program must select five of 11 cross-curricular skills and include assessment methods for those skills in its academic program assessment plan. Results of this assessment are then summarized and reported at the system level via the program review process.

Moving to a new strategy for GE assessment has brought several challenges. Chief among them has been communication, especially regarding mechanics and logistics.
With multiple levels involved (e.g., system, institutions, colleges, departments, courses), communication can break down in numerous places and for various reasons. Initially, there was minimal standardization at the system level regarding artifact collection and submission across institutions, in order to provide maximum flexibility for campus-level processes. Discussions after the first summit led to modifications in sampling strategies and artifact submission methodologies, resulting in a more standardized process with common templates to facilitate better data analysis at the system level. Development of reference documents, frequently asked questions, and common reporting templates has improved clarity and efficiency while still preserving campus-level autonomy. Other logistical challenges with artifact cataloging and upload also existed. For example, the process is straightforward for typical text or image files, but video files for fine arts, communication, and other multi-format disciplines proved more problematic due to file size, the location of the material to be assessed within a larger video file, and the number of video submissions.

Experience from the first summit highlights two key considerations: the need to be flexible in adapting to changes or improvements in assessment processes and approaches, and the importance of the input that comes naturally as part of norming and debriefing discussions. These discussions surface many curricular and assessment improvement suggestions, but those suggestions are not always captured because the emphasis is too often placed on checking a proficiency box. For example, questions may arise about whether it is appropriate to include a specific course as part of the GE curriculum. Listening to disciplinary faculty discuss whether the content of a specific course, based on the review of a dozen artifacts from that course, matches the stated GE learning outcomes should lead to a discussion about whether the course should be modified or removed from the GE approved course list. Without this context, results indicating a failure to meet GE outcomes can lead to course improvement strategies that may not be necessary or prudent; the wiser decision may be simply to remove the course from the list. Another example is the realization that a prerequisite and its subsequent course are both included in the GE list. Should both courses be counted as meeting GE requirements, and should work from both be assessed? If so, should there be a higher expectation of student performance in the subsequent course? Discussions in these sessions not only lead to curricular improvement but also facilitate faculty ownership of the GE program and assessment process.

Artifact alignment with learning outcomes is crucial to an embedded assessment process. Faculty need to select the artifacts and serve as the discipline experts who determine whether outcomes are being met. However, not all faculty are assessment experts or proficient in assessment methodology, and it is possible that some submitted artifacts may not actually align with learning outcomes. This was recognized in the USD calibration and debriefing sessions, when some faculty recommended that better descriptions and guidance on artifact selection and rubric alignment be provided prior to artifact collection and review.
In the earlier standardized exam strategy at USD, assessment was taking place at the student level and faculty were largely disconnected from the assessment process. With the new embedded artifact process, faculty participation has greatly increased. While this is a clear benefit, concerns have been expressed about possible use of results for evaluating faculty performance. Fear of reprisal is not the only symptom of faculty resistance to assessment, and resistance may be present even where faculty have routinely been involved with GE assessment if the process is perceived as compliance driven (Kramer, 2009). At USD, data were provided in aggregate form to allay these concerns. The continued ownership of the process, from development and modification of rubrics to calibration and debriefing sessions, to closing the loop with campus-level recommendations for improvement, should alleviate concerns of reprisal and move the culture toward improvement.
assessing the impact of general education
Conclusion

The key in developing any GE assessment is finding the right approach. Institutional context and culture, external factors, GE program type, and other needs that will naturally evolve with time are key variables to be considered. Including appropriate stakeholders, especially faculty who may be performing much of the assessment, is a necessary component of defining learning outcomes, determining best practices, and ensuring quality results are obtained and shared. The considerations described in this chapter, along with the various strategies for GE assessment, represent a starting point from which institutions can craft approaches responding to their own needs and challenges.
References

Association of American Colleges and Universities. (2018). The VALUE institute: Assessment as transformative faculty development. Retrieved from https://www.aacu.org/aacu-news/newsletter/2018/november/campus-model
Andrade, M. (2013). Launching e-portfolios: An organic process. Assessment Update, 25(3), 1–16. doi:10.1002/au.253
Borysewicz, K., Cavanah, S., Gable, C., Hanson, D., Kelsch, A., Kielmeyer, A., . . . & Zerr, R. (2018). Information literacy performance task. Retrieved from https://und.edu/academics/essential-studies/_files/docs/information-literacy-scoring-session-summary-report-may-2018.pdf
Carmichael, J., Kelsch, A., Kubatova, A., Smart, K., & Zerr, R. (2015). Assessment of essential studies quantitative reasoning skills. Retrieved from https://www.assignmentlibrary.org/assignments/559c3986afed17c65f000003
Chun, M. (2010). Taking teaching to (performance) task: Linking pedagogical and assessment practices. Change: The Magazine of Higher Learning, 42(2), 22–29. doi:10.1080/00091381003590795
Council for Aid to Education. (2018). CLA+: Measuring critical thinking for higher education. Retrieved from https://cae.org/flagship-assessments-cla-cwra/cla/
Crosson, P., & Orcutt, B. (2014). A Massachusetts and multi-state approach to statewide assessment of student learning. Change: The Magazine of Higher Learning, 46(3), 24–33. doi:10.1080/00091383.2014.905423
Hutchings, P., Jankowski, N., & Schultz, E. (2016). Designing effective classroom assignments: Intellectual work worth sharing. Change: The Magazine of Higher Learning, 48(1), 6–15. doi:10.1080/00091383.2016.1121080
Kramer, P. I. (2009). The art of making assessment anti-venom: Injecting assessment in small doses to create a faculty culture of assessment. Assessment Update, 21(6), 8–10. doi:10.1002/au.216
Maki, P. L. (2004). Assessing for learning: Building a sustainable commitment across the institution. Sterling, VA: Stylus.
Maki, P. L. (2017). Real-time student assessment: Meeting the imperative for improved time to degree, closing the opportunity gap, and assuring student competencies for 21st-century needs. Sterling, VA: Stylus.
Pike, G. R. (2014). Course embedded assessment designs: Lessons from the MSC. Assessment Update, 26(6), 7–10. doi:10.1002/au.30004
Chapter 3
Closing the Assessment Loop in General Education
Nhung Pham and Doug Koch
Assessment of general education in higher education is not a new concept. According to Penn (2011), one of the first comprehensive assessments of general education took place in the late 1920s. Over the past several years, various individuals, organizations, and legislators have continued to express concerns about the quality of higher education. Those concerns have triggered legislation and requirements at the federal and state levels and by regional accreditors to assess and report on student learning (Bassis, 2015; Jones, 2009; Nelson, 2014). The regional accrediting organizations identified and recognized by the Council for Higher Education Accreditation (CHEA) all include requirements related to assessing student learning outcomes for general education. The accreditors have requirements for articulating the outcomes as well as measuring and documenting student success (Council for Higher Education Accreditation, n.d.). Although assessment has been ongoing for many years, there are still components of the process that lack clear understanding and specific strategies or procedures for completing the assessment cycle. "Closing of the loop," as it is termed, is one of the more challenging aspects of the assessment process and one of the key reasons why the assessment process is undertaken. Fletcher, Meyer, Anderson, Johnston, and Rees (2012) state, "assessment provides information about student learning, student progress, teaching quality, and program and institutional accountability" (p. 119). Closing the loop requires effective and appropriate decisions to be made and actions carried out that drive improvement. Assessment efforts seek to answer whether students are
learning (Rawls & Hammons, 2015), but the secondary question is, “What are we doing to improve that learning or how are we using the data from assessment to improve the learning and learning process (i.e., closing the loop)?”
Steps Leading to Closing the Loop

Before engaging stakeholders in closing the loop, some considerations need to be addressed to facilitate the discussion on appropriate actions for continuous quality improvement. The first is to choose an assessment measure or measures for the assessment of the general education (GE) program. Institutions are trending toward greater use of authentic measures of student learning, such as rubrics, classroom-based performance assessments, and capstones, to yield actionable information for the university (Jankowski, Timmer, Kinzie, & Kuh, 2018). About 400 institutions in the United States used the Valid Assessment of Learning in Undergraduate Education (VALUE) rubrics in 2017 as a common tool to assess their GE programs (Association of American Colleges & Universities [AAC&U], 2017). This authentic, classroom-based assessment approach is an indicator for the Excellence in Assessment (EIA) designation, which recognizes institutions for their intentional integration of campus-level learning outcomes assessment. The second consideration is to determine what evidence of assessment can lead to change (Kuh et al., 2015). An institution needs to consider collecting representative and authentic artifacts across courses aligned with the GE program. An institution can require that all courses submit an assignment that is most aligned with a specific GE competency, or it can rely on sampling techniques to collect three artifacts from each GE course (e.g., from the first, middle, and last students on the roster). Ensuring the reliability of GE results is the most important step in facilitating closing the loop. Normally, faculty first complete a norming or calibration exercise with the rubrics, and two raters are then assigned to score each artifact to ensure reliable results (Eicholtz, 2018). Once assessment data have been collected, disseminating an assessment report helps facilitate campus discussion and models transparency in the assessment process.
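As a concrete illustration of the sampling and double-rating steps described above, the short sketch below draws three artifacts per section (the first, middle, and last students on the roster), assigns two raters to each artifact, and computes a simple percent-agreement check on the resulting scores. The function names, roster structure, and rubric values here are illustrative assumptions, not a prescribed implementation of any institution's process.

```python
import random

def sample_artifacts(roster):
    """One simple sampling scheme: take the artifacts of the first,
    middle, and last students on a section roster."""
    if len(roster) <= 3:
        return list(roster)
    return [roster[0], roster[len(roster) // 2], roster[-1]]

def assign_raters(artifacts, raters, seed=0):
    """Give each artifact two distinct raters so every score
    can later be checked for inter-rater reliability."""
    rng = random.Random(seed)
    return {artifact: rng.sample(raters, 2) for artifact in artifacts}

def percent_agreement(scores_a, scores_b):
    """Share of artifacts on which both raters assigned the
    same rubric level (a rough reliability indicator)."""
    matches = sum(a == b for a, b in zip(scores_a, scores_b))
    return matches / len(scores_a)

# Example: a five-student section and four trained raters
roster = ["s01", "s02", "s03", "s04", "s05"]
picked = sample_artifacts(roster)             # ["s01", "s03", "s05"]
pairs = assign_raters(picked, ["r1", "r2", "r3", "r4"])
agreement = percent_agreement([3, 2, 4], [3, 2, 3])  # 2 of 3 scores match
```

In practice, the norming session would precede scoring, and low agreement on a competency would trigger recalibration among raters rather than any automated adjustment.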
In addition to providing overall assessment results for the whole campus, assessment results need to be further analyzed in each competency to facilitate individual competency discussions. Understanding the data context is key for the process of closing the loop; therefore, additional information in the assessment report such as course goals, pedagogical choices, or student demographics can support the interpretation of data (Schen, Bostdorff, Johnson, & Singh, 2018). The assessment report is also a major tool to facilitate the communication of assessment results to stakeholders. However, institutions need to develop the appropriate infrastructure and mechanisms through which to share those results. Baker, Jankowski, Provezis, and Kinzie (2012) emphasized the importance of communication in
documenting the actions for improvement. Clear avenues for communicating about assessment make it easier for institutions to provide examples of the use of assessment results in their day-to-day practice. A transparent system for communicating how assessment results are being used, and for sharing institutional examples internally and externally, is therefore essential.
Common Actions in Closing the Loop

The National Institute for Learning Outcomes Assessment (NILOA) administered a survey in 2017 asking provosts for feedback on how their universities used assessment results to close the loop. At the institution level, examples of changes informed by assessment results included modifying institutional assessment policy, changing placement policies for GE programs, revising course prerequisite policies, changing GE recertification processes, modifying advising processes, shifting the manner in which resources were deployed, and developing workshops and seminars focused on specific learning outcomes (Jankowski et al., 2018). In addition, universities also used assessment data to frame the assessment process itself and to improve GE assessment practices and processes with GE committees, GE assessment committees, and institutional assessment offices. The GE committee or GE assessment committee serves a critical role in reviewing data and improving the assessment processes. The committees often make recommendations and suggestions for practices such as the recertification process for GE courses, policy adjustments, recommendations for additional resources, and budget allocation requests for institutional leaders. Those recommendations and actions are documented in the GE assessment report to facilitate institutional leaders' decision making and are embedded in the next year's assessment planning to improve the whole assessment process. The subsequent actions from leaders are incorporated into planning accreditation efforts, setting institutional priorities and strategic plans to improve student engagement and success, and revising institutional outcomes (Jankowski et al., 2018).
At the program level, common actions for internal improvement of the GE program include setting faculty priorities; securing resources for professional development; revising curriculum, courses, and assignments; and informing the recertification process (Jankowski et al., 2018). Faculty are also engaged in scoring artifacts from their classes. When reviewing the assessment results, the GE assessment committees often find that different faculty interpret the rubric differently and, as a result, score students’ artifacts differently. Often, additional faculty professional development is needed, such as assignment design training to ensure validity of the assignment and norming or calibration training to ensure the reliability of the assessment results. Professional development
creates new opportunities and initiatives to support instruction across a campus and enhances support for instructors, courses, and students to achieve learning outcomes. The assessment results often highlight any needs for the GE committee to revise and update GE outcomes, to reconsider the alignment of GE courses in each competency, and to consider the sequence of course delivery (Blakeslee & Baker, 2018). As for GE skill-based competencies, students not only learn inside the classroom but also outside of the classroom through the support of cocurricular units. Therefore, the GE committee should consider mapping the GE curriculum with diverse cocurricular experiences on campus, such as TRIO and student support services, to provide the best learning opportunities for students (Albert, Harring, Heiman, & McGuire, 2018). In addition, assessment results also serve as evidence for the GE assessment committee to improve assessment measures for faculty. For example, some practices to revise rubrics involve clarifying the language used in the rubric, standardizing the number of levels of performance, reducing the number of dimensions of performance, and coding the common scheme to produce more meaningful assessment results from which faculty can improve their class instruction (Bruce, 2018). At the course level, GE assessment results can also help faculty make course adjustments, refine course outcomes, and revise assignments for intentionality and pedagogy to match the GE competency. To avoid discrepancies in scoring a GE assignment, faculty often utilize a common syllabus and a common assignment to assess students’ GE knowledge and skills across the campus. Using common tools can also enhance faculty collaborations across campus (Bruce, 2018). 
In addition, faculty can compare the GE assessment results from their own courses with university competency results, reflect on what the course has achieved, and identify the weaknesses that need adjustment in the next class to support better learning opportunities for students. Course-level assessment, then, creates a positive feedback loop, grows the culture of assessment, and helps improve the faculty member's daily teaching practices. To engage all faculty across campus in GE assessment activities, institutions should participate in assessment initiatives such as mapping curriculum, designing intentional assignments, developing or implementing pathways to completion, revising general education, increasing quality or scaling up high-impact practices, using VALUE rubrics and Liberal Education and America's Promise (LEAP) essential learning outcomes, developing competency-based programs, participating in statewide completion initiatives, and developing comprehensive student records (Jankowski et al., 2018). The NILOA provost survey indicated that the initiatives with the highest participation among institutions are mapping curriculum and assignment design (Jankowski et al., 2018). The types of initiatives in which institutions are currently involved point to some of the ongoing efforts to align and embed student learning outcomes assessment throughout institutions. These initiatives connect and align curriculum and student learning at multiple levels and significantly increase campus engagement in teaching, learning, and assessment.
There are three key constituent groups that must work collaboratively in closing the loop for GE programs: faculty, assessment committees, and the administration. Faculty engagement in closing the loop normally involves making adjustments to assignments, rubrics, and pedagogy. The assessment office and assessment committees would adjust the assessment processes and practices as a whole to ensure the assessment activities are the most meaningful to a specific campus. Recommendations and suggestions from faculty and committees that are documented in the assessment report facilitate administrator decision making regarding adjusting institutional policies and allocating appropriate resources to assessment activities.
Ongoing Process of Closing the Loop

Closing the loop is a continuous, ongoing process and needs to be carried out at multiple levels. In addition to the campus stakeholder levels (e.g., faculty and committees), leadership engagement in assessment activities plays a key role in fully closing the loop. Normally, campus stakeholders discuss results and document the recommendations and suggestions, which are then given to leaders (e.g., department chairs, deans, and the provost) to support decision making. In order to close the loop effectively, it is very important that higher-level leaders follow up on the actions and make appropriate interventions to ensure support and resources are provided to address the issues identified. For example, if assessment results indicated that students across campus have insufficient writing skills, it might be recommended that the writing center support skills development in writing. Campus leaders need to be able to respond to such a recommendation by making sure that the writing center has the resources to support those initiatives. Jankowski, Timmer, Kinzie, and Kuh (2018) concluded from the 2017 provost survey that governing boards have a key role to play in sustaining and developing meaningful assessment. These boards can endorse policies and priorities that support and encourage assessment and invite wider stakeholder involvement. After implementing identified changes or improvements, it is necessary to assess the impact of the change(s). The common practice is to repeat the assessment measures to compare student performance across classes in order to determine patterns in the data over time, or to identify additional metrics before designing action (Matuga & Turos, 2018; Schen et al., 2018). Assessing the impacts of the changes is a step in the process that is often lacking. Jankowski et al.
(2018) reported that provosts provided numerous examples of expansive changes at their institutions, drawing on assessment data, but too few had examples of whether the changes had the intended effects. In other words, institutions could document the evidence of use of assessment results for quality improvement, but still fall short of documenting the impact of the actions on the institutions’ improvement. So, the next step in closing the loop is to measure and evaluate
institutional improvements to ensure the actions implemented by an institution have the desired impact (Kuh & Ikenberry, 2018). GE assessment, closing the loop, and assessing the impacts of GE can go beyond the typical classroom. In addition to closing the loop in the classroom, for a skills-based GE competency such as writing, communication, or critical thinking, students learn a great deal outside of the classroom and through additional experiences offered by the college or university. Institutions also utilize assessment results from cocurricular units such as a center for teaching and learning, student services, and/or library services to support the GE assessment results and to improve cocurricular efforts and resource allocation. Jankowski et al. (2018) found that internal improvement efforts regularly benefited from academic assessment results, but significant improvements were not made from cocurricular assessment results. It is important to share assessment results not only with academic units but also with cocurricular units to ensure that they, too, are engaged in closing the loop to best facilitate student learning across campus.
Challenges

GE assessment often requires the coordination of many faculty and departments, making the process more complicated to schedule and coordinate than a single-department review. For a mid-size university with around 12,000 students, the data collection pertaining to one competency engaged about 30 faculty from 150 sections. Therefore, the first typical challenge is to engage more faculty in the process, to achieve a high participation rate, and, most importantly, to set up a time for faculty groups to discuss the assessment results. For example, critical thinking is a typical competency that engages diverse faculty groups from all colleges in a university, and scheduling a meeting to discuss the assessment results is a challenge. A possible solution is to conduct multiple closing-the-loop meetings so that all faculty members have a chance to discuss student performance and provide feedback on the assessment process. Closing the loop at the faculty level is an important early step in moving actions for improvement forward, because faculty have direct knowledge of their students' learning and are the foremost experts on what happens in their classrooms. Naturally, some issues arise at that level of the assessment process. Often, GE assessment involves a high proportion of adjunct faculty teaching GE courses, and they are not expected to contribute work beyond teaching the courses they were assigned (Allen, 2006). Since the compensation for adjunct faculty is normally lower than that of tenure-track faculty, it is difficult to expect adjunct faculty to do additional work. A discussion of assessment results and needed changes in a course without including adjunct faculty would not provide a complete picture of student learning across the university. This issue cannot be fixed at the faculty or assessment committee level; however, those groups have an opportunity to
document the challenges of including adjunct faculty in the assessment process in their meeting minutes so that higher-level administration can use that record as evidence to improve adjunct hiring policy, which can benefit the whole assessment process. Communication and transparency of the assessment process and results remain both an opportunity and a challenge. With the wide variety of faculty engagement from diverse colleges and departments, effectively communicating information about student learning in the GE program remains a target of opportunity for assessment work. Determining how to communicate assessment results effectively continues to be a challenge for the vast majority of colleges and universities (Jankowski et al., 2018). However, having a mechanism to communicate assessment results to internal and external stakeholders facilitates closing the loop (Baker, Jankowski, Provezis, & Kinzie, 2012). Outfitting a central assessment office with adequate personnel and resources can serve as a useful strategy for institutions looking to facilitate widespread communication and transparency about the assessment process. A centralized office can assist with the capture, maintenance, and dissemination of meeting minutes and notes pertaining to individual GE competencies; serve as a liaison between departments and the GE committee to close the loop; and document evidence of closing the loop in the assessment report for university leaders' decision making.
Sustainability

Assessment is a continuous cycle: as soon as the first loop is closed, the same steps begin again toward closing the second. Therefore, it is very important to have multiple-year strategies to sustain the GE assessment process. The first strategy is to have an effective infrastructure in place to conduct GE assessment annually. Second, dedicated personnel, such as a GE coordinator or staff in an assessment office, are a common means of sustaining the assessment process. Third, achieving meaningful faculty engagement in the GE process is the key to assessment longevity. Blakeslee and Baker (2018) recommended clarifying the purpose of assessment and addressing academic freedom in the assessment process. Jankowski et al. (2018) emphasized adequate resources to support assessment activities, such as investing in ongoing training, incentives for additional assessment responsibilities, and stipends for the calibration process, as sustainable strategies for effective assessment. Continuous professional development for faculty could be more meaningfully integrated with assessment efforts to support faculty use of results, technology implementation, and integration of efforts across an institution. In addition to continuous support (i.e., personnel) and investment in assessment training (e.g., assignment design, calibration), McConnell and Horan (2018) found that additional intrinsic motivators for engaging faculty in meaningful assessment include receiving a usable and digestible report, having assessment count toward scholarship (i.e., involving faculty by including their own disciplinary interests), and receiving recognition in
assessment. Campus recognition and reward systems acknowledge contributions to student learning, policy and procedures, and budgeting that rely on empirical evidence for decision making. Middaugh (2009) emphasized that effective assessment translates to effective use of human and fiscal resources. Not only should resources support the institutional teaching and learning process, but there should also be a clear linkage between what is gleaned from measures of student learning across disciplines and the way in which human and fiscal resources are allocated. Closing the loop and efficient resource allocation based on the evidence of assessment results is a typical institutional effectiveness cycle. Baker et al. (2012) found excellent assessment practices and processes that successful institutions have in common: embedding assessment into institutional processes, such as GE recertification or governance structures; securing support from administrative leadership by making resources available and supporting the professional development of faculty and staff; providing and encouraging space for discussion and collaboration; engaging faculty and fostering ownership of assessment; and sharing information widely regarding assessment and results of assessment to both internal and external audiences. In general, infrastructure (i.e., human resources), communication (e.g., assessment report, assessment committee), and secure resource allocation from leadership are the three key indicators necessary to sustain effective assessment practices for the whole university (Baker et al., 2012). 
To improve the assessment practices from the current process, institutions must engage a variety of stakeholders in their assessment practices (e.g., students, faculty, committees, surrounding community, alumni, employers), establish a more robust assessment of assessment processes (i.e., meta-assessment) to determine the impact of assessment on internal institution quality improvement, and become more transparent with assessment processes, results, and promising practices both internally and externally (Baker et al., 2012).
Conclusion

To maximize the benefit and drive improvement, the assessment cycle must go full circle and complete all steps in the cycle. Constituents at all levels of the institution need to be engaged and support assessment efforts. Faculty should be engaged and drive the process from a classroom instruction and course outcomes perspective. GE committees or GE assessment committees can support the process and impose necessary structures such as schedules and reviews. The committees also often review and update GE outcomes as well as generate reporting information to foster discussion and action at all levels of the institution. Administration needs to be supportive of the process and allocate resources in support of assessment efforts. Closing the loop requires that these entities collaboratively work to define actions for improvement and then, most importantly, measure the impact of those changes and report findings. Assessment and
improvement findings and continued recommendations form the foundation for ongoing assessment plans. The “loop” is a continuous, ongoing cycle that relies on data to drive not only our students’ learning but also the process of improving their learning.
References

Allen, M. J. (2006). Assessing general education programs. San Francisco, CA: Jossey-Bass.
Albert, S., Harring, K., Heiman, K., & McGuire, L. (2018, February). Integrating assessment and faculty development to lead curricular change. Presentation at the 2018 Association of American Colleges and Universities, General Education and Assessment Conference, Philadelphia, PA. Retrieved from https://www.aacu.org/sites/default/files/files/gened18/CS%207%20Presentation.pdf
Association of American Colleges and Universities. (2017). On solid ground: A preliminary look at the quality of student learning in the United States. Washington, DC: Author. Retrieved from https://www.luminafoundation.org/resources/on-solid-ground
Baker, G. R., Jankowski, N., Provezis, S., & Kinzie, J. (2012). Using assessment results: Promising practices of institutions that do it well. Urbana, IL: University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment.
Bassis, M. (2015). A primer on the transformation of higher education in America. Urbana, IL: University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment. Retrieved from http://learningoutcomesassessment.org/documents/BassisPrimer.pdf
Blakeslee, A., & Baker, W. (2018, February). Assessing student writing across programs and times: (Inter)disciplinary and programmatic perspectives. Presentation at the 2018 Association of American Colleges and Universities, General Education and Assessment Conference, Philadelphia, PA. Retrieved from https://www.aacu.org/sites/default/files/files/gened18/CS%205%20Presentation.pdf
Bruce, R. T. (2018). Assessment in the core: Centering student learning. New Directions for Teaching and Learning, 2018(155), 73–80. doi:10.1002/tl.20305
Council for Higher Education Accreditation. (n.d.). Regional accrediting organizations. Retrieved from https://www.chea.org/regional-accrediting-organizations
Eicholtz, M. (2018, February). Using campus institutional research to identify appropriate students as sources of data in general education assessment. Presentation at the 2018 Association of American Colleges and Universities, General Education and Assessment Conference, Philadelphia, PA. Retrieved from https://www.aacu.org/sites/default/files/files/gened18/CS%2013%20-%20Eicholtz.pdf
Fletcher, R. B., Meyer, L. H., Anderson, H., Johnston, P., & Rees, M. (2012). Faculty and students' conceptions of assessment in higher education. Higher Education, 64(1), 119–133. doi:10.1007/s10734-011-9484-1
Jankowski, N. A., Timmer, J. D., Kinzie, J., & Kuh, G. D. (2018). Assessment that matters: Trending toward practices that document authentic student learning. Urbana, IL: University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment.
Jones, D. A. (2009). Higher education assessment: Who are we assessing, and for what purpose? Peer Review, 11(1), 35. Retrieved from http://www.aacu.org/sites/default/files/files/peerreview/Peer_Review_Winter_2009.pdf
Kuh, G. D., & Ikenberry, S. (2018). NILOA at ten: A retrospective. Urbana, IL: University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment.
Kuh, G. D., Ikenberry, S. O., Jankowski, N. A., Cain, T. R., Ewell, P. T., Hutchings, P., & Kinzie, J. (2015). Using evidence of student learning to improve higher education. San Francisco, CA: Jossey-Bass.
Matuga, J., & Turos, J. (2018). Infrastructure support for using assessment data for continuous improvement. New Directions for Teaching and Learning, 2018(155), 81–88. doi:10.1002/tl.20306
McConnell, K., & Horan, E. (2018, February). VALUE: Lessons learned. Presentation at the 2018 Association of American Colleges and Universities, General Education and Assessment Conference, Philadelphia, PA. Retrieved from https://www.aacu.org/sites/default/files/files/gened18/CS%2026%20Presentation.pdf
Middaugh, M. F. (2009). Closing the loop: Linking planning and assessment—What can be done about the disconnect on most campuses between planning processes and assessment processes. Planning for Higher Education, 37(3), 5–14.
Nelson, C. (2014, November 24). Assessing assessment. Inside Higher Ed. Retrieved from https://www.insidehighered.com/views/2014/11/24/essay-criticizes-state-assessmentmovement-higher-education
Penn, J. D. (2011). The case for assessing complex general education student learning outcomes. New Directions for Institutional Research, 2011(149), 5–14. doi:10.1002/ir.376
Rawls, J., & Hammons, S. (2015). Disaggregating assessment to close the loop and improve student learning. Assessment & Evaluation in Higher Education, 40(1), 60–73. doi:10.1080/02602938.2014.885931
Schen, M., Bostdorff, D., Johnson, M., & Singh, R. (2018, February). Lessons learned from using the VALUE rubrics for the course-embedded departmental assessment. Paper presented at the 2018 Association of American Colleges and Universities, General Education and Assessment Conference, Philadelphia, PA. Retrieved from https://www.aacu.org/sites/default/files/files/gened18/CS12%20-%20Presentation.pdf
Chapter 4
The Impact: Two-Year Institutions

Angie Adams and Devon Hall
Publicly supported community colleges play a critical role in American higher education and have inspired a national discussion about college affordability and workforce preparedness. According to the American Association of Community Colleges (AACC, 2018), there are 1,103 community colleges in the United States: 980 are public institutions, 35 are tribally affiliated, and 88 are independent. The AACC (2018) also reported that for the 2016–2017 academic year, American community colleges awarded 833,093 associate degrees and 533,579 certificates. To award these degrees, each community college must undergo a process of accreditation, and general education courses and their assessment play a critical role throughout that process. Sadly, on campuses across the country, an internal debate persists over the primary mission of these institutions. Consequently, at many community colleges, technical training and traditional general education programs compete for declining financial resources.
The Evolution of General Education Courses in Junior Colleges and Community Colleges

In 1901, William Rainey Harper, president of the University of Chicago, co-founded the nation's first private junior college to provide the traditional first 2 years of undergraduate education to those wishing to transfer to a university. Thus, 2-year colleges
offered general education courses taught by faculty committed to passing on academic disciplines to their students (Hanson, 2017). From their inception until around the 1940s, 2-year colleges were commonly referred to as junior colleges (Cohen & Brawer, 2008). In fact, in 1922, during the second annual meeting of the American Association of Junior Colleges, a junior college was defined as “an institution offering two years of instruction of strictly collegiate grade” (Bogue, 1950, p. xxvii). Two-year higher education institutions that focused exclusively on skills training, therefore, did not meet the true definition of a junior college (Bogue, 1950). During the earlier decades, the term junior college was used to describe private 2-year colleges, while the term community college was used to describe publicly supported institutions (Cohen & Brawer, 2008). This changed between World War I and World War II as private junior colleges were transformed into public community colleges, a change accompanied by a new focus on technical and vocational curricula. By the 1930s, state universities had grown frustrated with their lack of control over community colleges, so advocates began pushing community colleges to focus solely on vocational training (Levin, 2000). This movement produced a gradual mission creep in 2-year colleges, from a commitment to the liberal arts toward economic growth through workforce training (Hanson, 2017). In 1947, President Truman's Commission on Higher Education issued a report recommending the establishment of a community college system. The commission's goal was to provide the nation's population with free access to 2 years of education beyond the secondary level (Cohen & Brawer, 2008). The concept continues to have appeal today, with several presidential candidates supporting free community college education. The concepts of open access and low-cost postsecondary education, coupled with the G.I. Bill signed by President Franklin Roosevelt years earlier, resulted in many veterans seeking higher education. The G.I. Bill, combined with the baby boom generation coming of age in the fifties and sixties, meant that 4-year institutions could not accommodate the growth in applicants during these decades. The need to alleviate this demand for postsecondary education resulted in approximately 1,000 public community colleges beginning operations within a span of less than 50 years (Cohen & Brawer, 2008). In the end, community colleges served an important role by helping to meet the nation's demand for higher education. Community colleges in rural and suburban communities experienced growth after the creation of a federally funded highway system that allowed students to drive to campus (Cohen & Brawer, 2008). Many colleges were built near highways to allow easier access for potential students; indeed, the highway system helped give community colleges their community character. For instance, the community college system in North Carolina was specifically designed so that residents of the state would not have to travel more than 30 minutes to attend one of the 58 community colleges within the state (North Carolina Community College System, 2006). Moreover, a study by Horn and Nevill (2006) concluded that 96% of community college students nationwide travel a median of 10 miles between their homes and campus.
The G.I. Bill, coupled with a booming economy after World War II, also helped to slowly shift the mission of community colleges. Gradually, the emphasis on the study of the liberal arts was replaced with an emphasis on three areas: academics, vocations, and remediation (Cohen & Brawer, 2008). This gradual shift was acknowledged by Brint and Karabel (1989), who found that, beginning in the 1960s, community colleges gradually moved from a liberal arts emphasis to a vocational emphasis. According to Levin (2000), community college administrators responded to the desires of students and industry leaders to fill the need for skills training, and many community college curricula adjusted to the skill needs of the job market. Hanson (2017) argued that the community college movement away from the liberal arts, which served the broader social good, toward skills training, which served a narrower economic interest, took place with very little opposition. Undeniably, during the 1990s, curriculum modifications took place in subject areas to emphasize remedial skills and the skill needs of the local business community (Levin, 2000). Thus, the academic culture in community colleges was gradually replaced by a business and economic philosophy dictated by the needs of the local economy. Given that community colleges are primarily funded with state and local tax dollars, the state and local business communities were able to influence how public funds for higher education were spent. Today, there are fewer public 2-year colleges devoted to the study of the liberal arts. Furthermore, with the declining number of private liberal arts 4-year colleges nationwide, American college students have fewer options for the study of public service, character development, and an overall general education (Hanson, 2017).
Presently, national conversations about the purpose of higher education have placed renewed focus on the dual purposes of community colleges: providing an affordable academic education that can be transferred to 4-year colleges and developing a workforce to immediately bridge the skills gap. Community colleges do two things that are exceptionally important. First, they provide direct skills for workforce needs, ranging from healthcare jobs to high-tech manufacturing; these programs prepare students for a range of careers in specific fields. Second, community colleges provide an on-ramp to a 4-year degree (Mullin, 2012), in many cases acting as the first 2 years of a 4-year program. Hence, the dual history of community colleges has led to a twofold purpose. Because community colleges have proven capable of adapting to the changing educational landscape, they are able to offer students employable technical skills as well as the general education and critical learning skills essential to their economic and social engagement.
Current Community College Realities

Community colleges differ from universities in their admission policies. They have an open-door policy, and students enter whenever they have a desire, need,
or interest to change their life circumstances. Today, community colleges have gained attention for their potential to increase the nation's economic competitiveness and the standard of living of its people (Boggs, 2008). Community colleges have an obligation to provide education, and, at times, that means meeting the social, emotional, and physical needs of their students. According to deBary (1975):

The educational needs of students cannot be considered in isolation from the needs of the society in which they live, which nurtures and subsidizes them and which justifiably expects that they will be active, mature, and responsible citizens. Indeed, the chief justification for the community’s underwriting the immense costs of universal access to higher education is not that it will increase earning power or social status or even provide enjoyment but that it will enhance the prospects of developing an intelligent and responsible citizenry. (p. 29)
An analysis of fall 2016 enrollment data indicated that of the 12.1 million students in community colleges, 59% (7.1 million) were enrolled in credit courses and 41% (5 million) were enrolled in non-credit courses (AACC, 2018). Non-credit courses are not part of a curriculum credential and are usually taken for personal development or skills enhancement. Furthermore, the data indicated that of the students enrolled in credit-bearing courses, 37% attended on a full-time basis, while 63% attended on a part-time basis. Moreover, the data revealed that the majority of credit-seeking community college students were female (56%), with male enrollment (44%) continuing to decline. The 2016 data also revealed that 36% of community college students were first-generation college students and 17% were single parents. According to data from the National Center for Education Statistics (NCES) on U.S. community college student demographics for the 2011–2012 academic year, the average age of a community college student was 28, with a median age of 24 (AACC, 2018). In addition, 51% of community college students were 21 years old or younger, 39% were between the ages of 22 and 39, and 10% were 40 or older. Regarding race and ethnicity, the AACC (2018) reported that 47% of community college students self-identified as White, whereas 24% identified as Hispanic. The data further revealed that Black students represented 13% of the community college student population, Asian/Pacific Islander students 6%, and Native American students 1%. Two-year colleges award associate degrees after a student completes 2 years of a prescribed program of study. Conventional types of associate degrees include associate of arts (AA) degrees, associate of science (AS) degrees, and associate of applied science (AAS) degrees.
While the AAS degree is generally considered a terminal degree, both the AA and the AS are intended primarily for transfer to 4-year senior institutions. The programs of study for the transfer degrees (AA and AS) consist primarily of general education courses. However, when 2-year colleges shifted their focus away from the AA
and AS degrees, which prepare students for continued studies and the baccalaureate, our education system became more tiered. Top-ranked 4-year institutions continue to serve the sons and daughters of the socioeconomically privileged, and those students continue to receive an education in subjects such as music, history, and physics. At the same time, lower-middle-class students attend 2-year schools where they earn AAS degrees while learning a set of skills of immediate use to businesses (Hanson, 2017). Today, most 4-year schools are committed to providing 2 years' worth of general education prior to immersion in a specific field of study. Meanwhile, a growing number of community college students earn applied associate degrees in programs that contain fewer general education courses (Hanson, 2017).
Student Learning Outcomes

Colleges and universities are increasingly being asked to prove how they are making a difference in the lives of their students. Consequently, community colleges need to be able to demonstrate how their students contribute to the economic development of their local communities and states. To do this, community colleges increasingly validate the learning outcomes of their students. For technical skills, many student learning outcomes can be assessed through pass rates on external licensure and certification exams. For example, in fields such as aviation, cosmetology, mechanics, and nursing, student learning outcomes are measured by the percentage of students who pass the licensure exam on the first attempt. Conversely, measuring student learning outcomes in general education courses can be more challenging. Bresciani, Gardner, and Hickmott (2010) suggested that many community college faculty members do not trust professionals outside of academia to assess student learning outcomes. Additionally, Cohen and Brawer (2008) asserted that “the way colleges are organized leads most staff members to resist measurement of learning outcomes” (p. 214). This resistance is due, in part, to concerns about how assessment results are determined and how they will be used. Historically, the setting of standards and the assessment of those standards have been the responsibility of classroom faculty members. However, with the public demand for more accountability, many states are moving toward an assessment matrix that includes program retention and completion. Given the demographic and socioeconomic make-up of the current community college student population, life issues often hinder student completion rates when compared to students at 4-year institutions.
In fact, according to the National Center for Education Statistics, the completion rate for community college students was only 20%, compared with a 57% completion rate for public 4-year postsecondary institutions (Snyder & Dillow, 2015). While most states continue to fund their community colleges based on enrollment, there is a gradual shift toward including retention and completion in the funding model (Moore Gardner, Kline, & Bresciani, 2014).
According to the Association of American Colleges and Universities (2009), there appears to be a shift from solely distributional funding models toward models that combine distributive features with more integrative elements. Sixty percent of community college students receive some form of financial aid (AACC, 2013), and about half of the nation's college students are educated at a community college (Nunley, Bers, & Manning, 2011). As a result, state funding allocations for operating and capital budgets place community colleges in an environment of accountability. Accrediting agencies are clear about the purpose of assessment: it should demonstrate how institutions are improving both teaching and learning (Middaugh, 2009). This culture of accountability and assessment places community colleges under significant pressure to meet increasingly high external demands.
Purpose and Process of General Education

The overall purpose of general education is to help students achieve success in their chosen major fields, career choices, and jobs. It is important to note that there is no one-size-fits-all model of general education that works at all colleges; implementation, syllabi, assignment design, instruction, the gathering of artifacts, and so forth differ from one institution to the next. The term general education embodies a socially constructed reality; general education courses provide students with knowledge of cultures and of the physical and natural worlds (e.g., science, mathematics, social sciences, humanities, histories, and languages), critical and creative thinking, personal and social responsibility (both local and global), and integrative learning. While the courses taught are similar in name and description, the types of students vary among community colleges and universities. According to Cohen and Brawer (2003), general education as a component of postsecondary education at community colleges where career and technical credentials are awarded has received more significant emphasis in the United States than in other parts of the world. Today, even career and technical programs at community colleges include a core set of general education courses alongside career-specific courses or modules (Stumpf, 2007). For example, a student seeking an associate degree in nursing will also complete English, math, psychology, and sociology courses; the same applies to welding and business students. Community colleges have designed a core curriculum of general education courses for every degree or certificate, with the most general education credits required for an AA or AS degree and significantly fewer for an AAS degree. Therefore, at community colleges, students must complete a substantial number of general education courses in order to graduate.
For those seeking a baccalaureate-level degree, general education courses are their foundation courses.
General Education Assessment

General education is the foundation for undergraduate education, so the assessment of general education has the potential to encompass the entire institution. The focus is on student learning, but the assessment, at times, is quite challenging. General education assessment differs from single-program assessment largely because the classes being assessed are taught throughout the campus by various faculty, not necessarily by the person in charge of collecting data. It requires the coordination and cooperation of many faculty members and departments. Consequently, general education leaders often must advocate for support while feeling as though they have all of the responsibility but no authority over the faculty of the various programs (Allen, 2006). Additionally, a share of courses are taught by part-time faculty members, who generally teach in the evening and therefore typically have fewer interactions with other faculty members regarding student learning and assessment (Nunley et al., 2011). Thus, colleges need to embrace a variety of ways to focus their general education assessments. Many general education leaders in community colleges use course-embedded assessments in which faculty collaborate with an institutional leader. An alignment matrix can be used to provide cohesion, discussion, and orientation, ensuring that individual courses and assignments are aligned with the program's general education outcomes. There is little consistency in the assessment of general education courses, and there is some debate as to exactly which courses count as general education courses. This lack of consistency in definition and assessment makes the assessment of community college general education courses difficult.
In the North Carolina Community College System, there is general agreement that all college-level English and college-level math courses are considered general education courses. Nationwide, students enrolled in AAS degree programs are usually required to take at least one college-level English course and one college-level math course. Therefore, nationwide, college-level English and math courses should enroll both transfer students (AA and AS) and career and technical students (AAS).
English and Math in North Carolina Community Colleges

Having one of the larger community college systems in the nation, the North Carolina Community College System (NCCCS) compiles student academic data from all 58 community colleges in the state. Specifically, NCCCS compiles data on the number of students from each college who were successful in all college-level English and math classes. Since community colleges generally subscribe to the notion that English and math courses constitute general education courses, an investigation of these two areas highlights success in student outcomes based on college size (i.e., rural vs. urban), student
age, student gender, and student race/ethnicity. Here, success is defined as earning a final course grade of A, B, or C in any college-level (i.e., non-developmental) English or math course. Because North Carolina uses a common course library, all English and math courses taught at each of the 58 community colleges have the exact same course content, making statewide comparison of these assessments possible.
Student Success Rate in College-Level Courses

Student success was defined as the percentage of first-time associate degree–seeking or transfer pathway students passing a credit-bearing English or math course with a C or better within 2 years of their first term of enrollment. The population under review included first-time fall 2015 curriculum students who were enrolled in an associate degree program or a transfer pathway program (i.e., their curriculum code begins with an A or P) during the fall of 2015. Of these students, course success was determined as a grade of C or better in at least one credit-bearing English or math course (not including the lab record) during their first 2 academic years (through the end of the summer 2017 term). In both cases, the data sources were the Curriculum Registration, Progress, Financial Aid Report (CRPFAR) data file and the National Student Clearinghouse (NSC).
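For readers working with similar cohort files, the success-rate rule above can be sketched in a few lines of pandas. This is an illustrative sketch, not the NCCCS implementation: the column names (student_id, curriculum_code, course_prefix, grade) and the integer term encoding (year times 10 plus season, e.g., summer 2017 = 20172) are assumptions, not the actual CRPFAR or NSC file layouts.

```python
# Illustrative sketch of the success-rate rule; column names and the term
# encoding (year*10 + season, e.g., summer 2017 -> 20172) are assumed.
import pandas as pd

def english_math_success_rate(students: pd.DataFrame, grades: pd.DataFrame) -> float:
    """Percent of first-time fall 2015 associate/transfer students earning a
    C or better in a credit-bearing English or math course within 2 years."""
    # Cohort: curriculum code begins with A (associate) or P (transfer pathway)
    cohort = students[students["curriculum_code"].str[:1].isin(["A", "P"])]
    # Qualifying passes: English/math course, grade C or better,
    # earned on or before the summer 2017 term
    passing = grades[
        grades["course_prefix"].isin(["ENG", "MAT"])
        & grades["grade"].isin(["A", "B", "C"])
        & (grades["term_code"] <= 20172)
    ]
    return round(100 * cohort["student_id"].isin(passing["student_id"]).mean(), 1)
```

With a toy cohort of three eligible students, one of whom passes English with an A in fall 2015, the function returns 33.3.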
First-Year Progression

First-year progression was defined as the percentage of first-time fall curriculum students attempting at least 12 credit hours who successfully completed at least 12 hours within their first academic year (i.e., fall, spring, and summer). This examination included first-time fall 2016 curriculum students attempting at least 12 hours during the 2017 academic year (i.e., fall 2016, spring 2017, summer 2017). Hours attempted were calculated for all courses, including developmental courses and withdrawals, in which the student earned a standard letter grade of A, B, C, D, F, P, or W. The number of hours attempted did not include courses in which the student earned a standard letter grade of AU (audit), CE (credit by exam), I or IP (incomplete), O (other), or U (unknown). Successful students completed at least 12 hours, including developmental courses, with a standard letter grade of A, B, C, or P within their first academic year.
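The progression rule lends itself to a similar sketch. Again, this is illustrative rather than the NCCCS implementation: each row is assumed to be one course record for a first-time fall student's first academic year, and the column names (student_id, hours, grade) are our own.

```python
# Illustrative sketch of the first-year progression rule; one row per
# course record, column names assumed.
import pandas as pd

ATTEMPTED_GRADES = {"A", "B", "C", "D", "F", "P", "W"}  # count toward hours attempted
COMPLETED_GRADES = {"A", "B", "C", "P"}                 # count toward hours completed
# Grades AU, CE, I/IP, O, and U fall outside ATTEMPTED_GRADES and are ignored.

def first_year_progression(records: pd.DataFrame) -> float:
    """Percent of students attempting >= 12 credit hours who completed
    >= 12 hours, developmental courses included."""
    attempted = (records[records["grade"].isin(ATTEMPTED_GRADES)]
                 .groupby("student_id")["hours"].sum())
    completed = (records[records["grade"].isin(COMPLETED_GRADES)]
                 .groupby("student_id")["hours"].sum())
    cohort = attempted[attempted >= 12].index
    progressed = completed.reindex(cohort, fill_value=0) >= 12
    return round(100 * progressed.mean(), 1)
```

Note that a W or F counts toward hours attempted but not hours completed, which is what allows a full-time student to fall short of progression.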
First-Time Fall Cohort Definition

First-time cohorts were fall credential-seeking and dual enrollment (e.g., Career and College Promise) students enrolled in curriculum courses at a North Carolina community college for the first time. Fall first-time students were identified based on the following criteria: fall semester was the first curriculum enrollment term at any North Carolina community college; the first-term curriculum code was not basic skills plus or special credit; and there was no previous postsecondary enrollment, as verified from
the National Student Clearinghouse, before the start date of the fall semester (i.e., August 2015). The fall first-time cohorts were used to determine the student success rate in college-level English courses, the student success rate in college-level math courses, and first-year progression. Data were provided by the NCCCS and published in the 2018 Performance Measures for Student Success. Fortunately, the data were presented in a format that allowed for analysis by age, gender, and race. The 58 North Carolina community colleges were divided into three broad categories based on their fall 2015 headcount enrollment: (a) small, (b) medium, and (c) large. The stratification was arbitrary but provided a basis for comparing success in the English and math general education courses as well as first-year progression. Small colleges were defined as those with a fall semester enrollment of 5,000 or fewer curriculum students; 47 of the 58 community colleges met this definition (n = 47). Medium-sized colleges were defined as those with enrollments greater than 5,000 but less than 10,000; seven colleges fell into this category (n = 7). Large community colleges were defined as those with a fall 2015 curriculum enrollment of 10,000 or more students; four colleges met this definition (n = 4). In general, the smaller colleges are mostly located in rural communities, the medium-sized colleges in suburban communities, and the larger colleges in communities that are urban in nature. The total enrollment was 110,271 students at the small colleges, 48,934 at the medium-sized colleges, and 63,461 at the large colleges.
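The stratification described above amounts to a simple binning rule, which can be sketched with pandas. The boundaries follow the chapter (5,000 or fewer = small; 5,001 to 9,999 = medium; 10,000 or more = large); the function name is our own.

```python
# Bin fall headcounts into the chapter's size categories:
# <= 5,000 = small; 5,001-9,999 = medium; >= 10,000 = large.
import pandas as pd

def size_category(fall_headcount: pd.Series) -> pd.Series:
    return pd.cut(
        fall_headcount,
        bins=[0, 5000, 9999, float("inf")],  # right-inclusive bin edges
        labels=["small", "medium", "large"],
    )
```

Applied to the 58 colleges' fall 2015 headcounts, such a rule would reproduce the 47/7/4 split reported above.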
North Carolina is generally considered a rural state; therefore, it was not surprising that the largest share of community college students were enrolled at more rural colleges with smaller enrollments. Smaller enrollments usually mean smaller class sizes and smaller faculty-to-student ratios, so it was expected that, all other things being equal, student success rates would be higher at smaller colleges. Based on the data summarized in Figure 4.1, community college students age 15 years and younger (i.e., dual-enrolled and early college students) who enrolled in the larger colleges performed better in college-level English courses than students of the same age range who enrolled in small and medium-sized colleges. Traditional freshman-aged students (18) had the highest success rate of all age groups, regardless of college size. Similarly, older students (40+) generally had less success in college-level English classes. Interestingly, students aged 18 and younger generally performed better than older students in college-level math courses (as seen in Figure 4.2). One possible explanation for this result could be that high school–age students enrolled in college math classes are usually high-performing students at the top of their high school class. Students aged 19 and older may include a higher percentage of academically weaker
students or students who have been away from academic study because of life demands. The data further indicated that students who were 40 and older and enrolled in the smaller colleges performed better than their counterparts enrolled at the larger colleges. These results are consistent with the literature that indicates that older community
Figure 4.1. Success Rates in College-Level English Courses (by student age and college size)

Student age    Small    Medium   Large
–15            46.5%    38.3%    56.3%
16–17          54.9%    55.1%    54.3%
18             60.8%    64.6%    59.3%
19–24          46.5%    48.3%    47.5%
25–39          48.1%    48.6%    49.5%
40+            45.7%    39.4%    41.0%

Figure 4.2. Success Rates in College-Level Math Courses (by student age and college size)

Student age    Small    Medium   Large
–15            25.6%    39.7%    46.0%
16–17          45.0%    39.4%    37.0%
18             37.6%    35.3%    35.3%
19–24          21.4%    22.1%    25.3%
25–39          17.4%    18.7%    22.0%
40+            17.9%    14.1%    16.8%
college students often perform lower than traditional-age students because of family and employment obligations. As summarized in Figure 4.3, the data indicated that African American male and female students generally performed better at smaller colleges. Consistent with the literature, African American female community college students performed better than their male counterparts regardless of college size. Similarly, Caucasian female students enrolled in North Carolina community colleges performed better than their male counterparts; however, the gap between male and female Caucasian students was smaller at the smaller colleges. Likewise, Hispanic female students performed better than Hispanic male students. Still, it is interesting to note that Hispanic students appeared to perform better at small and medium-sized community colleges than at the larger community colleges. Regrettably, the provided data did not separately designate Native American and Asian students, the other two sizable racial groups enrolled in North Carolina community colleges. It is therefore assumed that, along with other racial groups, Asian and Native American students were consolidated into the category classified as “other.” Again, female “other” students performed better than male “other” students. However, unlike the other gender and racial categories, female “other” students performed best at the larger colleges rather than at the smaller or medium-sized colleges. Based on the success rates in North Carolina community college English and math courses, the data indicated that the age of the student has some relationship to student
Figure 4.3. First-Year Progression Rates (by race/gender and college size)

                           Small    Medium   Large
African American Female    63.3%    49.1%    55.5%
African American Male      54.4%    45.9%    46.5%
Caucasian Female           76.4%    75.4%    72.3%
Caucasian Male             74.1%    70.4%    69.8%
Hispanic Female            73.0%    74.3%    69.8%
Hispanic Male              70.4%    65.3%    62.5%
Other Female               72.4%    71.7%    75.8%
Other Male                 69.6%    68.6%    66.0%
learning outcomes, with younger students generally performing better than older students. Since English and math are accepted as critical general education courses in North Carolina, a supposition can be made that student age may also relate to success in other general education courses. In the analysis of these two general education subjects, when evaluating student age in relation to college size, age appeared to be the more important variable. However, in the analysis of first-year progression, when evaluating the influence of race and gender on community college student success, college size appears to be a factor worthy of consideration. While most students performed better at small colleges, the data indicated that college size (and presumably urban/rural location) appeared to correlate more strongly with student learning outcomes among African American and Hispanic students. The analysis also indicated that female students, regardless of race, generally progressed at a better rate than their male counterparts. These findings suggest that college size (rural vs. urban), student age, student gender, and student race and ethnicity correlate with student success in community college general education courses in North Carolina. It is therefore possible that such correlations exist in other states, and possibly nationwide; further investigation of this hypothesis is needed. Community college administrators can use these findings to identify which demographic groups of students are likely to benefit from early intervention in general education courses and plan accordingly.
Accreditation and Assessment

North Carolina colleges earn accreditation from the Southern Association of Colleges and Schools Commission on Colleges (SACSCOC), often simply referred to as SACS. The overarching purpose of SACS is “to assure the educational quality and improve the effectiveness of its member institutions” (Southern Association of Colleges and Schools Commission on Colleges, 2018). Community colleges, like other institutions, face the challenge of maintaining their accreditation, which evaluates areas beyond student learning, while also assessing their programs for student learning success. Despite these challenges, assessment does not need to be complicated; used correctly, it is a powerful tool for improvement. After all, assessment of student learning is integral to determining what institutions and faculty members, as a whole, can do to improve learning. Nonetheless, according to Nichols and Nichols (2001), assessment of general education is the most complex and “controversial assessment area on any campus” (p. 12). This is largely due to the size of general education programs as a part of the curriculum, regional accrediting association attention, and state accountability requirements. One of the most complicated aspects of general education is that no two colleges collect and assess data in the same way, nor is the evaluation process the same for all colleges. Many community colleges rely on embedded assessment for their direct assessment
the impact: two-year institutions
data. A matrix can be created that maps the outcomes along a timeline indicating when data will be collected for a specific course or semester. Embedded assessment data can be based on a variety of activities, such as exams, parts of exams, oral presentations, fieldwork activities, portfolios, community service learning, group projects, learning journals, in-class writing assignments, and general education capstone projects. These capstone projects can be assessed for critical thinking skills, communication, and content. For community colleges that have outcomes related to community service or civic engagement, students can reflect on their experiences, and field supervisors can provide direct feedback about student learning during the process. It is important to note that the general education process begins the moment a student enrolls in a community college; data collection and assessment practices should run in parallel. There is also indirect assessment of general education learning outcomes. This process draws on students' opinions, which can offer insight into what is learned in direct assessment studies. Colleges can use surveys, interviews, or focus groups to collect self-assessments from students, alumni, employers, and field supervisors. Questions in these assessments are designed to examine the affective impact (e.g., "How did your advisor help you make academic or personal decisions?"), cognitive impact (e.g., "Describe your first registration experience"), social impact (e.g., "How did your advisor explain the different formats in which courses are offered?"), and developmental impact (e.g., "How did your advisor help you understand options, resources, and opportunities?") of community college processes.
Future Outlook
According to Supapidhayakul (2011), general education and learning practices are concerned with knowledge and skills because of increasing complexity in competition, social gaps, technology, and natural fluctuation. It is therefore imperative that general education leaders receive the support they need from administration to ensure that evaluated content is completed properly by the faculty. According to Hanstedt (2012), a key component is making sure that students demonstrate a familiarity with appropriate knowledge and methodologies. The general education leader and the faculty member must collaborate to ensure that the evaluation standards of an assignment are appropriate for an introductory-level course. Faculty also have a responsibility to deliver the content of an undergraduate course in a way that can feed into upper-division courses. Because general education assessment begins with training faculty and staff in assessment planning, the design of the subject being assessed, and the analysis of assessment results (Allen, 2006), there are specific recommendations for community college administrators and faculty to consider. Within the institution's administration, there should be a designated individual responsible for overseeing general education assessment and
adherence to learning outcomes. This must be made a priority on campus because the selected individual will need the cooperation of the assessment team, administration, and faculty. There should also be a committee in place for assessing the collected artifacts, with the dean or director of general education maintaining responsibility for obtaining and tracking those artifacts. Administrators should also be prepared to provide the resources needed for assessment and change, as well as to request participation from institutional research departments. The general education committee or team should make general education assessment and data collection a priority, and its members should work collaboratively with faculty to develop goals, policies, procedures, and outcomes. The team should also develop a matrix that details the data collection and assessment process along with the implementation plan; here, a timeline would be invaluable. Members should share assessment results and support the changes that need to be implemented, and the team must always document the use of results thoroughly. All faculty members should define and understand the learning outcomes and goals of the program. They should align courses and assignments with the general education outcomes and cooperate with the general education leader and team. When needed, faculty members should serve on the general education assessment team. In addition, staff members should provide support for student learning and be aware of the general education process. They should collaborate with faculty to implement changes and should be available when needed to assist in designing and supporting student learning plans.
Conclusion
Appropriate and timely planning for a general education program and its assessment is a key component of a program's success. Institutions must determine which outcomes or learning goals are essential to student success. Those general education learning outcomes must also align with each course or set of courses determined to be necessary in the overall general education program. By agreeing on outcomes and the corresponding courses that meet those outcomes, institutions develop a common understanding of what success looks like in general education programs. This common understanding, along with an established plan and timeline for assessment activities, can be useful for creating a culture of transparency in the institution. Because every assessment plan will require revision, results must be documented concisely and meticulously. If a method did not work, there should be reported evidence that indicates as much. For example, not every student artifact may be suitable for assessing general education outcomes. Those in charge of assessment should consider only those methods of assessment or representations of student work that are reflective of the student outcomes. Tools such as rubrics and embedded assignments,
developed by both faculty and the assessment team, can make the assessment process more efficient and consistent. Collaboration among faculty, staff, and administrators can be the determining factor in the success of any assessment strategy. If the strategy is not cohesive, incorporating multiple viewpoints and considerations, then the chances of producing usable, meaningful results that reflect the entire institution diminish. Faculty time may already be stretched thin by institutional demands; being cognizant of the effort necessary to perform assessment tasks can help committees and administrators find intentional ways to involve faculty without placing excessive demands on their time. In addition, having the appropriate buy-in and engagement from administration can ease constraints on the resources available and the improvements that can be made. General education is the responsibility of everyone on campus. Assessment, when completed correctly, is a key component of successful student learning.
References
Allen, M. J. (2006). Assessing general education programs. San Francisco, CA: Anker.
American Association of Community Colleges. (2013, August 20). Nearly 60% of two-year students get financial aid. Community College Daily. Retrieved from http://www.ccdaily.com
American Association of Community Colleges. (2018). AACC fast facts 2018. Retrieved from https://www.aacc.nche.edu/research-trends/fast-facts/
Association of American Colleges and Universities. (2009). Talking points: AAC&U 2009 member survey findings. Retrieved from https://aacu.org/about/membership/surveytalkingpoints
Boggs, G. (2008). Preface. In G. Boggs, P. A. Elsner, & J. T. Irwin (Eds.), Global development of community colleges, technical colleges, and further education programs (pp. ix–x). Washington, DC: Community College Press.
Bogue, J. P. (1950). The community college. New York, NY: McGraw-Hill.
Bresciani, M. J., Gardner, M. M., & Hickmott, J. (2010). Demonstrating student success: A practical guide to outcomes-based assessment of learning and development in student affairs. Sterling, VA: Stylus.
Brint, S., & Karabel, J. (1989). The diverted dream: Community colleges and the promise of educational opportunity in America, 1900–1985. New York, NY: Oxford University Press.
Cohen, A. M., & Brawer, F. B. (2003). The American community college (4th ed.). San Francisco, CA: Jossey-Bass.
Cohen, A. M., & Brawer, F. B. (2008). The American community college (5th ed.). San Francisco, CA: Jossey-Bass.
deBary, W. T. (1975). General education and the university crisis. In S. Hook, P. Kurtz, & M. Todorovich (Eds.), The philosophy of the curriculum: The need for general education (pp. 3–37). Buffalo, NY: Prometheus Books.
Hanson, C. (2017). The community college and the good society: How the liberal arts were undermined and what we can do to bring them back. New York, NY: Routledge.
Hanstedt, P. (2012). General education essentials: A guide for college faculty. San Francisco, CA: Jossey-Bass.
Horn, L., & Nevill, S. (2006). Profile of undergraduates in U.S. postsecondary education institutions: 2003–04: With a special analysis of community college students (NCES 2006-184). Washington, DC: National Center for Education Statistics, U.S. Department of Education.
Levin, J. S. (2000). The revised institution: The community college mission at the end of the twentieth century. Community College Review, 28(2), 1–25. doi:10.1177/009155210002800201
Middaugh, M. (2009). Closing the loop: Linking planning and assessment—What can be done about the disconnect on most campuses between planning processes and assessment processes. Planning for Higher Education, 39(3), 5–14.
Moore Gardner, M., Kline, K., & Bresciani, M. (2014). Assessing student learning in the community and two-year college. Sterling, VA: Stylus.
Mullin, C. M. (2012, October). Transfer: An indispensable part of the community college mission (Policy Brief 2012-03PBL). Washington, DC: American Association of Community Colleges.
Nichols, J. O., & Nichols, K. W. (2001). General education assessment for improvement of student academic achievement: Guidance for academic departments and committees. New York, NY: Agathon Press.
North Carolina Community College System. (2006). A matter of facts: The North Carolina community college system fact book 2006. Raleigh, NC: Author. Retrieved from http://digital.ncdcr.gov/cdm/fullbrowser/collection/p249901coll22/id/18392/rv/compoundobject/cpd/18394/rec/8
Nunley, C. R., Bers, T. H., & Manning, T. (2011). Learning outcomes assessment in community colleges (NILOA Occasional Paper No. 10). Urbana, IL: University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment. Retrieved from http://www.learningoutcomeassessment.org/documents/CommunityCollege.pdf
Snyder, T. D., & Dillow, S. A. (2015). Digest of education statistics 2013 (NCES 2015-011). Washington, DC: National Center for Education Statistics, Institute of Education Sciences, U.S. Department of Education.
Southern Association of Colleges and Schools Commission on Colleges. (2018). Southern Association of Colleges and Schools Commission on Colleges. Retrieved from http://www.sacscoc.org
Stumpf, J. M. (2007). Meeting the needs: Does technical college education meet the needs of employers (Doctoral dissertation). Retrieved from ProQuest Digital Dissertations. (AAT 3256857)
Supapidhayakul, S. (2011). The basic principles for teaching in general education courses. Thai Journal of General Education, 2(2), 41–45.
Chapter 5
The Impact: Four-Year Institutions Kristen L. Tarantino and Yue Adam Shen
Institutions of higher education in the United States have a long history that supports both the development and variety of curricula designed to meet the general education needs of students. General education in the 4-year institution, however, has not always been part of the curriculum. With roots as early as the 18th century, general education curricula have evolved over time as a result of numerous forces, including political motivations, government funding, the influence of science and technology, and student demographics. As various approaches to general education development have emerged, higher education, particularly among 4-year institutions, has also sought to verify the impact of these programs on student learning. Facing the changing knowledge and skill needs of the future, general education, previously built on the fundamental ideologies of a traditional liberal arts education, has shifted its focus toward preparing students to be the next generation of knowledge creators and curators. The different institutional types among 4-year institutions call for different approaches and practices, both in curriculum development and in assessing the effectiveness of general education. This chapter navigates the historical roots that have long characterized American higher education to support a discussion of the current narrative of general education in the United States. Further, it details the challenges inherent in developing curricula and assessment practices in 4-year institutions and reviews current research on the impact of general education on student learning at these institutions.
Historical Roots of General Education
American higher education goes back to the first colonial colleges of the 17th century. Born out of a desire to provide moral and character development to young men of the elite, the curriculum focused primarily on the language of the classics. Between 1820 and 1860, the number of institutions increased, and the urge to educate students in more than just language moved models of education toward more humanist methods (Winterer, 1998). Through literature and the study of ancient civilizations, students gained opportunities to make connections between the ancient world and early America. Education became influenced by the shifting definition of democracy in the 19th century. Democracy, as a model for access to education, shifted higher education from an elitist view to one that supported offering education to the masses (Veysey, 1965). The creation of a free elective system at Harvard introduced the idea of a non-structured curriculum that allowed students to explore areas of interest. However, as the utilitarian mindset toward vocational training in undergraduate education grew, free electives were no longer viewed as having much purpose (Veysey, 1965). In fact, some institutions began restricting students from taking electives in the first 2 years of study, reserving electives to serve as preparatory courses for areas of future career development, such as medicine (Veysey, 1965). With the allure of the German research universities in the mid- to late 19th century, institutional leaders and faculty began to push for science in addition to the classics in the prescribed curriculum. The elective system served as a means for these institutions to offer such science courses without having them as formal courses in the curriculum.
This emphasis on research and science was only accentuated by the Morrill Act of 1862, which provided land in each state for the creation of at least one college that would provide education in agriculture and the mechanical arts, in addition to existing classical and scientific curricula (Carstensen, 1962). The expansion of science and of utilitarian views of education (i.e., that education should be practical and serve a specific purpose) edged out the original intent of higher education to provide character and moral development in favor of more comprehensive education. Tensions emerged between specialized education and providing a well-rounded education. The new institutions, built as a result of the Morrill Act and the influence of the German universities, protected specialized departments, particularly those associated with vocational aspirations (Veysey, 1965). This protection also led to the specialization of higher education among faculty. In previous models of higher education, faculty would often teach multiple courses in a variety of disciplines; as a result of this trend toward specialization, the multi-dimensional faculty member disappeared. In addition, newer departments based in the natural and social sciences established their legitimacy through the research they produced. What started in early-19th-century American higher education as a focus on liberal education began to shift toward the idea of general education at the turn of the 20th
century. The idea behind general education emerged from the work of individuals such as John Dewey, who felt that knowledge should be integrated to support engagement with contemporary issues (Jencks & Riesman, 1968; Miller, 1988). In contrast, liberal education had been focused on supporting the intellectual development of students. Over time, however, the movement toward Dewey’s notion of general education became less reflective of engagement with contemporary issues and more associated with a breadth of requirements (Brint, Proctor, Murphy, Turk-Bicakci, & Hanneman, 2009). In the 1940s and 1950s, colleges and universities adopted the idea of general education as a curricular model involving the organization of majors, electives, and distribution requirements (Rudolph, 1977). By the end of the 20th century, there was little consensus among higher education institutions when it came to the general education curriculum. Brint et al. (2009) reported four main models that existed at the end of the 20th century. The first model focuses on core distribution areas. In response to Harvard University’s free elective system, Yale University adopted a “curricular structure of concentration and distribution” (Brint et al., 2009, p. 608). The predominant core areas included the humanities, social sciences, and natural sciences. The second model of general education focused on the traditional liberal arts. Originating from the classical curriculum used by the colonial colleges (Rothblatt, 1988), this curricular model did not include natural or social sciences. Instead, this model focused primarily on areas such as literature, history, philosophy, and foreign language (Brint et al., 2009). By the 1960s, the traditional liberal arts model was predominantly used by denominational colleges (Jencks & Riesman, 1968). A third model identified by Brint et al. (2009) focused on cultures and ethics. 
These general education programs were developed by institutions in opposition to higher education's traditional focus on Western civilizations. Finally, a fourth model for general education emerged that focused on civic or utilitarian aims. These programs were developed in response to state and federal mandates and "focus[ed] on preparing students for civic and business life by exposing them to US government, business, and technology courses" (Brint et al., 2009, p. 609). As this history shows, the development and evolution of general education have been influenced by many trends and continue to be refined in the 21st century.
Drivers for General Education in the 21st Century
Historic conventions of higher education in the United States have shaped how institutions operate today. Though some of the traditions that have characterized the higher education landscape have continued into the present century, the mass expansion of higher education institutions has created variation in how general education programs are developed and how student learning is assessed. Even among 4-year institutions, the institutional size, public or private status of an institution,
and institutional mission can all shape how a general education program is structured, as well as how students are affected by those curricula. A closer look at each of these characteristics highlights the nuances that exist among 4-year institutions when considering general education programming and assessment for student learning.
Institutional Size
In the last century, the 4-year institution has undergone many changes. To this day, there remain 4-year liberal arts institutions that operate on their own, with little governance beyond their own boards. There are also large research universities equipped with medical or law schools, as well as colleges of business and education. These institutions may have national or international reputations for research, compete for the top students, and tend to enroll larger numbers of students. Some of these universities also belong to a network of state institutions, which operates via a multi-tiered governance structure, allowing for some flexibility in institutional decision making yet providing a sense of unity among member institutions. Wherever a 4-year institution falls on this continuum, its size can be a primary driver for the curriculum and assessment methods utilized in general education programming.
Single Liberal Arts Institution
Labeled here as liberal arts institutions, these colleges typically include only the traditional 4 years of undergraduate study. In rare cases, a 4-year liberal arts college may offer an optional fifth year for teaching programs. These institutions utilize many of the same decision-making procedures as a single, large university, such as faculty governance and board approvals. However, the balance of power in decision making can be vastly different. For example, in liberal arts institutions that house many tenured faculty members, faculty opinion in decision making may carry significant weight compared to a larger institution that relies primarily on adjunct faculty members. Coupled with the burden of securing faculty buy-in, liberal arts institutions may also specify particular requirements for assessment that rely on more qualitative methods (e.g., portfolios, student interviews). Having fewer students to assess, these institutions have the potential to utilize more grounded, qualitative assessments that require increased involvement from faculty. Assessment practices in liberal arts institutions also benefit from a curriculum and organizational structure that is typically less complex than a large university's, leaving more opportunity for innovative assessment methods. For example, Evergreen State College focuses on the interdisciplinary nature of coursework, allowing students to design their own course of study. Instead of the traditional grade point average, Evergreen students utilize self and faculty evaluations to
determine growth in learning (The Evergreen State College, 2018). Unfortunately, one disadvantage for institutions of a smaller size is the possibility of faculty less trained in assessment, as well as smaller or nonexistent offices for institutional assessment. Often these institutions have fewer resources, both in personnel and in funding, and lack the budgetary means to support a fully staffed assessment office or to provide professional development for faculty in learning assessment.
Single Large University
The single university mirrors many of the characteristics of the liberal arts institution when it comes to general education. Universities also support a 4-year undergraduate education yet differ organizationally from liberal arts institutions in that they may be coupled with professional or graduate schools. Generally, universities develop their own general education curriculum in collaboration with faculty, administrators, and institutional leaders. As with public liberal arts institutions, some states or accrediting agencies may dictate which outcomes should be included in the general education curriculum. The result is that there is no unified curriculum among institutions; every institution defines its own learning outcomes and curriculum for general education. As a "public Ivy" university whose history predates much of modern higher education, the College of William and Mary (W&M) has adopted a unique framework for general education assessment. Unlike other public research universities, W&M requires a college curriculum that is strictly defined and administered. All W&M undergraduate students, regardless of discipline and degree program, must follow a 4-year course plan that provides a "coherent liberal arts education" (The College of William & Mary [W&M], 2018a). In the first year of study, undergraduates explore big ideas and methods of inquiry, followed by courses in the second year that cover three disciplinary knowledge domains (natural world and quantitative reasoning; culture and the individual; and arts, letters, and values). As part of their third year of study, undergraduates are expected to engage in a study abroad/study away program or course equivalent. Finally, students participate in a capstone experience in their final year of study.
No alternative test or other demonstration of proficiency is allowed as a substitute for any of these course requirements, and, with the exception of transfer students, no courses other than W&M's own approved courses are allowed. This structure of the general education curriculum is designed to ensure a "coherent liberal arts education," under which courses at each level equip students with the knowledge and skills to support learning in courses at the next level (W&M, 2018b). The vertical structure of the general education program at W&M consciously brings forward the fundamental role of general education in the undergraduate program by paralleling the disciplinary or subject-specific courses throughout a student's career.
For institutions with larger student enrollments, the use of qualitative data for learning assessment becomes infeasible, leading these institutions to embrace more quantitative means of assessment. However, in some cases institutions may opt for both quantitative and qualitative measures, depending on the size and intention of the institution. For example, an institution may utilize the Collegiate Learning Assessment (CLA+) to measure student gains in critical thinking but also employ a rubric for specified assignments in individual classes to show clear examples of student gains. Universities will have to determine in which areas they can best utilize detailed assessments, such as rubrics, and in which areas a large-scale, standardized measure may be the best fit. Larger institutions also have the potential for increased personnel and funding resources. Depending on its size, a university may have both an assessment office and an institutional research office to assist with data. Although having more administrative support for student learning assessment can be beneficial, the increasing complexity of the organization and curriculum structure can prove challenging when developing procedures and appointing appropriate oversight. For example, where institutional leaders have to apportion funding to departments, they may be less concerned with measuring learning gains from general education than with utilizing funds to promote more financially productive departments (e.g., schools or colleges dedicated to the graduate study of business or law). In addition, these universities have higher numbers of faculty members with whom they can collaborate to measure student learning. In cases where there are state or accreditation reporting requirements, having more personnel, whether faculty or administrators, to support the assessment of student learning offers the opportunity to produce quicker and more comprehensive results.
Large State Systems
Typically seen at the state level, large systems of networked institutions (e.g., the University of California, the University of Maryland, the State University of New York) have several layers of governance that may inhibit or prescribe general education curricula. In general, there appear to be three primary options for determining general education curricula: the state system articulates general education requirements or learning outcomes, individual institutions articulate requirements or outcomes, or some combination of both. As part of the University of California (UC) system, which coordinates and standardizes many academic activities across its member institutions, the University of California, San Diego (UCSD); University of California, Santa Cruz (UCSC); University of California, Santa Barbara (UCSB); and University of California, Los Angeles (UCLA) share two general education requirements: entry-level writing and American history and institutions (University of California, n.d.). While there are alternative ways to satisfy these two requirements, such as advanced placement courses in high school, member
institutions either require or provide their own general education courses. Each of the four institutions discussed has incorporated these two requirements into its general education requirements but varies in its assessment methods and criteria. For example, UCLA raises the threshold for high school courses satisfying the American history and institutions requirement (i.e., earning a grade average of B or better) beyond what is required by the UC system (i.e., passing), whereas UCSB eliminates high school courses as an option to satisfy this requirement. In addition to the two mandated requirements, these UC universities also conform to a standardized general education program, which requires their undergraduate students to demonstrate skills in areas such as social science/humanities/cultures, arts/language arts, and mathematics/quantitative reasoning by achieving at least passing grades in at least one course that is pre-approved by faculty or academic committee(s). Although similarly positioned in a statewide university system, the University of Maryland, Baltimore County (UMBC), as a member institution, does not receive mandates from its system for structuring its general education program. Five functional competencies, or learning outcomes, are addressed by all general education courses: oral and written communication, scientific and quantitative reasoning, critical analysis and reasoning, technological competence, and information literacy (University of Maryland, Baltimore County [UMBC], 2005). Initiated in 2009, general education assessment at UMBC is unique in its multilayered, integrated nature (UMBC, 2010). First, every course that fulfills general education requirements goes through the assessment cycle on a yearly rotating schedule, measured by student achievement on the relevant general education learning outcomes.
Second, those general education learning goals are assessed at the departmental level, where each general education course, previously assessed individually, is reviewed again as part of a group on learning outcome performance. Lastly, at the institutional level, a designated committee is charged with reviewing the general education assessment data of all departments and synthesizing general education learning outcome data across departments with institutional data on student experience, enrollment, and degree outcomes. With regard to assessing student learning from general education, these systems must handle vast amounts of data. In most cases, assessment across a network of institutions will rely on quantitative data, particularly in the form of standardized measures. At institutions such as UMBC, which relies on direct assessment of student learning, collected data are often presented in aggregate form, precluding any direct link between learning and the individual student. However, from a systems perspective, institutions may be collecting data only on specific outcomes. For example, an institution may have multiple outcomes for general education, only one of which is mandated by the state system. These large systems look for key outcomes from all of their students, even if students are making learning gains in other areas as well. Another organizational benefit that stems from this networked view is that these institutions most likely have larger institutional research and/or assessment offices to assist
generally speaking
with data collection and analysis. Although it may seem an obvious conclusion given their size, these institutions need significant personnel resources to analyze and report assessment findings on given outcome measures. In addition, the number of faculty available to assist with data collection is also greater at these institutions.
Public Versus Private

The public or private status of an institution primarily affects who holds the power to make curricular changes. Within an institution's governance system, a number of factors play a role in which curriculum is approved and, as a result, how student learning is measured. Public institutions may face more constraints than private institutions; however, the considerations for each constraint are similar across institutions regardless of public/private status.
State-Mandated Outcomes

The writing is on the wall for higher education. Federal education standards have already made similar mandates in primary and secondary (i.e., K–12) education by creating a unified list of outcomes expected per grade level (i.e., the Common Core standards). Assuming that students graduate from high school having achieved the state- and federally required outcomes, there is little to stop state and federal governments from trying to prescribe similar outcomes for college and university students. In some ways, a degree of state mandate already exists for public institutions, requiring that institutions be accredited or enroll a certain percentage of in-state students in order to receive public funds. Accrediting bodies are responsible for ensuring that, to some degree, learning outcomes are being achieved. However, the outcomes themselves remain largely the domain of the particular institution. Private institutions may see less threat from state mandates, as their day-to-day functions are not contingent on public funding. As public support for higher education dwindles nationwide, there is certainly an argument in favor of letting institutions determine their own requirements.
Funding

Whereas public institutions receive the majority of their funding from public sources (i.e., state and federal entities) as well as student tuition, private institutions have managed to remain solvent through higher tuition rates and private donor funding. The importance of proper funding, whether for a public or private institution, lies in an institution's ability to support curriculum changes; to support an office for assessment; to enroll in large-scale, standardized measures of student learning; and to train faculty in developing learning outcomes and utilizing appropriate assessment measures. State funds may not
the impact: four-year institutions
be enough for public institutions to innovate in curriculum redesign or measurement, limiting an institution's ability to reinvent its general education curriculum. Although private funds would seem beneficial for producing innovation, there is an inherent risk that donor funds end up earmarked for other purposes. Tailoring fundraising approaches to highlight the general education curriculum or the need for faculty development in learning assessment can assist institutions in their quest to provide quality general education programming.
Competition

Higher education in the United States is awash with competition. Marked by standings in U.S. News and World Report as well as the federal College Scorecard, institutions seeking the best and brightest students are constantly trying to put their best foot forward. Although the algorithms involved in determining institutional rankings may leave out general education assessment, showcasing increased student learning from general education programming, especially since all 4-year institutions include some form of general education, would seem a worthy metric on which to capitalize.
Alumni and Governing Boards

In matters of institutional governance, pressure to include certain kinds of general education programming can come from governing boards as well as from alumni. For institutions that have alumni with financial stakes in the institution, an appeal to include or exclude specific courses can be tough to navigate. Other institutions may have reputations that influence their curricula. For example, Virginia Commonwealth University has a history of having one of the top arts colleges in the nation. In previous iterations of its general education curriculum, all undergraduate students, regardless of major, were required to take an art elective. In addition, institutions seeking to expand their role in the surrounding community may stress curricula that preserve town-and-gown relationships or support the surrounding community (e.g., service-learning courses). Private institutions may have programming that is geared toward sectarian ideals or that highlights a specific field or discipline. Steering the general education curriculum toward a specific discipline because of these governing forces reflects less on the importance of student learning and more on the influence of those in positions of power.
Institutional Mission

Unlike community colleges and other 2-year institutions, which were born out of a need to provide education and technological training in a specific community, 4-year
institutions have roots in classical education, seminary preparation, and character development. Though these goals morphed after the Morrill Act of 1862, which offered more practical and utilitarian versions of higher education, the missions and purposes of 4-year institutions remain widely varied. For specific institutional types (e.g., historically Black colleges and universities [HBCUs], tribal colleges, women's colleges, Hispanic-serving institutions), institutional missions often bear some relation to the unique populations the institution serves. In other instances, such as with denominational institutions, the mission conveys the general values held by the institution that govern its policies, curriculum, and practices. In determining general education requirements, an institution's mission can highlight those areas that the institution feels are most important to an undergraduate education. Research universities house forward-thinking and cutting-edge researchers and scholars devoted to the expansion of knowledge in their respective fields. As such, these institutions tend to have as their mission the advancement of knowledge through exploration of various topics. Liberal arts institutions, by contrast, may place a premium, above their research institution peers, on the advancement and prominence of the liberal arts disciplines in higher education. Reflecting on the current and innovative practices that these institutions apply to assess their general education programs and curricula, in particular, can shed light on the directions of general education and its assessment in 4-year institutions.
The Impact of General Education on Student Learning

Colleges and universities primarily perform student learning assessment at the program and institutional levels. In order to look for the impact of general education on student learning, a closer inspection of an individual institution's assessment is required. The challenge in such an approach is that, per previous discussions in this chapter, institutions all have different sizes, missions, and characteristics unique to each. Comparing institutions regarding the impact of general education, then, involves national surveys or data reporting. While the National Center for Education Statistics does not currently maintain student learning data, a national clearinghouse to share this information would be a significant move toward transparency in student learning in higher education. This suggestion, however, comes with some caveats. Along with the uniqueness of each institution, not all institutions subscribe to the same learning outcomes or distribution requirements when it comes to general education curricula. In addition, each institution's assessment method could vary, highlighting different learning gains based on the tool employed. Even though large-scale reports of student learning from general education are limited, some resources that attempt to measure student learning across the college experience do exist.
The Association of American Colleges and Universities (AAC&U), the National Institute for Learning Outcomes Assessment (NILOA), and the National Survey of Student Engagement (NSSE) are some of the organizational leaders in higher education learning assessment. These organizations provide resources for institutions to formulate their own assessment plans, including developing learning outcomes, and report national trends in the field of learning assessment. Although their primary focus is not on general education per se, these organizations highlight student learning as the goal of higher education. A recent study commissioned by the AAC&U revealed that students at 4-year institutions (65% at public 4-year institutions and 71% at private 4-year institutions) believe that both specific knowledge and skills, such as those developed through major courses, and broad knowledge and skills, such as those emphasized in general education curricula, are important to achieving long-term career success (Hart Research Associates, 2015). Looking further at career preparedness, 4-year students reported higher ratings of preparedness across most learning outcomes compared to 2-year students (Hart Research Associates, 2015). However, when exploring those outcomes more closely, students at private 4-year institutions reported being more optimistic about “critical thinking, [the] ability to locate, organize, and evaluate information from multiple sources, oral communication, complex problem solving, and the ability to solve problems with people from different backgrounds and cultures” compared to students at public 4-year institutions (Hart Research Associates, 2015, p. 20). Further research reveals that, for those institutions that belong to the AAC&U and that assess general education outcomes, the most common methods of assessment used are the application of rubrics to samples of student work and capstone projects (Hart Research Associates, 2016). 
In addition, among those AAC&U institutions that used rubrics, 42% reported using AAC&U's VALUE rubrics, and 58% reported having used the VALUE rubrics in the creation of an institutional rubric (Hart Research Associates, 2016). Although not indicative of higher education as a whole, these findings suggest that general education assessment is less diverse than expected, relying primarily on rubrics to demonstrate gains in student learning. Additionally, both studies highlight the lack of detailed information about student learning from general education specifically, focusing instead on trends in student learning in higher education at large.
Conclusion

The role of general education at 4-year institutions has evolved over time. Reflecting the belief that student learning has its foundation in the general education curriculum, the inclusion and development of general education programming is a fundamental element of higher education. The impact of such a curriculum depends on many institutional factors, including the size and type of 4-year institution where a student chooses to
enroll. General education programming and outcomes, though similar in many regards, will differ for students based on the institution, limiting national comparisons as a result. Although the impact of general education on student learning may not be generalizable beyond a given institution, the subsequent success of undergraduate students (e.g., graduation and career attainment) reflects the foundational skills and knowledge gained through their general education programming.
References

Brint, S., Proctor, K., Murphy, S. P., Turk-Bicakci, L., & Hanneman, R. A. (2009). General education models: Continuity and change in the U.S. undergraduate curriculum, 1975–2000. Journal of Higher Education, 80(6), 605–642. doi:10.1080/00221546.2009.11779037
Carstensen, V. (1962). A century of land-grant colleges. Journal of Higher Education, 33(2), 30–37. doi:10.1080/00221546.1962.11772900
Hart Research Associates. (2015). Optimistic about the future, but how well prepared? College students' views on college learning and career success. Washington, DC: Author.
Hart Research Associates. (2016). Trends in learning outcome assessment: Key findings from a survey among administrators at AAC&U member institutions. Washington, DC: Author.
Jencks, C., & Riesman, D. (1968). The academic revolution. New York, NY: Doubleday.
Miller, G. E. (1988). The meaning of general education: The emergence of a curriculum paradigm. New York, NY: Teachers College Press.
Rothblatt, S. (1988). General education on the American campus: A historical introduction in brief. In I. Westbury & A. C. Purves (Eds.), Cultural literacy and the idea of general education (pp. 9–28). Chicago, IL: National Society for the Study of Education.
Rudolph, F. (1977). Curriculum: A history of the American undergraduate course of study since 1636. San Francisco, CA: Jossey-Bass.
The College of William & Mary. (2018a). The college curriculum. Retrieved from https://www.wm.edu/as/undergraduate/coll/index.php
The College of William & Mary. (2018b). Assessing general education. Retrieved from https://www.wm.edu/offices/iae/assessing_general_education/index.php
The Evergreen State College. (2018). The evaluation process. Retrieved from https://evergreen.edu/evaluations/process
University of California. (n.d.). UC graduation requirements. Retrieved from http://admission.universityofcalifornia.edu/freshman/additional-requirements/index.html
University of Maryland, Baltimore County. (2005). UMBC general education functional competencies. Retrieved from https://provost.umbc.edu/files/2016/04/UMBC-GeneralEducation-Functional-Competencies-2005.pdf
University of Maryland, Baltimore County. (2010). General education and assessment: A streamlined process—University of Maryland Baltimore County. Retrieved from https://provost.umbc.edu/files/2016/04/UMBC-General-Education-Assessment-A-Streamlined-Process2010.pdf
Veysey, L. R. (1965). The emergence of the American university. Chicago, IL: University of Chicago Press.
Winterer, C. (1998). The humanist revolution in America, 1820–1860: Classical antiquity in the colleges. History of Higher Education Annual, 18(1998), 111–129.
Chapter 6
The Larger Impact: Culture and Society

Mary Kay Jordan-Fleming and Madeline J. Smith
The previous chapters have examined the impact of general education on student learning in colleges and universities; however, what is the larger impact of general education on culture and society? In other words, how do culture and society benefit from individuals who complete general education curricula in the 21st century? How does the structure of general education curricula contribute to this impact? Further, what are the gaps in our knowledge about the larger impact of general education? The primary objective of this chapter is to broaden our understanding of these topics in order to inform the future development of relevant and impactful general education curricula.
Assessing the Impact

To understand how general education impacts culture and society, we must first define impact in this context. Of course, we cannot directly measure causal or even correlational relationships between college students completing general education requirements and the impact of such completion on culture and society. However, we can examine what is known as the civic dimension of general education (The University of California Commission on General Education [CGE], 2007). More specifically, we can examine how this dimension positions students to contribute to the greater public good, which will serve as the definition of impact for the purposes of this chapter. According to the CGE (2007), the civic dimension includes four prongs: civic information, civic
search skills, appreciation of democratic values, and civic experience. Each of these prongs contributes to general education curricula in an interdisciplinary manner. With a definition of impact in place, it next becomes necessary to determine which institutions yield students and graduates who demonstrate such an impact. Washington Monthly, a nonprofit magazine that offers an alternative to the annual ranking of colleges and universities conducted by U.S. News and World Report, ranks institutions according to their contributions to the public good. These rankings reflect metrics such as the number of alumni who go on to serve in the Peace Corps, the percentage of work-study money that goes to community service, and voting engagement (Washington Monthly, 2018). In the sections that follow, we examine the civic dimension of the general education curricula across the highest-ranked institutions. Further, we discuss how these curricula position graduates to contribute to the public good based on the four prongs of the civic dimension. To reiterate, we do not intend to argue causation or correlation between institutions' general education curricula and their Washington Monthly rankings. Rather, we seek to identify and discuss the emergent attributes of general education curricula at institutions that have been recognized for their contributions to the greater public good. Of note, Washington Monthly is not the exclusive source for rankings of this type, but it serves as a suitable guide for the purposes of this chapter.
The Civic Dimension

As previously mentioned, the CGE (2007) defines the civic dimension of general education through four distinct prongs: civic information, civic search skills, appreciation of democratic values, and civic experience. The first of these prongs, civic information, entails "knowing something about American history and politics and current affairs, enough to be able to read a newspaper or to vote with some appreciation for what might be at stake in an election" (CGE, 2007, p. 27). College students may glean such information from general education courses in disciplines including history, political science, and sociology. Among national research universities, Harvard University is the top-ranked institution for contributing to the public good (Washington Monthly, 2018). A review of the civic dimension of Harvard's general education requirements reveals that the institution is currently in the midst of a transition to a revised curriculum that provides students with more flexibility in course selection and timeline for completion. One of Harvard's (2018) stated goals for the new curriculum is to "prepare students for civic engagement." Students have an opportunity to gain civic information through courses offered in required general education categories including civics and ethics as well as histories, societies, and individuals. Students can complete these courses at any time during their undergraduate education, which enables them to continue to gain exposure to civic information after their first and second years (Harvard University, 2018).
Exposure to civic information in Harvard’s general education curriculum prior to the transition also appears to have been robust, with course completion requirements in categories including culture and belief, societies of the world, and the United States in the world (Harvard University, 2018). The institution’s commitment to civic information is palpable at the graduate level as well, as evidenced through initiatives such as the Graduate School of Education’s Project Zero, which includes preparing youth in K–12 schools through civic education “not just for the voting booth but for deep engagement in their communities, with critical problems facing our world, offline and online” (Harvard Graduate School of Education, 2018). Although civic information made available through general education enables students to gain more awareness of culture, society, and their role in both, this information is only as useful to them as their ability to interpret it. Thus, civic search skills are also an essential prong of the civic dimension. Individuals have historically faced challenges finding information to assist in their civic decision making, in particular with regard to voting; however, with the rise of the Internet and proliferation of media outlets, citizens began to experience the opposite issue—information overload (CGE, 2007). In other words, the challenge became sorting through vast amounts of available information to determine what is most valid and reliable. According to the CGE (2007), “Civic education . . . should be oriented not only to information acquisition but also to the acquisition of skills and dispositions to enable life-long searching, sorting, and evaluation of information” (p. 27). Evidence of institutions teaching such skills is apparent in the civic dimension of one of Washington Monthly’s top-ranked baccalaureate college’s general education programs. 
At Goshen College, the Goshen core requires students to complete the first-year experience thread, the intercultural thread, and the perspectives courses thread—all of which contribute to the civic dimension. The first of the threads exposes students to critical reading and analysis as well as information literacy skills, while the second and third threads prepare students to engage in the increasingly information-filled 21st-century world and better understand knowledge creation, respectively (Goshen College, 2018). Similar to the structure of Harvard’s civic dimension, Goshen students can complete the latter two threads at any time during their undergraduate education. Additionally, students complete a study-service term as part of their general education requirements, in part to reinforce global awareness (Goshen College, 2018). Exposure to civic information and civic search skills can undoubtedly help position students to become positively contributing members of society. However, gaining an appreciation of democratic values “is a matter of learning to appreciate widely shared values and ideals of American civic life” (CGE, 2007, p. 28). Stated differently, students will likely be more invested in their roles as active citizens if they have a deeper understanding of and appreciation for the lessons that they learn in the civic dimension of the general education classroom. As noted by the CGE (2007), such appreciation is difficult to impart in the classroom, as faculty must strike a balance between encouraging free
thinking among students and reinforcing the importance of civic values and ideals. One institution from the liberal arts category of the Washington Monthly (2018) rankings that appears to be striking such a balance is Bowdoin College. Bowdoin structures its entire general education curriculum around the idea of impact; more specifically, the institution seeks to provide a liberal arts education that "takes whatever you're passionate about . . . and helps you understand how it will impact the world around you" (Bowdoin College, 2018). Students must complete courses in distribution areas such as exploring social difference, where faculty encourage students to gain awareness of differences in human societies while also developing a deeper understanding of how various cultural, historical, and political forces shape these societies (Bowdoin College, 2018). For many students, these courses may provide their first challenge not only to recognize how societies differ, but also to understand why they differ. In turn, students have an opportunity to reflect on and develop a deeper appreciation for democratic values (CGE, 2007). The final prong of the civic dimension, civic experience, acknowledges the "gap between 'being informed' and 'acting as a citizen in the wider world'" (CGE, 2007, p. 28). Students may acquire and even develop an appreciation for relevant information through their general education experiences, but how does this translate to their actions as citizens? Further, how do these actions impact culture and society? Students who volunteer or participate in service learning as part of their general education experiences have historically been more likely than their nonparticipating peers to take on leadership roles in society (Astin & Sax, 1998). As an exemplar institution for incorporating civic experience into a general education curriculum, we turn to Washington Monthly's (2018) top-ranked liberal arts institution, Berea College.
Students can choose to complete an active learning experience–service learning as part of their general education requirements. According to Berea (2018), this experience “is an opportunity for students to explore interconnections among various venues for learning.” Service learning is ingrained in the culture of Berea, as evidenced by the institution’s Center for Excellence in Learning through Service. Of course, the institutions cited throughout our discussion of the civic dimension are not the only exemplars for positioning students to become active citizens through general education. Many other institutions also have innovative and adaptable models for consideration. The key is for each institution to fit the four-prong framework to its individual mission and needs in order to reinforce the civic dimension early and often.
Considerations for the Future Design of General Education Curricula

Beyond knowledge and personal growth, higher education institutions prepare students for the workforce and for engagement in a vibrant democracy. Weakening public trust and reduced funding of higher education, particularly in the last decade, have exerted
pressure to prioritize workforce preparation over civic engagement. Some states' governors have weakened independent oversight, slashed funding, and openly campaigned against liberal arts degrees. It is a significant challenge for higher education to effect continuous improvement in both parts of the mission without marginalizing either. Despite criticism from some lawmakers, college faculty and business leaders widely recognize a strong liberal arts curriculum as the answer to changing workplace demands. Millennial graduates face rapidly shifting norms—changing jobs and careers many times, even in the first decade after graduation—that will require them to be lifelong learners with transferable skills emphasizing critical thinking, communication, and innovation. Even regarding undergraduate majors, Forbes magazine's cover story called liberal arts graduates Silicon Valley's "hottest ticket" (Anders, 2015). Graduates with nontechnical degrees tend to be high in creativity, versatility, and relationship-building skills. The new acronym STEAM adds the arts to the STEM fields in recognition of the value added by the formal study of human nature, culture, and creativity. There is a growing consensus that the 21st-century economy, including jobs that have yet to be invented, will require the skills of lifelong learning, transfer of knowledge, critical analysis, and creativity. Executives and hiring managers express high levels of confidence, higher in fact than the public does, that U.S. colleges and universities are preparing students to meet these challenges (Hart Research Associates, 2018). Yet there is room for significant improvement. Employers place the highest value on areas such as oral communication, critical thinking, ethics, teamwork, self-motivation, writing, and real-world problem solving, both for entry-level work and particularly for advancement and leadership. U.S. graduates are not measuring up in these areas.
In complex problem solving, for example, fewer than 15% of employers are "very satisfied" with college students' abilities (Hart Research Associates, 2018). The 2012 report of the National Task Force on Civic Learning and Democratic Engagement is a clarion call to renew the civic engagement portion of higher education's mission. Citizens of a vibrant global democracy need a firm foundation in the liberal arts and sciences in order to develop critical analysis, scientific reasoning, ethics, and intercultural understanding. Fortunately, we have robust methods for improving general education in order to promote the kinds of learning that benefit students, employers, and society as a whole. First among these is the use of curriculum mapping—a matrix that maps courses to specific general education learning goals—to illustrate the contributions those courses make to the overarching learning outcomes required of all baccalaureate graduates. After an institutional mission has been operationalized in terms of the learning outcomes desired for all students, regardless of major, faculty must engage in the collaborative task of determining how the component skills required for their classes contribute to the common goals of general education. Curriculum mapping is not a task to be undertaken privately or defensively. General education is, of necessity, a collaborative enterprise. Jankowski and Marshall (2017) urge curriculum mapping to be "consensus-based, aligned, learner-centered, and communicated" to all
stakeholders (p. 78). In addition to mapping curricular content to broad learning outcomes, institutions benefit from mapping where critical demonstrations of learning will be collected—assessment mapping, if you will—identifying specific assignments (e.g., exams, lab reports, artistic productions, research papers) within courses that will generate artifacts for assessment.

The second critical element is scaffolding. Core curricula must have a central focus, coherence, and a hierarchy of increasing complexity. In progressing from the first to the final year of an undergraduate career, how do students encounter, practice, and master core learning outcomes? Are courses sequenced appropriately to facilitate learning at each stage? Most importantly, have the general education faculty collaborated across disciplines to build consensus and align student experiences? Effective general education is more than simply a menu of choices between the bookends of a first-year seminar and a culminating course.

The third necessity is robust assessment of student learning. The days of assuming that if teaching is taking place, then student learning must be taking place are long past. Assessment in higher education is decades old and has progressed far beyond assumptions based on course grades and graduation rates. Assessment methods must be authentic, reliable, and valid. New options emerging from the leadership of the Association of American Colleges and Universities (AAC&U) (n.d.), the Lumina Foundation (2011), and the National Institute for Learning Outcomes Assessment (NILOA) (2011) have introduced and refined methods for course-based measures that are authentic, high-quality, and cost-effective alternatives to standardized tests.
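The curriculum-mapping and assessment-mapping matrix described here can be made concrete with a short sketch. This is purely illustrative: the course names, outcomes, and artifacts below are invented, and a real map would be built collaboratively by faculty rather than hard-coded.

```python
# Illustrative curriculum map: each course lists the general education
# outcomes it addresses and the artifact collected for assessment
# (the "assessment map"). All names here are hypothetical.

OUTCOMES = [
    "written communication",
    "critical thinking",
    "quantitative reasoning",
    "information literacy",
    "intercultural knowledge",
]

curriculum_map = {
    "ENGL 101": {"outcomes": ["written communication", "information literacy"],
                 "artifact": "research essay"},
    "MATH 110": {"outcomes": ["quantitative reasoning"],
                 "artifact": "final exam"},
    "PHIL 201": {"outcomes": ["critical thinking", "written communication"],
                 "artifact": "argument analysis paper"},
}

def coverage(cmap, outcomes):
    """For each outcome, list the courses that address it."""
    cov = {o: [] for o in outcomes}
    for course, info in cmap.items():
        for o in info["outcomes"]:
            cov[o].append(course)
    return cov

def gaps(cmap, outcomes):
    """Outcomes no course addresses -- gaps for faculty to discuss."""
    return [o for o, courses in coverage(cmap, outcomes).items() if not courses]

print(gaps(curriculum_map, OUTCOMES))  # prints ['intercultural knowledge']
```

Even a toy matrix like this makes the two key questions visible: which outcomes are over- or under-covered, and which assignment will generate the artifact for assessment.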
Following responsible and thorough curriculum design and reliable and valid assessment of student learning, findings about student learning must be analyzed and discussed in order to sharpen curricular focus and shape the agenda for faculty development. Unless curriculum, assessment, and faculty development are seamlessly integrated, student learning will not improve, despite reliable and valid methods of assessment. Further, faculty must communicate and collaborate to address assessment findings in order to avoid negating previous investments in curricular design and assessment methodology.
Areas for Future Research

The escalating cost of college has intensified scrutiny about students' return on tuition investment. Popular books such as Our Underachieving Colleges (Bok, 2006) and Academically Adrift (Arum & Roksa, 2011) suggest a critical mission failure in higher education. To restore public trust, higher education will require changes to business as usual, especially in the arena of academic assessment. Until relatively recently, nationally normed standardized tests were the preferred summative measure of student learning, despite limitations in scope. Standardized tests most commonly assess communication and critical thinking rather than outcomes such as cultural sensitivity, ethics,
or social responsibility. Moreover, because general education test items are not systematically related to course content, the implications for pedagogy are unclear if student performance lags. Enter authentic, course-embedded assignments crafted and collected by course instructors. Widespread application of the AAC&U's 2005 Liberal Education and America's Promise (LEAP) initiative, its Principles of Excellence, and the accompanying VALUE (Valid Assessment of Learning in Undergraduate Education) rubrics has changed the face of assessment in higher education. The LEAP initiative focuses on essential learning outcomes best served by liberal education, including cultural and scientific knowledge; intellectual and practical skills (analysis, communication, teamwork, etc.); personal and social responsibility; and integration. Additionally, NILOA advocates for the use of authentic student work to assess the demonstration of competencies in the degree qualifications profile (Lumina Foundation, 2011) and is building an online library of exemplary, customizable assignments. Course-embedded assessment is increasing in popularity and offers significant advantages: (a) students are motivated to demonstrate their learning because assignments are integral to course content and graded by course instructors; (b) faculty and third-party raters can offer meaningful developmental feedback to students; (c) course-embedding incurs no cost or additional effort from students; and (d) because of the fit with course content, faculty clearly understand how to modify pedagogy to address any gaps in student learning (Jordan-Fleming, 2013). Used in combination with university-wide rubrics and trained faculty raters, course-embedded assignments are a versatile and useful alternative to standardized tests. They deepen faculty investment in the process and promote a closing of the assessment loop.
Nevertheless, numerous significant challenges remain for course-embedded assessment, often without being recognized or studied experimentally (Jordan-Fleming, 2015). Research is sparse on the irregularities that introduce unwanted noise into assessment results, compromising their validity. Maki (2014) observed that "poorly designed assignments lead to poorly executed student work." Ewell (2013) offers the most thorough exploration of the relationships among course design, assignment specifications, and assessment. He advocates using curriculum mapping and assignment templates that mirror rubrics. Exploring the multiple possible sources of undesirable variability in course-embedded assessment is a fertile area for future research. In our eagerness to boost reliability and validity, we must guard against trivializing complex student learning into discrete measurable parts in a way that sacrifices the whole for the sum of its parts. There is also significant variability in how third-party raters are trained for independent scoring. Might we benefit from establishing minimum standards for the rater norming and scoring process to maximize interrater reliability? We cannot accurately determine the impact of general education on culture and society without valid and reliable assessment methods in place. After all, what students learn—or do not learn—in the general education classroom forms the basis of their subsequent
knowledge and actions as citizens of the world. Thus, more research is needed in the areas mentioned above to ensure that our assessment practices become increasingly useful and relevant over time. Several other potential areas for research, based on gaps in our knowledge of how general education impacts culture and society, exist as well. How do institutions without a general education curriculum teach the necessary foundational skills that enable students to become active citizens? For example, Evergreen State College does not use a general education curriculum and yet ranks highly in contributions to the public good (Washington Monthly, 2018). Also, how can we effectively merge general education, major, and other requirements in order to provide students with a more integrated learning experience that better aligns with their personal and professional interests? Further, regarding the impact of general education on culture in particular, how do we accurately measure whether the delivery of content in disciplines such as the visual arts and humanities increases college students' awareness of and appreciation for what their local museums and libraries have to offer? Systematically addressing these and related questions will be vital to the future development of general education curricula that are relevant to culture, society, institutions, and—perhaps most importantly—students.
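The interrater-reliability question raised earlier in this section can be made concrete with a standard chance-corrected agreement statistic. The following is a minimal sketch, not any institution's actual norming procedure, of Cohen's kappa for two raters scoring the same set of artifacts on a four-level rubric; all scores are invented.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: agreement between two raters, corrected for chance.
    Assumes expected agreement < 1 (i.e., some disagreement is possible)."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum(counts_a[c] * counts_b.get(c, 0) for c in counts_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Invented rubric scores (1 = beginning ... 4 = advanced) from two raters
a = [3, 4, 2, 3, 1, 4, 3, 2, 3, 4]
b = [3, 4, 2, 2, 1, 4, 3, 2, 4, 4]
print(round(cohens_kappa(a, b), 3))  # → 0.726
```

A norming session that reports kappa (or a similar chance-corrected statistic) alongside raw percent agreement would be one candidate minimum standard for the rater norming and scoring process.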
Conclusion

Research by the Lumina Foundation (2011), NILOA (2011), and others has demonstrated convincingly that general education has a singular role in promoting personal fulfillment, civic engagement, and career success. Thus, we can confidently state that general education positively impacts culture and society on some level. However, it is also evident that room for improvement exists. We know how to design, deliver, and assess high-quality general education curricula. Success in doing so will require implementing the steps outlined in this chapter, as well as embracing the NILOA Transparency Framework (2011) for engaging internal and external stakeholders in open, honest, and collaborative processes to improve student learning. Such processes will not subordinate faculty autonomy and disciplinary expertise to the demands of employers and legislators, but rather foster engagement and communication to maximize student success in civic leadership and career productivity. As we continue to assess student learning outcomes across general education curricula broadly, and the civic dimension in particular, we will likely gain more clarity on how these outcomes translate to impact on 21st-century culture and society.
the larger impact: culture and society
References

Anders, G. (2015, August). The new golden ticket. Forbes. Retrieved from https://www.forbes.com/sites/georgeanders/2015/07/29/liberal-arts-degree-tech/#7aa70803745d
Arum, R., & Roksa, J. (2011). Academically adrift: Limited learning on college campuses. Chicago, IL: University of Chicago Press.
Association of American Colleges and Universities. (n.d.). VALUE: Valid Assessment of Learning in Undergraduate Education. Retrieved from http://www.aacu.org/value/
Astin, A. W., & Sax, L. J. (1998). How undergraduates are affected by service participation. Journal of College Student Development, 39(3), 251–263.
Berea College. (2018). Active learning experience. Retrieved from http://catalog.berea.edu/en/2013-2014/Catalog/Academics/The-Academic-Program/General-Education-Program/Active-Learning-Experience
Bok, D. C. (2006). Our underachieving colleges: A candid look at how much students learn and why they should be learning more. Princeton, NJ: Princeton University Press.
Bowdoin College. (2018). The Bowdoin curriculum. Retrieved from https://www.bowdoin.edu/academics/the-bowdoin-curriculum/index.html
The University of California Commission on General Education. (2007). General education in the 21st century: A report of the University of California Commission on General Education in the 21st Century. Retrieved from University of California, Berkeley Center for Studies in Higher Education website: https://cshe.berkeley.edu/publications/general-education21st-century-report-university-california-commission-general
Ewell, P. T. (2013). The Lumina Degree Qualifications Profile (DQP): Implications for assessment (NILOA Occasional Paper #16). Urbana, IL: University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment.
Goshen College. (2018). The Goshen core. Retrieved from https://www.goshen.edu/core/
Hart Research Associates. (2018). Fulfilling the American dream: Liberal education and the future of work. Washington, DC: Association of American Colleges and Universities.
Harvard Graduate School of Education. (2018). Project Zero: Civic engagement. Retrieved from http://www.pz.harvard.edu/topics/civic-engagement
Harvard University. (2018). Program in general education. Retrieved from https://generaleducation.fas.harvard.edu/categories
Jankowski, N. A., & Marshall, D. W. (2017). Degrees that matter: Moving higher education to a learning systems paradigm. Sterling, VA: Stylus Press.
Jordan-Fleming, M. K. (2013). Characteristics of courses and assignments that develop students' interdisciplinary problem-solving skills. Presentation at the annual conference of the Association for the Assessment of Learning in Higher Education, June 3–5, 2013, Lexington, KY.
Jordan-Fleming, M. K. (2015). Unexplored variables in course-embedded assignments: Is there an elephant in the room? Assessment Update, 27(4), 7–12.
Lumina Foundation. (2011). Degree qualifications profile. Indianapolis, IN: Lumina Foundation.
Maki, P. L. (2014). Using AAC&U's VALUE rubrics to assess authentic student work across Massachusetts' public higher education institutions: An alternative to standardized tests. Presentation at the annual meeting of the Higher Learning Commission, June 13, 2014, Chicago, IL.
National Institute for Learning Outcomes Assessment. (2011). Transparency framework. Urbana, IL: University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment (NILOA).
National Task Force on Civic Learning and Democratic Engagement. (2012). A crucible moment: College learning and democracy's future. Washington, DC: Association of American Colleges and Universities.
Washington Monthly. (2018). 2018 college rankings: What can colleges do for you? Retrieved from http://wmf.washingtonmonthly.com/college_guide/2018/WM_2018_Embargoed_Rankings.pdf
Chapter 7
Case Studies in General Education: Engaging Through Faculty Learning Communities

Su Swarat and Alison M. Wrynn
General or liberal education has been a feature of American higher education since the middle decades of the 19th century.1 The Morrill Land-Grant Act of 1862, 7 U.S.C. 301 et seq. (1862), created public higher education in the United States and expanded the opportunity for higher education beyond the model of private and faith-based institutions while promoting liberal education among the industrial classes. By the early 20th century, both specialization and distribution had emerged in university curricula more broadly. General Education in a Free Society (Conant, 1945), an analysis of revisions to the general education curriculum at Harvard, determined that both general and specialized education were vital in a free society. Many of the conclusions reached by the Harvard faculty will ring familiar today, including a discussion of a broad framework for general education (i.e., courses in the life and physical sciences, social sciences, and the humanities). Social changes in the 1960s and 1970s led to a reduction in the required proportion of general education courses for many reasons, including increased diversity among college students, many of whom questioned courses they deemed irrelevant or that failed to consider the perspectives of women or underrepresented populations. In 1977, the Carnegie Foundation for the Advancement of Teaching published Missions of the College Curriculum, declaring that general education was a "disaster area" and that students were too focused on career preparation. A number of "reforms" were promoted in the 1990s,
many of which unfortunately were not the result of careful review or assessment. Today the focus of general education reform is on increasing coherence in general education programs, improving connections between the major and general education, and articulating the philosophy of general education to students.
The California Public Higher Education Context

The California Master Plan for Higher Education established the blueprint for higher education in California in 1960. The California State University (CSU)2 is a 23-campus system of higher education that operates alongside the 10-campus University of California system and the 114-campus California Community College system. The Master Plan organized the CSU by combining existing campuses with new ones into one system (Gerth, 2010). General education (GE) emerged in the CSU in May 1961, when the Board of Trustees (BOT) adopted a minimum GE requirement of 45 semester units. In 1967, the BOT faced external pressures to facilitate student transfer by reducing GE requirements. The Academic Senate of the California State Universities and Colleges (ASCSU) provided input, and in early 1968 GE was reduced from 45 to 40 semester units (all lower division). According to Gerth (2010), "The proposed revision is necessarily a compromise, designed to provide opportunity for interdisciplinary courses and to assure considerable institutional autonomy in implementation, while greatly facilitating student transfers to and among the State Colleges" (p. 197). This 40-unit, lower-division GE program was not to last. In early 1977, the ASCSU asked CSU Chancellor Glenn Dumke to establish a GE task force to revise CSU GE policy and programs. The task force submitted a report to the chancellor in early spring 1979. Following systemwide consultation, the revised proposal raised the unit requirement to 48 units, including nine units of upper-division courses. The 1980 centralized GE policy established five broadly defined areas (A–E) that remain unchanged. Over the next 3 decades, CSU campuses added campus-based GE requirements, leading to additional unit requirements beyond 48 on many campuses. Systemwide GE policy (Executive Order [EO] 595, 1992) was not revised again for 16 years.
The Intersegmental General Education Transfer Curriculum (IGETC) transfer path was created in 1991 to improve transfer for California community college (CCC) students to either the University of California or CSU system. In 2008 (EO 1033), CSU GE student learning outcomes were included for the first time, and campuses were directed to assess their GE programs. In the ensuing eight years, the executive order (EO) on GE has been revised three times (2011, 2015, and 2017). The most recent update to the EO on GE in 2017 asked campuses to align more closely to the long-established requirements in policy, in essence returning to the A–E requirements. Echoing student needs expressed 40 years earlier, the 2017 revision was undertaken to provide equity in GE among campuses and between first-time freshmen and transfer students. Additionally, the revision sought to ensure
students had clarity about GE requirements systemwide (students frequently attend more than one CCC before transferring to the CSU) and to help ensure student success as part of the system’s ambitious Graduation Initiative 2025 to increase graduation rates and reduce equity gaps. Despite these revisions, the framework adopted nearly 40 years ago for CSU GE remains virtually unchanged.
Historical Attempts to Assess GE at California State University, Fullerton (CSUF)

The CSU's EO on GE in 2008 required each CSU campus to conduct a rigorous assessment of its GE programs to ensure that students develop a solid foundation of liberal education. The Western Association of Schools and Colleges (WASC) Senior College and University Commission (WSCUC), the regional accreditation agency for all CSU campuses, stated similar requirements among its Criteria for Review. These requirements, together with the faculty's strong commitment to student success, fueled the GE assessment effort at California State University, Fullerton (CSUF). CSUF is one of the largest of the 23 CSU campuses, enrolling approximately 40,000 students. Serving as an intellectual and cultural center for Southern California and a driver for workforce and economic development, CSUF is among the nation's top national universities and one of the "most innovative" institutions, as ranked by U.S. News & World Report. Offering 109 degree programs, CSUF also ranks first in California and second in the nation in awarding bachelor's degrees to Hispanic students (CSUF News Center, 2018). Most degree programs at CSUF require the satisfactory completion of 120 units; prior to the revised EO in 2017, 51 (rather than 48) of these units were GE. As has been the case at many institutions, the assessment journey at CSUF has not been a smooth one. Historically, assessment took place only in isolated "pockets" on campus and was largely viewed as an administrative paperwork exercise by many departments in the 2000s. This weakness was highlighted in 2012 by WSCUC during the institution's reaffirmation review.
Specifically, WSCUC called for “(1) the alignment of learning outcomes at all levels; (2) the further development of annual assessment reports; (3) a mechanism for tracking improvements in student learning, pedagogy, and sharing best practices in assessment; and (4) continued coordination, monitoring and support for institution-wide assessment” (WSCUC, 2012, p. 2). CSUF responded to the WSCUC review by making significant changes within the institution, with noteworthy progress since 2014. An institutional policy on assessment and an accompanying “Assessment and Educational Effectiveness Plan” were adopted; a university-wide six-step assessment process was developed and now guides the institution-wide assessment efforts; a set of university learning goals (ULGs), as well as general education learning goals and outcomes were approved; a baseline budget allocation was provided to revitalize the Office of Assessment and Institutional Effectiveness; and a
central electronic assessment management system was implemented. Specifically, with regard to student learning assessment, all undergraduate programs developed or refined their student learning outcomes (SLOs) and started assessing the SLOs (with varying quality) by the 2014–2015 academic year (AY). The same process was put into place in AY 2015–2016 for all graduate programs. While the 2012 WSCUC review undoubtedly motivated CSUF to make commendable strides in changing the campus perception of assessment, Ewell (2009) cautioned campuses such as CSUF that the adoption of assessment solely for accreditation purposes tends to be detached from teaching and learning practices and thus is unlikely to lead to transformative changes. As such, faculty resistance to assessment did not simply disappear with the aforementioned progress. The obstacles to faculty engagement in assessment, as summarized by Hutchings (2010) and Cain (2014), still existed—namely, the dearth of evidence for the direct impact of assessment on student learning, unfamiliarity with the assessment vocabulary, the absence of scholarly expertise in conducting assessment, the misalignment between assessment requirements and institutional reward systems, and concern for academic freedom. Furthermore, the local workload issue added to the challenge of involving faculty in assessment. Many faculty at CSUF teach four courses per semester, engage in scholarly research, and participate in professional service (of which assessment is only one of many options). Full-time faculty often do not object to the rationale for assessment, but they do object to having simply "another thing" added to their plate. With GE making up nearly 40% of the required units in most BA and BS degree programs in the CSU, it is a foundational part of the curriculum. Thus, GE assessment should follow the same approach and rigor as regular program-level assessment.
However, GE assessment at CSUF, similar to many other institutions (Allen, 2006), faced additional challenges. One reason is that, beyond the aforementioned sources of resistance to assessment, GE faces difficulties such as the lack of faculty ownership or coordination (Bresciani, 2006) and the high proportion of adjunct faculty teaching GE courses who are not expected to take on duties beyond teaching (Allen, 2006). With nearly 40,000 students, CSUF offers over 500 GE courses, with approximately 2,000 course sections each semester. These courses are taught by more than 800 faculty, the majority of whom are part-time. As such, it is particularly challenging to motivate diverse faculty to engage in GE assessment and to create a structure that effectively coordinates this effort across the disciplines. This challenge was acknowledged by a separate GE program review in 2013–2014, which yet again urged CSUF to identify an institutional solution to assess student learning in GE. CSUF’s search for a sustainable GE assessment model began with the establishment of university-wide GE learning goals and outcomes in spring 2015. With the GE learning goals in place, the university led a curriculum mapping exercise that identified the GE learning goals addressed by each GE course. In fall 2016, the Academic Senate GE Committee piloted a GE assessment plan focused on CSUF GE learning goal 1: “Apply
their understanding of fundamental concepts, methods, and theories in natural sciences and mathematics, arts and humanities, and social sciences.” The committee chose four GE courses from different disciplines. The instructors were asked to identify one course assignment that demonstrated student learning for this learning goal and to submit the aggregated results from their courses at the end of the year. The committee also administered a one-question micro-survey to students enrolled in these four courses as indirect assessment. While students self-reported positive learning gains in these courses, only one course submitted its direct assessment data. The instructors reported that they were confused about the purpose of data collection, as well as frustrated with the lack of communication and support, and thus chose not to participate. The failure of the pilot GE assessment plan is not surprising. Pat Hutchings (2010), in her insightful essay “Opening Doors to Faculty Involvement in Assessment,” laid out six recommendations for faculty engagement. CSUF’s pilot GE assessment plan failed precisely because it violated these recommendations. For example, the data collection mechanism put in place did not “build assessment around the regular, ongoing work of teaching and learning” (Hutchings, 2010, p. 13). There was no faculty development component built in; nor were the course instructors meaningfully engaged in the process. The process did not “create campus spaces and occasions for constructive assessment conversation and action” (Hutchings, 2010, p. 15) and thus did not align the effort with the collaborative inquiry with which faculty are familiar and comfortable (Hersh & Keeling, 2013).
The Faculty Learning Community Model

With the lessons learned, CSUF turned to the literature to seek a different model of GE assessment that ideally situates assessment in faculty's regular teaching practices, involves faculty every step of the way to ensure they have an active voice in how to assess student learning, and thus fosters sustainable improvements in how assessment is conducted in the GE program and on campus in general. The search led to the concept of the faculty learning community (FLC). Defined by Cox (2004) as "a cross-disciplinary faculty and staff group of six to fifteen members (eight to twelve members is the recommended size) who engage in an active, collaborative, yearlong program with a curriculum about enhancing teaching and learning and with frequent seminars and activities that provide learning, development, the scholarship of teaching, and community building," FLCs have been adapted in various institutions and have proven to be an effective way to motivate faculty professional development (p. 8). For example, Sirum, Madigan, and Klinonsky (2009) described the use of an FLC that brought together life sciences faculty and facilitated curriculum reform and pedagogical improvements for the participating faculty members. Engin and Atkinson (2015) reported the effectiveness of an FLC in helping faculty adopt new teaching and learning technology, particularly how the collaborative effort supported the development of pedagogy that aligned with the technology. More closely related to
assessment, Schlitz and colleagues (2009) utilized the FLC model to engage faculty in the adoption of a Web-based rubric for assessment, and by doing so developed a culture of assessment among the participants.
The New GE Assessment Model

A faculty learning community (FLC) is at the center of the new GE assessment model at CSUF. A group of faculty from multiple disciplines who teach GE courses that share a common learning goal forms the basis of the FLC. The Office of Assessment and Institutional Effectiveness (OAIE) is tasked with the coordination of the FLC. At the beginning of the fall semester, this office works with the Academic Senate GE Committee to choose one learning goal to be the focus of GE assessment for the year. Using the curriculum mapping results mentioned earlier, the office then works with the colleges to identify appropriate courses that will be involved in GE assessment. The faculty who are nominated by the colleges as the instructors and coordinators for these courses form the GE FLC. The FLC goes through a series of working meetings (which also serve as professional development activities) in the fall semester to develop comparable course-embedded assignments that align with the GE learning goal of concern, create a common rubric, and complete rubric calibration. In the spring semester, these course coordinators train their fellow instructors, who teach the other sections of the course, on the use of the assignment and on the calibration of the rubric. Student performance data are collected in late spring under the coordination of the OAIE. Data interpretation and improvement planning (i.e., "closing the loop") take place in the summer. The FLC members are expected to disseminate the assessment findings to their colleagues to promote campus awareness and to encourage faculty participation in future rounds of assessment. A sample timeline of the FLC is illustrated in Figure 7.1.
Figure 7.1. GE FLC sample timeline: course and faculty selection (early fall); assignment review and revision (October); rubric development (November); rubric calibration (December); course-level instructor training (January); data collection via faculty-scored assignments and student surveys (spring); data analysis and closing the loop (summer).
As designed, the new GE assessment model establishes a clear connection between assessment and the "regular, ongoing work of teaching and learning" (Hutchings, 2010, p. 13). Since the members of the GE FLC are instructors who teach the GE courses themselves, the assessment tasks and results are much more relevant, and thus motivating. It is reasonable to assume that these faculty are more likely to identify course-specific improvement actions upon reviewing the results. Similar to the story shared by Hutchings (2010), the GE FLC brings together faculty across disciplines to discuss student learning of important skills, which helps surface disciplinary differences but eventually leads to a shared vision of what an important skill (e.g., critical thinking) means for the GE program as a whole. These dialogues also provide professional development opportunities for the faculty to engage with the scholarship of teaching and learning and help highlight the linkage between assessment and what faculty do on a daily basis—classroom instruction. Of note, by identifying faculty involvement in the GE FLC through GE courses (not faculty academic rank), this model seamlessly integrates full-time tenured/tenure-track faculty and part-time adjunct faculty in university-wide assessment efforts, a rare opportunity for the latter. Two rounds of the GE FLC have been completed at CSUF.
In 2016–2017, 15 faculty coordinators across disciplines worked together as an FLC to assess the GE learning goal on critical thinking—"seek and acquire relevant information and apply analytical, qualitative, and quantitative reasoning to previously learned concepts, new situations, complex challenges, and everyday problems." These faculty included nine tenured/tenure-track faculty, two full-time lecturers, and four part-time faculty, who collectively unpacked the meaning of critical thinking for the GE program by participating in multiple rounds of dialogue, revising course-embedded assignments in their individual courses, developing a shared scoring rubric, and applying it to assess their students' performance as related to critical thinking. The FLC also collectively developed student survey questions to gauge students' self-perception of critical thinking skills as a source of indirect assessment. Through the FLC, assessment results were collected from 2,251 students in upper-division GE courses (i.e., the "exit point" of the GE program), and positive student skill development was observed. Specifically, the percentage of students performing at the level of "proficient" or "advanced" on the rubric criteria (i.e., the highest two of four levels) varied between 70.2% and 78.1%, exceeding the 70% criterion for success set by the FLC faculty. Additionally, over 90% of students self-reported gaining competency in critical thinking skills through the GE curriculum.
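The headline statistic here, the share of students at the top two rubric levels compared against a preset criterion for success, amounts to a simple aggregation. The sketch below uses invented scores, not CSUF's actual data or tooling:

```python
# Invented rubric scores (1-4) on one criterion; 3 = "proficient", 4 = "advanced"
scores = [4, 3, 2, 3, 4, 1, 3, 2, 4, 3]

def pct_proficient_or_advanced(scores, cutoff=3):
    """Percentage of students scoring at or above the 'proficient' level."""
    return 100 * sum(s >= cutoff for s in scores) / len(scores)

pct = pct_proficient_or_advanced(scores)
print(f"{pct:.1f}% proficient/advanced; meets 70% criterion: {pct >= 70}")
# → 70.0% proficient/advanced; meets 70% criterion: True
```

Stating the criterion for success before data collection, as the FLC faculty did, is what turns a descriptive percentage like this into an evaluative judgment.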
Similarly, in 2017–2018, seven faculty coordinators from six colleges formed the FLC that assessed the GE teamwork learning goal—“develop skills to collaborate effectively and ethically as leaders and team members.” Through a similar collaborative process, the faculty collected assessment data on 809 students using a shared rubric, measuring how well the students mastered teamwork skills such as setting clear team expectations, providing constructive feedback to each other, and making a meaningful contribution as team members. The results were more positive than the findings on
critical thinking—the percentage of students performing at the level of "proficient" or "advanced" on the teamwork rubric criteria varied between 83.4% and 93.6%, well exceeding faculty expectations. The indirect assessment results collected through student self-reports echoed these findings as well. The different sizes of the two FLCs reflect the fact that fewer courses were identified as aligned with the teamwork GE learning goal than with the goal on critical thinking, but both FLCs included a representative and diverse sample of GE courses at CSUF. Collecting data is not the end of assessment. Instead, the GE FLC aims to drive improvement through reflection on data. To do so, the FLC faculty were provided both the aggregated university assessment findings and the results unique to their individual courses. The last meeting of the FLC, which takes place in the summer, is specifically focused on data review and reflection. The critical thinking FLC, for example, identified students' ability to articulate the validity and relevance of arguments and conclusions as a common area for improvement, and suggested applying the same rubric in lower-division GE courses to track student growth attributable to the GE curriculum. The teamwork FLC found that the ability to provide constructive feedback seemed to be particularly challenging for certain student groups, such as female students and underrepresented minority students, and thus advocated for a more explicit effort to "meet students where they are" across different student populations. The long-term effect of these improvement ideas will take time to evaluate, but it is exciting to see concrete ideas for improving teaching and learning being generated and implemented by diverse faculty on campus.
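Subgroup findings like the teamwork FLC's come from disaggregating the same rubric data by student population. A minimal sketch, with all records and group labels invented:

```python
from collections import defaultdict

# Invented records: one rubric score (1-4) per student, tagged with a group label
records = [
    {"group": "A", "score": 4}, {"group": "A", "score": 2},
    {"group": "B", "score": 3}, {"group": "B", "score": 4},
    {"group": "A", "score": 3}, {"group": "B", "score": 1},
]

def pct_top_two_by_group(records):
    """Percent of students at the top two rubric levels (3 or 4), per group."""
    totals, hits = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r["group"]] += 1
        hits[r["group"]] += r["score"] >= 3
    return {g: round(100 * hits[g] / totals[g], 1) for g in totals}

print(pct_top_two_by_group(records))  # → {'A': 66.7, 'B': 66.7}
```

Comparing these per-group percentages against the overall figure is what surfaces the kind of equity gap the teamwork FLC acted on.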
Conclusion

The GE FLC model has shown itself to be an effective and sustainable model for CSUF. Participating faculty have repeatedly commented on the benefits of collegial collaboration, cross-discipline conversations, and the ability to learn from other faculty. One mathematics faculty member wrote in the post-FLC survey:

    The process was quite uncomfortable and intimidating at the beginning. As a math instructor I was terrified to discover I had to help develop a rubric that also had to be relevant for dance, political science, etc., courses!!! After hashing it out in the first two meetings, I really began to appreciate the product that was developing. I also enjoyed the exposure to other departments and their way of thinking.
Another faculty member indicated that what made the FLC work well was "collaboration and hearing multiple voices; collegiality and collective wisdom. I sensed that we were all invested in the process." Being invested is another critical aspect of the FLC that
made it successful. That is, the activities made explicit the connections between assessment and the faculty's everyday instructional practices in their own courses. Through these experiences, the participating faculty were able to see how assessment is tied to the core of their professional practice; they began to truly understand and appreciate the value of assessment, became willing to engage with it, and gradually embraced it as a vibrant component of CSUF culture.

The GE FLC model allowed all participating faculty to engage on an equal footing, regardless of their tenure status, years of experience, or full- or part-time status. The adjunct faculty became better informed about university and department initiatives from their tenured/tenure-track colleagues, and in turn, since they often teach more lower-division courses, they shared a more intimate understanding of student learning and development in the early stages of the curriculum. The "community" format gave the adjunct faculty a place to truly collaborate with tenured/tenure-track faculty on a common project, a rare opportunity for many adjunct instructors.

The GE FLC model also appeared to build a smooth bridge at CSUF that united faculty and administration in an important endeavor. The FLC activities were organized and facilitated by the Office of Assessment and Institutional Effectiveness (OAIE), and the same office coordinated with the Academic Senate committee and the colleges to identify appropriate FLC participants. The OAIE also collected and analyzed assessment data to facilitate "closing the loop." With GE comprising many courses from a wide variety of disciplines (i.e., no one college or department owns GE), having a central office coordinate the entire effort was essential to its success.
Equally important, the FLC model brought faculty and administrators together in an organic manner, diffused the common perception that assessment is an administrative task, and focused the collective energy on understanding and improving student learning.

The success of the GE FLC also relies on the support of institutional leadership. The senior administration at CSUF has repeatedly expressed its commitment to assessment, making it an explicit objective in the 2013–2018 university strategic plan. The provost and deans provided various forms of operational and financial support for this effort. For example, the CSU labor environment required CSUF to compensate the FLC participants; the Provost's Office committed $10,000–$15,000 per year to the FLC, which covered stipends for the faculty participants and expenses for the working meetings.

In summary, the GE FLC model at CSUF has proven to be an effective way to assess student learning in a large, diverse GE program and, perhaps more important, a valuable venue for fostering faculty collaboration and professional development in the scholarship of teaching, learning, and assessment. This model has been presented at various conferences, including the Association of American Colleges and Universities annual meeting and the WSCUC Academic Resource Conference, and has received positive peer feedback. The CSU system also acknowledged the promise of this approach by encouraging all campuses to adopt similar models in the 2017 EO on GE, calling for robust
GE assessment plans that prioritize the collection and analysis of evidence of student learning and the use of assessment results to improve the overall GE program.

The GE FLC model at CSUF is also an assessment project in and of itself. Frequent informal and formal faculty feedback was collected in every iteration of the FLC and applied to inform the design of the subsequent round. For 2018–2019, the new GE FLC will explore the diversity GE learning goal, a goal that highlights a core value of the institution. Encouraged by past success, CSUF very much looks forward to the insights on student learning and the benefits for faculty development that the new FLC will bring.
Notes

1. An earlier version of this chapter appeared in the July 2018 NILOA Newsletter.
2. Initially known as the California State Colleges (CSC), then the California State Universities and Colleges (CSUC), the system is now simply the California State University (CSU). For clarity, we refer to it as the CSU throughout, except in direct quotes where it may appear as CSC or CSUC.
References

Allen, M. J. (2006). Assessing general education programs. San Francisco, CA: Jossey-Bass.
Bresciani, M. J. (2006). Outcomes-based academic and co-curricular program review. Sterling, VA: Stylus.
Cain, T. R. (2014). Assessment and academic freedom: In concert, not conflict (NILOA Occasional Paper #22). Urbana, IL: National Institute for Learning Outcomes Assessment.
Carnegie Foundation for the Advancement of Teaching. (1977). Missions of the college curriculum: A contemporary review with suggestions. San Francisco, CA: Jossey-Bass.
Conant, J. B. (1945). General education in a free society: Report of the Harvard committee. Cambridge, MA: Harvard University Press.
Cox, M. D. (2004). Introduction to faculty learning communities. New Directions for Teaching and Learning, 97, 5–23.
CSUF News Center. (n.d.). CSUF rankings. Retrieved from https://news.fullerton.edu/media/rankings.aspx
Engin, M., & Atkinson, F. (2015). Faculty learning communities: A model for supporting curriculum changes in higher education. International Journal of Teaching and Learning in Higher Education, 27(2), 164–174.
Ewell, P. (2009). Assessment, accountability, and improvement: Revisiting the tension (NILOA Occasional Paper #1). Urbana, IL: National Institute for Learning Outcomes Assessment.
Gerth, D. R. (2010). The people's university: A history of the California State University. Berkeley, CA: Regents of the University of California.
Hersh, R., & Keeling, R. (2013). Changing institutional culture to promote assessment of higher learning (NILOA Occasional Paper #17). Urbana, IL: National Institute for Learning Outcomes Assessment.
Hutchings, P. (2010). Opening doors to faculty involvement in assessment (NILOA Occasional Paper #4). Urbana, IL: National Institute for Learning Outcomes Assessment.
The Morrill Land-Grant Act of 1862. (1862). 7 U.S.C. 301 et seq. Retrieved from http://www.encyclopedia.com/history/encyclopedias-almanacs-transcripts-and-maps/morrill-landgrant-act-1862
Schlitz, S. A., O'Connor, M., Pang, Y., Stryker, D., Markell, S., Krupp, E., Byers, C., Jones, S. D., & Redfern, A. K. (2009). Developing a culture of assessment through a faculty learning community: A case study. International Journal of Teaching and Learning in Higher Education, 21(1), 133–147.
Sirum, K., Madigan, D., & Klionsky, D. J. (2009). Enabling a culture of change: A life science faculty learning community promotes scientific teaching. Journal of College Science Teaching, 38(3), 38–44.
Western Association of Schools and Colleges (WASC) Senior College and University Commission (WSCUC). (2012, July 3). WSCUC 2012 Commission action letter [Letter to CSUF President Dr. Mildred Garcia]. Retrieved from http://www.fullerton.edu/accreditation/university/CAL_120703_CSUFul_EER.pdf
Chapter 8

Case Studies in General Education: Design Thinking for Faculty-Driven Assessment

Tim Howard and Kimberly McElveen
Columbus State University (CSU) is one of 28 public colleges and universities in the University System of Georgia (USG). Core characteristics of the USG's state universities include "responsiveness within a scope of influence defined by the needs of an area of the state" and "a high-quality general education program supporting a variety of disciplinary, interdisciplinary, and professional academic programming at the baccalaureate level, with selected master's and educational specialist degrees, and selected associate degree programs based on area need and/or inter-institutional collaborations" (USG, n.d.). CSU's Basic Carnegie Classification is "Master's Colleges & Universities: Larger Programs." CSU offers 46 undergraduate degrees and 42 graduate degrees, including one doctoral program, with fall 2017 enrollments consisting of 6,798 undergraduate and 1,654 graduate students. The institution is situated in a mid-sized Metropolitan Statistical Area and is surrounded mostly by rural counties. Columbus lies in Muscogee County, which, along with four out of five neighboring counties, has a poverty rate between 19% and 24% (U.S. Census Bureau, 2018). More than half of the undergraduate students (3,445) at CSU qualified for Pell Grants in fall 2017. Demographic characteristics of the CSU undergraduate student population include the following: 50% White, 38% Black or African American, 6%
Hispanic or Latino, 2% Asian, 2% two or more races, and 2% international; 59% female and 41% male; 39% under age 21, 35% age 21–25, 10% age 26–30, and 16% over age 30 (CSU, 2018).

As a member institution of the USG, CSU follows a system mandate that frames the general education core curriculum as 42 semester hours of coursework divided into the following five areas:

• Area A: Communication and Quantitative Outcomes—at least six semester hours of English writing and at least three semester hours of quantitative reasoning; CSU's consists of nine hours.
• Area B: Institutional Options—at least three semester hours of courses that address general learning outcomes of the institution's choosing; CSU's consists of four to five hours, depending on the major.
• Area C: Humanities, Fine Arts, and Ethics—at least six semester hours; CSU's consists of six hours.
• Area D: Natural Science, Mathematics, and Technology—at least seven semester hours, of which at least four hours must be in a lab science course; CSU's consists of 10–11 hours, depending on the major.
• Area E: Social Sciences—at least six semester hours; CSU's consists of 12 hours.

Individual institutions devise their own learning outcomes, but must obtain USG approval for the outcomes and courses that will count in each area.
The Importance of General Education Assessment

The assessment of CSU's general education program is important for improving student learning, eliminating inequities, and maintaining accountability. The institution complies with the institutional effectiveness requirements of its regional accreditor, the Southern Association of Colleges and Schools (SACS) Commission on Colleges, and with the University System of Georgia's Comprehensive Program Review policy, which mandates a 5-year program review cycle for assessing quality and productivity. Like most public universities, CSU finds its direction and future state funding shaped by a variety of other factors—public outcry over student loan debt, low graduation rates, and skepticism about the value of a liberal arts education. Allen (2006) indicates that there is a concerted effort to view general education as a program instead of viewing each course as a "silo" of learning that operates independently. According to Kuh, Kinzie, Schuh, Whitt, & Associates (2005), this paradigm shift leads to the question "Are students learning what we want them to learn?" It behooves each state institution to build a strong empirical case that it is effective in preparing students to achieve aspirational
goals. This works best when faculty and administration work together on a smooth process of collecting and analyzing meaningful data to make informed improvements to student learning. According to the University System of Georgia's College 2025 Report (2018), institutions must adopt an approach that enables "curricular innovation and reform to ensure . . . real-world expectations are integrated across all programs of study" (p. 10). The University System of Georgia recently took a significant step in this direction by requiring its institutions to treat general education as an academic program rather than a set of disciplinary exposure requirements to be checked off by students. General education is reviewed on a 5-year cycle through the Comprehensive Program Review (CPR) process. This entails an examination of effectiveness and productivity, taking into account feedback from students and employers, evaluating the suitability of program outcomes and curriculum design, and formulating a long-term plan for improvement. General education is an interdisciplinary enterprise that involves many faculty, and thus the conversations surrounding this topic must be inclusive yet efficient.
What Not to Do: Lessons Learned the Hard Way

In 2009–2010, CSU began the redesign of its general education (GE) curriculum to align with new requirements throughout the USG. An important early step was to articulate learning outcomes for Areas A–E in the GE core. The USG required institutions to develop at least one learning outcome for each of these areas. Institutions were also required to articulate "overlay outcomes" to address critical thinking and global perspectives. Considering the inputs to this design, it is not difficult to see how CSU ended up with a "distribution model" for GE—take a little bit of this, a little bit of that, and get exposed to these different areas. To address this task, CSU formed a 13-person task force with faculty from disciplines including education, computer science, political science, international education, biology, economics, art, English, music, and history. The task force broadly solicited suggestions from academic departments regarding the phrasing of student learning outcomes. In the end, the process yielded a total of 20 GE learning outcomes. It was a very inclusive process that gave everyone who was interested a chance to provide input. Virtually everyone could see a course in their subject area reflected in the GE core and could find an outcome that included the content covered in their courses. Of note, CSU's 2006 SACS reaccreditation report highlighted a concern that the site visit team did not find evidence that the institution was closing the assessment loop. Thus, it is evident that the institution had not established a culture of evidence-based teaching and learning at that time.
The next task was to devise a method for assessing the effectiveness of the GE program. The GE assessment method that had been in place involved commercially available standardized tests administered to graduating seniors. Completion of a designated test was required to graduate, but the score had no consequence for the student. The GE Committee recommended a new approach based on course-embedded assessment. This would shorten the time between the achievement of a learning outcome in a particular course and the assessment of the outcome. To help ensure that student work used in the assessment process reflected their best efforts, the committee also recommended that the assessment artifacts (i.e., student work) be actual assignments completed for a grade in the course. Further, the committee added two more expectations for the assessment process:

• The artifacts to be used in the assessment process must reflect the content that faculty truly value in a course.
• Faculty must be involved in the evaluation of artifacts for the purposes of GE assessment.

With these constraints, the committee asked each department that offered courses in the core to propose a plan for collecting student artifacts and assessing applicable GE learning outcomes on a regular basis. The resulting assessment model, compiled in fall 2011 (the 2011 model), called for 24 different methods of assessing GE learning outcomes in individual courses or in clusters of courses taught in the same department. The assessment methods devised by the departments involved multiple sampling methods with different frequencies of sampling. Some departments chose to select samples for review every semester, while others chose to review student work once per year, and still others aimed for multi-year cycles. A point of contention developed around the question of whose work should be included in the assessment samples.
Some faculty viewed the assessment process as a method of "proving" that students who successfully complete a course in the GE core can also pass an assessment based on the particular GE learning outcome for that area. Because this was the prevailing perspective on the committee, sampling methods were developed that would draw artifacts only from students who ended up receiving a grade of C or better in the course. Some rightly expressed serious concerns about this approach, since it lends itself to blaming "unprepared students" when learning goals are not achieved, fails to address educational inequities, and is unlikely to lead to more inclusive instructional approaches. It may also systematically exclude populations of students whom the university should be serving. The assessment process should lead to a critical evaluation of the institution's role in student learning and, ultimately, to modifications in curriculum and pedagogy that alleviate inequities and improve learning for all students. Contention over the role of assessment and sampling methods helped stall the implementation.
When the time came for some departments to review student artifacts, it did not occur. Not all faculty teaching the pertinent courses had been clear about the assessment plan, and some did not understand that they had to have a plan for collecting student artifacts. Given that departments were conducting the reviews in-house, and that different departments had devised different methods and cycles of review, it proved unwieldy to monitor for compliance. Other departments attempted to carry out their assessment plans, only to recognize difficulties inherent in the phrasing of the learning outcomes. Consider Area E: Social Sciences, which is subdivided into U.S. history, American government, behavioral sciences (psychology, sociology, economics, moral philosophy), and world cultures (options include anthropology, language and culture, geography, world history, and more). One learning outcome designed to span these areas is the following:

    Articulate how factors such as culture, society, environment, human behavior, decision-making, and diversity shape the role of the individual within society, human relations, or human experience across time, space, or cultures.
The outcome spans so many domains that it proved difficult to design assessment instruments that yielded anything informative to curriculum design and pedagogical practices. Another type of assessment challenge arose when the phrasing of the learning outcome required a form of assessment that was not practical in a course associated with that outcome. For example, in Area C: humanities and fine arts, the learning outcome stated that students will “[g]enerate knowledgeable interpretations of texts, works of art, or music.” Courses that were taught in large lecture sections and assessed with multiple-choice tests could not easily produce evidence that students could generate knowledgeable interpretations. In retrospect, CSU learned several lessons through these experiences. While it is important for faculty to lead the design of learning outcomes and assessment methods, it helps to have some guide rails for the design. A faculty that is, by and large, inexperienced with the assessment of student learning outcomes may need help formulating outcomes. All faculty who teach courses associated with a learning outcome must be aware of the outcome and guided in the development of assessment practices that align with it. Care should also be taken to design an assessment methodology that can be properly administered. Someone needs to be capable of easily monitoring compliance. Budgets and personnel need to be suitable to execute the methodology and ensure that assessment results lead to changes in curriculum design and/or instructional practices.
Finding a New Way Forward

Several years into the 2011 model, it became clear that the assessment plan was not feasible and needed revision. If the university was ever going to adopt a manageable assessment process that improves student learning and addresses systemic inequities, it would need to create and implement a more consistent model. CSU determined that each course that addresses the same learning outcome needs to have the same method of assessment, regardless of the department in which it is taught. Comparable sampling methods and frequencies need to be utilized across the board, all while adhering to some previously established principles:

• The GE student learning outcome should be included on the syllabus of each course section in the GE core.
• Faculty members should use an assignment that requires a grade; however, the GE assessment does not use the grade on the assignment.
• The assessment should reflect elements that faculty value in the course and align with the student learning outcome.
• Faculty should be involved in the appraisal of student work in the assessment process.
• The assessment process should provide feedback on areas for improvement in student learning.

In a departure from earlier assessment plans, CSU decided to collect artifacts from a designated assignment from every student in all sections of every course as the sampling method. This would help establish a regular and consistent practice without having to add a layer of processing to determine who needs to submit what and when. We will describe the collection logistics in more detail in a later section. This sampling method is also designed to represent all student demographic groups (i.e., age, gender, race/ethnicity, and socioeconomic status), all modes of delivery (i.e., face-to-face, online, hybrid), all categories of faculty (i.e., full-time, part-time, tenured, tenure-track, and non-tenure-track), and all courses in the GE core.
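One way to picture how a review sample drawn from that full collection can still represent every group is stratified sampling. The sketch below is purely illustrative (the grouping field, group labels, counts, and 10% fraction are invented assumptions, not CSU's procedure): it samples the same fraction from each stratum so that no mode of delivery, demographic group, or course is excluded.

```python
# Hypothetical sketch of stratified sampling from the artifact
# collection. Field names, group labels, and the 10% fraction are
# illustrative assumptions, not CSU's actual method or data.
import random

random.seed(42)  # fixed seed for a reproducible illustration

# One record per collected artifact (every student, every section).
artifacts = (
    [{"id": i, "mode": "face-to-face"} for i in range(300)]
    + [{"id": 300 + i, "mode": "online"} for i in range(150)]
    + [{"id": 450 + i, "mode": "hybrid"} for i in range(50)]
)

def stratified_sample(records, key, fraction):
    """Sample the same fraction from each stratum so no group is excluded."""
    strata = {}
    for rec in records:
        strata.setdefault(rec[key], []).append(rec)
    sample = []
    for group in strata.values():
        k = max(1, round(len(group) * fraction))  # at least one per group
        sample.extend(random.sample(group, k))
    return sample

sample = stratified_sample(artifacts, "mode", 0.10)
print(len(sample))  # 10% of each stratum
```

The `max(1, ...)` guard mirrors the stated design goal: even a small population (here, the invented hybrid sections) always contributes at least one artifact to the review.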
This decision occurred around the time that CSU affiliated with the Association of American Colleges and Universities (AAC&U), which provided many resources—conferences, a summer institute on general education and assessment, and widely tested rubrics. A critical step for CSU was to establish a full-time position for institutional assessment. This enabled a sense of cohesion and consistency to emerge under the leadership of someone knowledgeable of best practices in assessment, as well as requirements of the University System of Georgia and its regional accreditor, the Southern Association of Colleges and Schools Commission on Colleges. The next step was to tap a cadre of faculty involved in the GE program with an interest in assessment. Several faculty from the GE Committee attended conferences and began networking with other institutions
to learn more about successful assessment practices in higher education. This positioned the institution to recognize which aspects of the 2011 model to maintain, as well as new aspects that might be beneficial. After rethinking current assessment practices, a design team attended a summer institute on general education and assessment to develop a plan for building a new model using the theoretical framework of design thinking. According to Brown (2008), design thinking is a cyclical process with five major elements. The first part of the process, understanding, involves empathizing with users and defining the problem. The next part, exploration, is where participants generate innovative ideas and prototype solutions. The final part, materializing, entails testing and implementing the plan. Because the process is continuous, the plan is evaluated and enhanced in a systematic manner. The design team expanded the "testing" process to gain buy-in from institutional constituents in the campus community. This was a critical step in cultivating an understanding of the assessment plan across the campus community. A valuable lesson learned from institutions that had implemented assessment plans was that CSU needed to create buy-in with its constituents throughout the process. It was clear from the beginning that introducing a mandate to faculty would not go over well, and thus a communication plan was developed as part of the assessment plan. This plan was presented to the university's administration, GE Committee, deans, chairs, and faculty senate. During these campus communications, feedback on the plan was collected and vetted through the design team and GE Committee. The new model would start to take shape in a summer workshop for faculty who would assist with the assessment design.
We grouped faculty by outcome area in the GE core for this day-long workshop and paid them a small stipend for their valuable contributions. Each group drafted a rubric for assessing their learning outcome(s). The groups would then attempt to apply that rubric to the assessment of student artifacts collected that spring and summer. The workshop concluded with each group providing three recommendations for moving forward. The next few sections include discussion on collecting student artifacts, developing rubrics, applying rubrics, conclusions, and ongoing challenges.
Establishing the Practice and Collecting Student Artifacts

Collecting an assignment from every student in every section of every general education course is a monumental undertaking and a work in progress. Issues to address include communicating expectations to faculty, working with each instructor to align an assignment with a targeted learning outcome, and warehousing all of the student work. When faculty have not been in the habit of aligning an assignment with a particular learning outcome, it can be a challenge getting them to start. This process revealed that multiple communication channels are required. We created a website dedicated to GE
assessment, including descriptions of the assessment plan and related resources (e.g., learning outcomes and rubrics). We presented the plan to the faculty senate, department chairs, and deans. We also sent mass e-mails to faculty teaching GE courses. As an additional step, we organized an informational workshop as part of the fall "Welcome Week" faculty development event. We monitored workshop registration to see whether each department had faculty representation. If a department was missing, we contacted the chairs to make sure they were aware of the opportunity and encouraged a departmental representative to attend. Another helpful step was to access CSU's syllabus archive and check individual syllabi for compliance with expected practices. Did they include the required GE learning outcome? Did they appear to require an assignment aligned with the learning outcome? Ensuring that syllabi state the GE learning outcome is an ongoing issue that we are working to overcome with communication and feedback to the campus community.

Beyond communicating the expected practices to faculty lies another, perhaps more formidable, challenge—warehousing the student work from assignments aligned to the GE learning outcomes. Our ideal solution would be to house all student work in the university's online content management system (CMS). If the instructor has aligned an assignment to the outcome and students submit that assignment to the assignment box in the CMS, then (a) a copy of the student's work is retained, without any grading comments or marks, and (b) it needs to be retrieved only if it is selected for inclusion in the sample. This minimizes the amount of time and organization required to handle student artifacts. In cases where the designated assignment involves handwritten work, the faculty member is asked to scan the student papers into the CMS as PDFs before grading them.
Developing the Rubrics to Be Used in Assessment

As CSU faculty have learned, it is not easy to develop a rubric that both encapsulates what everyone has in mind when articulating the learning outcome and remains practical to apply to actual student artifacts. A well-developed rubric serves to elaborate on learning expectations in a way that the learning outcome alone cannot. A functional rubric is not something a group arrives at quickly or easily. This is where CSU appreciates its affiliation with the AAC&U. The AAC&U's Liberal Education and America's Promise (LEAP) initiative began in 2005 to advocate for the value of a liberal arts education (AAC&U, 2005). As part of its advocacy, the LEAP initiative has developed a set of 16 Valid Assessment of Learning in Undergraduate Education (VALUE) rubrics, which have been reviewed and tested extensively by more than 42,000 individuals at more than 4,200 different institutions (AAC&U, 2009). The rubrics address a set of skills commonly included in
GE learning outcomes at state universities. The rubrics are available at no cost and are meant to be adapted to a university's individual needs. The LEAP VALUE rubrics (LVRs) served to jump-start rubric development at CSU by furnishing tightly refined descriptions of different levels of achievement within each dimension of the learning outcome. In several cases, an LVR immediately provided a close match to a learning outcome—written communication, oral communication, and quantitative literacy. In other cases, dimensions desired within an outcome were drawn from several different LVRs. For instance, CSU's humanities learning outcome states that students will "[d]escribe an example of a creative or intellectual endeavor and articulate a connection to the human experience." CSU faculty decided that achievement of this outcome entails the following dimensions:

• Explanation of issues
• Demonstrating knowledge of cultural worldview frameworks
• Utilization of existing knowledge through research
• Effective use and presentation of evidence
• Analysis
The rubric developed by faculty to assess this outcome draws from three different LVRs: critical thinking, intercultural knowledge and competence, and inquiry and analysis.

Even with the benefit of the LVRs, a fair amount of coordination and collaboration is required to craft a useful rubric. Multiple departments often teach courses that address a single learning outcome. For example, at least six different departments teach courses that address the social sciences learning outcome. CSU was unable to have all departments represented at the summer assessment workshop where rubric development began. As a result, some initial drafts did not reflect the fullest interpretation of their respective learning outcomes or the methods of assessment commonly used within courses aligned to the outcome. There were also practical considerations to account for, such as large lecture sections assessed with multiple-choice tests when the rubric called for a term paper. In order to smooth out these differences of interpretation and practice, we held follow-up meetings with department chairs and groups of faculty to review and refine the rubrics. While this process was very time consuming, it created buy-in for the assessment plan. In our assessment workshops, we apply the rubrics to assess actual student artifacts and incorporate an opportunity to gather feedback on the suitability of each rubric in order to refine it. As anticipated, these exchanges led CSU to challenge long-established practices. Perhaps we cannot effectively assess a particular learning goal with a multiple-choice test due to class size. Do we change the class size, the means of assessment, or the learning outcome? We might find that resource limitations require an arrangement that is less than ideal for all departments involved. An evolving sense of appropriate practices is an outcome of this process.
generally speaking
Putting Those Rubrics Into Practice

Once CSU settled on a set of rubrics and collected student artifacts, the real work of assessment began. CSU seeks to hold assessment workshops once or twice a year. The goal is to convene a group of three to four faculty associated with each learning outcome. Most of the faculty will have taught courses that address the outcome, providing subject-matter expertise, but CSU likes to gain an outside perspective by including a faculty member not teaching in that area. It has been CSU's experience that faculty become very engaged in reviewing and evaluating student artifacts through the framework provided by GE learning outcomes and rubrics. After all, the GE learning outcomes reflect goals we have for the learning of all students in all programs of study. Accordingly, faculty are responsible for facilitating learning goals beyond those of their discipline when teaching a GE course.

At the assessment workshop, at least two different faculty assess each student artifact. If these assessments produce significantly different results, a third reviewer may be asked to assess the artifact as well. The chances of low interrater reliability can be reduced by taking time at the beginning for calibration. Calibration entails preselecting student artifacts illustrative of a variety of achievement levels and having the reviewers score the artifacts using the pertinent rubric. During this exercise, time is devoted to comparing ratings and discussing the rationales for them. Occasionally, the outcomes of an assessment workshop include recommendations for revising the rubrics. Each rubric reflects a wide range of performance levels, from just emerging to highly developed. For courses in the GE core, a student is likely to perform closer to the emergent level; it is important to emphasize this with faculty reviewers during calibration.
Faculty are asked not to compensate for the perceived level of the student when assigning a rating (e.g., not to say, "That was really good for a freshman"). Also of note, there may be cases when no evidence can be observed that the student's work reflects a particular domain, even at the emergent level. This might be due to a lack of student performance in that domain, or it might be that the assignment was not well aligned with the domain. In either case, the reviewer should assign a zero for that domain.

As the faculty review student artifacts, the structure of the workshop allows ample time for discussion. Reviewers discuss interpretations, observations, and patterns. Often, they learn about new and innovative course assessments and instructional practices that faculty have developed to address a learning goal. The reviewers, who may come from different departments, have a chance to discuss their own practices with fellow reviewers. It can be a very productive faculty development opportunity.

Approximately 30 minutes before the end of the assessment workshop, we ask each group to formulate a list of three recommendations for moving forward. The recommendations need to include something targeted directly at the improvement of student learning in the outcome area, but they may also include modifications to the rubric or the learning outcome. In our case, revisions of the rubric are fairly common, but
design thinking for faculty-driven assessment
revisions of the learning outcome are much rarer: changing a learning outcome requires the approval of both the University Curriculum Committee and the University System of Georgia Council on General Education, a process that can take up to one year to complete.

After the assessment workshop, the recommendations must be compiled, disseminated, and discussed. Course-related recommendations are sent to the associated departments for further discussion. All recommendations are presented to the General Education Committee, which recommends follow-up actions. The use of student artifacts to improve student learning has transformed the responsibilities of this committee. When a campus community grounds its conversations and actions in evidence of student learning, with faculty leading the assessment process, the campus community itself learns. The committee is engaged in conversations about curriculum, enrollment in core classes, and alignment among courses in each general education core area.
Conclusion

Effective communication of expectations, assessment results, changes in the assessment process, and changes in the general education curriculum requires a great deal of maintenance. It is natural for most faculty to latch on to a given practice—be it an assessment practice or part of the curriculum design—and devote their attention to other concerns. They tend to overlook changes until they perceive a threat. This is why it is important to broadly engage the campus community in discussions of assessment results. What do they tell us about student learning? What actions need to be taken to improve student learning?

General education reform is complex, and progress reshaping the core moves at a glacial pace. While the design thinking process is an effective approach to developing sound solutions, academicians tend to get stuck in the prototyping phase and never make it to the testing phase. Committees spend an exorbitant amount of time reviewing data and discussing courses, curriculum alignment, the student skills and knowledge that translate to a global workforce, and applicable policies. Because the institution cannot predict precise outcomes, committees get stuck on the possibility that they could make the wrong decision and thereby adversely affect departments, faculty lines, budgets, and student learning. Common questions and statements arise, such as: What if students choose this course over that one? Will faculty lose their jobs? How will this change affect departmental budgets? How can we possibly change a course that this professor has taught for 15 years? Will the chair of that department support these changes? Committees often decide that it is not possible to make changes in response to common sentiments such as the need for more data: "The decision needs to be tabled until the
next semester; present the information to another committee of deans, chairs, or the faculty senate; we are moving too fast!" To move into the prototyping and testing phases, higher education can benefit from Collins's (2001) belief that organizations first need to decide who, then what. Collins explained the right-people-on-the-bus concept in his book Good to Great. He states, "The executives who ignited the transformations from good to great did not first figure out where to drive the bus and then get people to take it there. No, they first got the right people on the bus—and the wrong people off the bus—and then figured out where to drive it" (p. 13). Before we can make transformational changes, we need to ensure that we have the "right people" on the General Education Committee; then we can determine where we need to go with general education reform. General education is a continuous journey, not a final destination. We will always find things to improve, even as we perpetually strive to cultivate a shared vision of the general education program.
References

Allen, M. J. (2006). Assessing general education programs. San Francisco, CA: Jossey-Bass.
Association of American Colleges and Universities (AAC&U). (2005). About LEAP. Retrieved from https://www.aacu.org/leap
Association of American Colleges and Universities (AAC&U). (2009). Inquiry and analysis VALUE rubric. Retrieved from https://www.aacu.org/value/rubrics/inquiry-analysis
Brown, T. (2008, June). Design thinking. Harvard Business Review, 84–92.
Collins, J. (2001). Good to great: Why some companies make the leap and others don't. New York, NY: HarperCollins.
Columbus State University. (2018). Academic year 2017 annual report for financial aid data. Retrieved from https://ir.columbusstate.edu/reports/facts17/Financial_Aid_Data.pdf
Kuh, G. D., Kinzie, J., Schuh, J. H., Whitt, E. J., & Associates. (2005). Student success in college: Creating conditions that matter. San Francisco, CA: Jossey-Bass and American Association for Higher Education.
University System of Georgia. (n.d.). Mission statement for universities. Retrieved from https://www.usg.edu/institutions
University System of Georgia. (2018, August 14). College 2025 report: Adaptability, essential skills, lifelong learning, and partnerships. Retrieved from https://www.usg.edu/assets/usg/docs/college_2025_report.pdf
U.S. Census Bureau. (2018). QuickFacts: Columbus [city], Georgia. Retrieved from https://www.census.gov/quickfacts/columbuscitygeorgia
Chapter 9
Case Studies in General Education: Critical Timing for Critical Reading

Bridget Lepore
Reading is a foundational skill for college learning and adult life in the United States. Much—if not all—of the work in college builds on reading; however, many institutions do little to support the development of reading skills beyond reading remediation courses. The issue with this approach is that many students graduating from high school are not prepared for college-level reading (American College Testing [ACT], 2014, 2015, 2016, 2017; Jackson & Kurlaender, 2014; Moore, Slate, Edmonson, Combs, Bustamante, & Onwuegbuzie, 2010; National Endowment for the Arts [NEA], 2004, 2007; Wilkins, Hartman, Howland, & Sharma, 2010). Despite this lack of preparedness, many of these students will enroll in college. Further, even the most proficient readers could benefit from explicit, intentional reading instruction as they move through new material, new formats, and a new form of discourse. General education courses, due to their positioning in the early undergraduate years and the nature of their material, are an excellent place to integrate reading instruction.

To address the need to transition students from high school reading to critical college reading, the Kean University general education program designed a course focused on critical reading. This course draws on strengths-based instructional methods, culturally responsive teaching, and community-centered reading to help students build a sense of belonging and community. Further, the program increases students' awareness and knowledge of reading as well as their ability to manage the transition from high school reading to college critical reading. While college students are expected to have intact
literacy skills, the need for deeper and more critical reading is higher than ever for both academic life and the world beyond the classroom.
The Need for Critical Reading

Research on college and adult reading paints a picture of an American society that does not read well, at length, or even for entertainment. Large-scale studies from the NEA have identified issues in reading among adults throughout the United States, including a decline in reading skills and habits, lowered reading abilities in the workplace, and a widening split between readers and non-readers (NEA, 2004, 2007, 2008). Research specifically focused on high school and first-year college students also indicates that many students are not prepared for college-level reading and do not have the reading skills and habits that meet the benchmark of college readiness. In fact, a number of studies have found that between 48% and 52% of students entering college are not prepared for the reading that awaits them (ACT, 2014, 2015, 2016, 2017; Jackson & Kurlaender, 2014; Moore et al., 2010; NEA, 2004, 2007; Wilkins et al., 2010). It is important to understand that many of these students, if not most of them, are neither tested nor placed into remedial or developmental courses. As such, institutions may not be aware that these students need access to reading support. Being underprepared for college reading impacts academic progress, confidence, and success. Still, few institutions focus on reading, leaving students to attempt the transition from high school to college reading on their own (Bosley, 2008).

Despite the importance of reading, many adults enter college and the workforce with little background in and understanding of their own reading lives. For many students in the United States, reading instruction ends in elementary school, leaving them to learn to navigate the written world on their own.
This leaves adults with a set of literacy skills formed by early experiences at home and school, and by the context in which they live their lives (Freire & Macedo, 1987)—multimodal, grounded in technology, and highly social in nature (Belshaw, 2014; Leu, Kinzer, Coiro, Castek, & Henry, 2017). In a high school environment where teachers spend time planning lessons that encourage learning, or in highly structured work environments, this may not be an issue. In a professional or academic environment, however, a lack of critical reading skills can leave adults feeling uncomfortable and at risk of failure.

Adults in the United States need to be able to read critically to navigate an increasingly complex, information-filled world. The rapid adoption of digital technology and the ease with which information can be created and shared with large audiences have changed literacy and placed new demands on reading (Belshaw, 2014). Individuals have access to immense amounts of information of varying quality and content that requires skill to understand, evaluate, and analyze before it can be useful (Leu et al., 2017). Critical reading skills allow readers to read more deeply, move beyond the
decoding of words to understand their meaning, and develop a strong base for the digital and information literacy necessary for navigating our increasingly information-filled world.

In trying to understand how technology affects various aspects of life, several frameworks and avenues of investigation emerge. Studies of information literacy, digital literacy (or literacies), and multimodal literacy have all developed as ways of managing our increasingly information-filled world. Even so, traditional reading literacy has not become irrelevant; instead, it serves as a conduit to using digital technology as a part of multimodal literacy and as a foundation for information literacy (Belshaw, 2014). Traditional-age college students have grown up with this technology already in place, and it is part of their world. While we work to understand the impact of the technology on literacy, society, and life itself, our students have come of age in a time when they are highly connected, used to collaboration, able to create and share information seamlessly, and likely to multitask (Belshaw, 2014; Leu et al., 2017). This does not make them expert users of technology, but rather learners who think in technologically enabled ways, expecting that technology is available to them. Although most students have consistent access to technology, they are not technology experts, and they can benefit from explicit instruction and guidance on how to leverage the technology that they own for the goals that they have (Brooks & Pomerantz, 2017). The role of reading may be changing; however, making meaning of words is at the core of literacy at any age, especially in a technology-driven world.

While critical reading skills are important for a student's personal reading life, they are also necessary for academic success.
College students are increasingly asked to read more complex and specialized texts, which require higher-level comprehension as well as evaluation, synthesis, and analytical skills (National Center for Education Evaluation and Regional Assistance [NCEE], 2013; Springer, Wilson, & Dole, 2014). Yet, in the rare cases when colleges and universities focus on the development of reading skills, it is to remediate students who have not met all admissions requirements. This is concerning given the high number of students who are considered underprepared. With the national attention on college readiness and retention, preparing students for the type of reading demanded of them should be an area that institutions and their faculty prioritize.

Unfortunately, most university curricula treat reading as a skill developed in elementary education that does not require ongoing instructional support. Whereas other skill areas such as math and writing usually have first-year courses devoted to helping students transition from high school to college, reading tends to be neglected or relegated to content-focused courses where reading is used but not discussed beyond the content itself. As a result, students are often left without support and at risk of failure, when general education programs could instead be an ideal place for transitioning students to improve their critical reading skills.
Critical Reading and Community

At Kean University, a 4-year public university, the general education (GE) program has eight student learning outcomes (SLOs) that encompass skills including transdisciplinarity and critical thinking, quantitative and information literacy, communications, diversity, citizenship, and ethical judgment. All courses in the GE program are expected to align instruction and content to build these ways of thinking and skills. Each year, data collected from the GE courses are analyzed by the full-time faculty in the program to determine what types of changes can be made to instruction and materials in the next academic year. This is done through a series of structured analysis sessions and a review of specific SLOs at the course level, across multiple courses, over time, and across all first- and second-year student cohorts.
Learning from Assessment

The Kean University GE program assesses student work each year to identify areas of strength and need in order to determine what programmatic or course-level changes can be made to improve teaching and learning. Each of the program's eight SLOs has an associated rubric. The faculty members teaching the core GE courses consistently evaluate student work based on the rubrics and take part in assessment activities throughout the year, covering first- and second-year courses and the graduating capstone course.

A review of longitudinal data, including multiple learning outcomes, levels, and courses, revealed an interesting pattern. Students performed at a high level in advanced skill areas, yet many scored lower in areas linked to getting started with academic work. A closer look at the data indicated that students could demonstrate skills when explicitly given a topic to work with, but when it came time to plan and create unique works, they often struggled. This was most apparent in areas that demanded creative and critical thinking and where students needed to identify and locate sources of information. Students seemed to have trouble getting started, identifying topics of interest, understanding how to use models of writing and problem solving, and finding ways to approach assignments that went beyond the surface. Discussions with first-year students and faculty revealed that this could be due to a lack of critical reading, the kind of reading that builds a contextual understanding of academic content and the world itself.

One example of how this could happen is the ease of access to information through search engines. If we think of finding information and texts in a traditional library environment, faculty are likely to have grown up in a time when they needed to climb the bookshelves to get information. In doing so, they needed to plan, investigate what was available, and often physically locate the necessary texts.
When many faculty were completing their bachelor’s degrees, search engines were not available to streamline the search for articles, books, and other texts. Instead, individuals needed to craft a plan
to navigate the library system and, in doing so, to understand the context of the information. It was not enough to have a specific item in mind; the work required knowledge of scholarship and sources that were related and adjacent. As they located sources, they became aware of other sources on the same topic.

Today, due to the available technology, we use library search programs instead. These programs act as trampolines, pushing information in front of us without regard to context and relevance. It is easy to jump directly to the sought-after material without consideration or awareness of the material surrounding it. Undergraduate students use search engines on the Internet and through library systems to locate material, often without a research plan or systematic approach. These search programs make locating materials much simpler; however, in doing so, they place a heavy onus on the reader to determine what is valid, useful, and relevant. Navigating this material relies on critical reading and thinking skills.
"They Don't Read"

Discussions with students and faculty throughout the Kean University (KU) campus revealed that the faculty felt students were not reading at the depth and breadth expected of them. Students were quick to share their thoughts on reading, conveying that they did not enjoy reading academic work and instead tended to be frustrated and disheartened by readings they did not understand. Regarding their reading habits, consistent with the literature, many of the students tended to read brief informational texts or watch videos of similar depth (Burgess & Jones, 2010; Florence, Adesola, Alaba, & Adewumi, 2017; Huang, Capps, Blacklock, & Garza, 2014). It became clear that reading brief informational texts without reflection, discussion, or connection could be a reason why students struggled to get started on tasks that required evidence-based critical and creative thinking.

In one discussion, a faculty member jumped up from her seat and exclaimed, "They don't read!" While the KU students—and all students—do read, her comment was a reminder that current students live in a vastly different reading world than their faculty did as students. As a result, they develop skills within the context of their own lives that differ from the ones used in the academic world. At the same time, students may not be developing some of the critical reading skills necessary for college and workplace success. If our college students do not read widely, with the ability to read deeply, closely, and critically (all skills that college students need to use flexibly and automatically), they risk missing the rich, varied world of knowledge available to them. Especially in the first years of college, students have not yet had a chance to build the schema, habits, flexible reading capacities, and meta-cognitive processes that allow them to identify when their internal knowledge base is limited and they need to read more.
The world of digital information can simultaneously expand access to information and narrow one's view and experience, creating a situation where people can be successful in small areas while remaining unaware of the world immediately adjacent.
This assessment was the inspiration behind a course designed for first-year students, with the goal of introducing them to academic discourse and reminding them of reading and self-monitoring strategies they learned as children. Knowing that first-year students do best in an environment that is supportive and open, Kean University adopted a community-centered model that allows the classroom community to guide inquiry and discussion. The model integrated ideas from strengths-based and culturally responsive teaching as a way of engaging students. The course designers also looked to the research on reading in college, and critical reading specifically, to identify the key skills and types of readings that students would need familiarity with to move through their undergraduate years as well as their adult lives.

The result of this process was a course named Critical Reading and Community. This course uses a central book, lateral readings from multiple genres, and a community project to enable students to read critically, widely, and deeply while remaining student-led. The course culminates with a student-designed and -led community service project tied to the readings of the course. Unlike in traditional college courses, the students lead the discussion, with the instructor serving as facilitator and support. The faculty teaching the course are expected to monitor and introduce reading strategies as the course evolves. Class sessions tend to be active, with discussion and debate that move from the central book to current events and personal experiences as the group identifies and investigates a topic of interest and finds a way to contribute to their communities.
Critical Reading and Community (aka the Academic Skill–Building Book Club)

At the heart of the course is a book chosen by the instructor. Instructors are urged to choose a book that is meaningful to them, that has themes relevant to student life and experience, and that is written to be engaging and interesting. These books often come from areas other than the literary canon and can be popular fiction, non-fiction, or books that most people do not have an opportunity to read. Unlike in literature courses, the book is not the content of the course; instead, it serves as a facilitator for discussion and further investigation. Instructors have covered popular current novels, non-fiction works, and literary works that are more obscure and specialized. What each has in common is that these books challenge students to read and discuss them deeply, not at the informational level but rather by making meaning of what is said and unsaid, drawing on the context in which the book was written and the world in which we live.

As the course progresses, the discussions and work fall into two areas: (a) building reading skills, meta-cognition, and self-awareness; and (b) understanding the reading so that students discuss, debate, and develop ideas based on the book. The reading skills portion of the class reacquaints students with skills that they learned as young children
and then transitions these skills to the critical skills necessary for college learning. The participatory reading portion of the course uses the central text as a way to introduce information literacy skills such as locating sources, citation formats, note taking, and evaluation. As students build familiarity and then fluency with reading and information literacy skills, they begin to apply them as they find various media sources and increasingly reference scholarly sources such as essays and journal articles. By the midpoint of the course, students typically find themes that connect to their campus, local, or broader communities. As they move from reading generally to reading specifically and strategically about the theme(s) of interest, they are challenged to find a way to use what they have learned to serve their communities.

The community project takes place in the last few weeks of the course and is built on the themes and readings from the semester. The course requires students to initiate contact with local groups, find out their needs, and then plan how they can improve these communities. Projects have included coat and donation drives, campus events such as informational events and signature collections for national causes, and inspirational and stress-reduction activities. What matters for these projects is that students are in control, with faculty stepping back and acting as a liaison to campus resources in order to enable, rather than manage, the projects. These projects make a difference in the community served and build in the students a sense of accomplishment and competence, as well as confidence in their ability to use sources to plan and begin a project of value. For students making the transition from a high school environment, this self-awareness and sense of accomplishment based on texts read are critical.
They bridge the gap between theory and practice, helping students build personas as scholars while enabling their entry into scholarly discourse. Students enter college often wanting to be a better version of themselves, expecting a chance to challenge their limits and build new relationships and connections. When we meet them where they are and support them in reaching for their futures, they find a way to engage in learning beyond "getting a grade." For many of our students, this course is the first time that they have the opportunity to think deeply using academic tools in an environment that rewards critical analysis and supports them in taking risks. In fact, students who have returned to talk about the course have commented that they miss the chance to read and talk with others in ways that make a difference in their lives.

For faculty, teaching this course offers a chance to experience reading and community with first-year students, with the satisfaction of ushering new students into academic discourse. For content experts who often do not have a formal background in teaching, the chance to step back from being the expert and instead focus on supporting student learning can be a new experience. There is something powerful in talking about what you care most about with others when it is done in a way that builds a sense of confidence and community. Due in part to the focus on strengths and responsiveness, faculty are often very aware of their students and get to know them better than they can in a typical course.
To be fair, not all students enjoy this course. For some, the challenge of reading in a community-centric classroom setting is difficult. Not all students are able to engage with the book in front of them, and while the course can help them learn to manage readings that are not of personal interest, some students are not ready or able to move past this. Finally, some students cling firmly to the idea of a lecture-based classroom where the instructor is expected to perform in class and the students take tests to demonstrate their knowledge. The flexibility of the course, which is one of its strengths, does not always suit learners who have spent their academic lives in schools that reward memorization and quiet compliance. Moving to independent reading, even in a safe, strengths-based, and responsive environment, with the ensuing critical thinking demands, may not be something that all students are ready for, want, or value.
Teaching Critical Reading and Community

When we designed this course, we felt strongly that any faculty member on campus with an interest in reading and a desire to learn collaboratively with students should be able to teach it. Since the course is not remedial, the students entering the classroom have intact literacy skills, although reading abilities vary greatly among students. Faculty are not expected to be reading specialists and are asked to integrate common critical reading strategies into their teaching, including previewing and prediction, outlining and annotation, and building context by reading laterally. Faculty members teaching the course are expected to be familiar with active learning activities; be comfortable facilitating student-led group discussions; and have a wide knowledge of their discipline, current society, and available readings. To date, the course has been taught by faculty with expertise in a variety of disciplines, including media and journalism, literature, health, technology, and creative writing.

Faculty are asked to teach in a way that maximizes strengths and is responsive to the students and how they learn. Strengths-based teaching means that the focus is on identifying what students bring with them in terms of skills, interests, and experiences. In the classroom, it means expecting students to contribute to the group, recognizing those contributions, and building upon them. Consistent with ideas on strengths-based education, faculty are also learners in this environment—learning about the topics and themes, about their students, and about themselves as teachers. Culturally responsive teaching practices are also a part of this course. Culturally responsive teaching means that we find how our students learn and the tools that work for them and use these as a foundation (Ladson-Billings, 1994).
In Critical Reading and Community, strengths-based and culturally responsive teaching means helping students understand their own reading lives, identify their strengths and what works for them, and incorporate this knowledge into the classroom as a core part of teaching and learning. Further, given that Kean University is highly diverse with a large proportion
of first-generation students, the course fosters a sense of community among students. In turn, this helps them view themselves as scholars who deserve to be at the institution and who are capable of college-level work. The course also helps to build connections between students as well as with faculty and resources across campus. Discussions with past students indicate that working collaboratively with a faculty member and other students in their first year has significantly contributed to the successful start of their college careers.
Outcomes

Assessing this course using standard student learning outcomes is challenging. Kean University (KU) continues to wrestle with how best to design student work samples that demonstrate what students understand while adhering to the rubrics that are in place, which do not incorporate reading as an explicit domain. Given the nature of the course, KU was not looking to diagnose or remediate reading issues, which eliminated much of the standardized testing for reading. Further, using a placement test was not appropriate given that students were already enrolled at the university. After careful consideration, KU chose to use its standard rubric for critical thinking to evaluate how students performed. It found that critical thinking scores ranged widely and were inconsistent. This makes sense in light of our other first-year data, which revealed that students are just approaching the first level on the GE rubrics in a variety of courses, except in areas where they have had consistent, explicit instruction such as writing, public speaking, and math. Still, some areas of critical thinking tended to be higher for students in Critical Reading and Community than in other first-year courses. This trend needs more review and analysis before the institution can draw further conclusions. While this course fits within the institution's GE program, steps must be taken to ensure the use of the proper student learning outcomes, student work samples, rubrics, and other instruments in order to generate quality data that will assist in planning. A review of student and instructor experiences, as well as the overall work quality, presented a clearer picture. Student feedback was generally positive, with students able to articulate what they learned about themselves and their reading habits, the skills they learned, and how the course improved their academic work.
Faculty discussed the course in positive terms, including comments on students developing independence, judgment, and curiosity. In addition, many faculty noted that attendance, engagement, and work completion were all higher in this course than in other first-year GE courses. The work samples showed that students were generally willing and able to think, document, locate, and read using critical reading strategies in a course where these skills were required. While these are not formal measures of assessment, they are
indicators that the course is working. Further assessment and evaluation are necessary to determine if and how the skills in this course transfer to other courses and settings.
Future Directions

Designing and piloting this course, which is now part of KU's general education (GE) offerings, was a step forward in helping students understand their reading habits and in preparing them for the reading demands of their college and professional lives. Even so, more work needs to be done, especially with first-year courses. While there is much in the literature on reading deficits and needs, there is much less on how students read and develop literacy and the other skills that they bring to college. The scholarly community knows little about the reading lives of students in and out of the classroom. We are beginning to map out what literacy and critical reading look like in the age of technology and how the multimodal world in which we live, with its ease of access to information, affects learning. Further research on these topics is necessary in order to better understand and prepare our students, and the faculty who teach them, to read critically. Just as literacy is changing, so is the role of college faculty. While disciplinary expertise remains the focus of college faculty, many faculty members spend much of their time teaching. As the academy continues to evolve, meeting the needs of our students may mean changing how we envision and deliver course content. For this reason, Critical Reading and Community is designed to evolve as well, incorporating new materials, teaching strategies, readings, themes, and ideas. For this to happen, faculty need support in their own professional development. Preparing college faculty to support the development of critical reading skills is a crucial part of increasing the reading capacity of students. Given the need for all students to be prepared for critical reading, it is necessary to consider how to scale the course without losing its flexibility while also ensuring that there are faculty who feel comfortable teaching it.
Part of this development should include support for the incorporation of reading instruction into all GE courses. Explicit instruction, aimed at helping students transition between different types of readings, could easily be integrated throughout the GE curriculum. Teaching students how to identify the format and genre of a source, as well as strategies they can use with those sources, can make the difference between students who read regularly and students who do not.
Conclusion

Given the large number of high school seniors who are not prepared for college reading and who may not test into developmental or remedial courses, it is important for institutions to find ways to integrate reading instruction and support for all incoming
first-year students. Kean University created a course to focus on critical reading skills as a result of assessment data that indicated a need for students to read broadly, deeply, and critically in order to succeed in their coursework and beyond. While the course has been well received, one course of study in critical reading is not going to ensure that students are able and willing to read the material they encounter in college and in their adult reading lives. One course can, however, reconnect students to reading skills that they learned as children, build confidence in their ability to engage in academic discourse, and encourage self-monitoring of their reading. For students to be able to read closely, carefully, critically, and, most importantly, effectively, they need to practice and transfer their reading skills just as with any skill in life. Since reading is a foundational skill for all college coursework and for adult life in general, it is well worth the time it takes to ensure students are capable, conscientious, and critical readers. There is a need for higher education to adopt a new mindset toward reading, and general education programs are in a unique position to help prepare our students for critical reading at this critical time as literacy and our world itself continue to change.
References

American College Testing. (2014). The condition of college & career readiness 2014. Retrieved from https://www.act.org/content/dam/act/unsecured/documents/CCCR14-NationalReadinessRpt.pdf
American College Testing. (2015). The condition of college & career readiness 2015. Retrieved from https://www.act.org/content/dam/act/unsecured/documents/CCCR15-NationalReadinessRpt.pdf
American College Testing. (2016). The condition of college & career readiness 2016. Retrieved from http://www.act.org/content/dam/act/unsecured/documents/CCCR_National_2016.pdf
American College Testing. (2017). The condition of college & career readiness 2017. Retrieved from http://www.act.org/content/dam/act/unsecured/documents/cccr2017/CCCR_National_2017.pdf
Belshaw, D. (2014). The essential elements of digital literacies. Retrieved from http://literaci.es
Bosley, L. (2008). "I don't teach reading": Critical reading instruction in composition courses. Literacy Research and Instruction, 47(4), 285–308. doi:10.1080/19388070802332861
Brooks, D. C., & Pomerantz, J. (2017). ECAR study of undergraduate students and information technology [Research report]. Louisville, CO: ECAR.
Burgess, S. R., & Jones, K. K. (2010). Reading and media habits of college students varying by sex and remedial status. College Student Journal, 44(2), 492–508.
Florence, F. O., Adesola, O. A., Alaba, B., & Adewumi, O. M. (2017). A survey on the reading habits among colleges of education students in the information age. Journal of Education and Practice, 8(8), 106–110.
Freire, P., & Macedo, D. (1987). Literacy: Reading the word and the world. South Hadley, MA: Bergin & Garvey.
Huang, S., Capps, M., Blacklock, J., & Garza, M. (2014). Reading habits of college students in the United States. Reading Psychology, 35(5), 437–467. doi:10.1080/02702711.2012.739593
Jackson, J., & Kurlaender, M. (2014). College readiness and college completion at broad access four-year institutions. American Behavioral Scientist, 58(8), 947–971.
Ladson-Billings, G. (1994). The dreamkeepers. San Francisco, CA: Jossey-Bass.
Leu, D., Kinzer, C., Coiro, J., Castek, J., & Henry, L. (2017). New literacies: A dual-level theory of the changing nature of literacy, instruction, and assessment. Journal of Education, 197(2), 1–18. doi:10.1177/002205741719700202
Moore, G., Slate, J., Edmonson, S., Combs, J., Bustamante, R., & Onwuegbuzie, A. (2010). High school students and their lack of preparedness for college: A statewide study. Education and Urban Society, 42(7), 817–838. doi:10.1177/0013124510379619
National Center for Education Evaluation and Regional Assistance. (2013). What does it really mean to be college and work ready? The English literacy required of first-year community college students. Retrieved from http://ncee.org/wp-content/uploads/2013/05/NCEE_EnglishReport_May2013.pdf
National Endowment for the Arts. (2004). Reading at risk: A survey of literary reading in America. Retrieved from https://www.arts.gov/sites/default/files/ReadingAtRisk.pdf
National Endowment for the Arts. (2007). To read or not to read: A question of national consequence. Retrieved from https://www.arts.gov/sites/default/files/ToRead.pdf
National Endowment for the Arts. (2008). Reading on the rise: A new chapter in American literacy. Retrieved from https://www.arts.gov/sites/default/files/ReadingonRise.pdf
Springer, S. E., Wilson, T. J., & Dole, J. A. (2014). Ready or not: Recognizing and preparing college-ready students. Journal of Adolescent & Adult Literacy, 58(4), 299–307. doi:10.1002/jaal.363
Wilkins, C., Hartman, J., Howland, N., & Sharma, N. (2010). How prepared are students for college-level reading? Applying a Lexile-based approach (Issues & Answers Report, REL 2010–No. 094). Washington, DC: U.S. Department of Education, Institute of Education Sciences, National Center for Education Evaluation and Regional Assistance, Regional Educational Laboratory Southwest. Retrieved from http://ies.ed.gov/ncee/edlabs
Chapter 10
Case Studies in General Education: Integrating General Education and the Majors

Henriette M. Pranger
General education refers to a set of skills, knowledge, habits of mind, and values that prepares students for success in their majors. General education courses are typically offered to all students, regardless of their major(s). This chapter proposes that general education remains relevant when educators teach it within the context of students’ professional and personal goals instead of treating it as distinct from learning in the major. It also presents a case study that illustrates how one college offers most of its general education within the context of the major (Hart Research Associates, 2016) and then reinforces it in cross- or interdisciplinary and high-impact experiences (Kuh, 2008). At first glance, it might appear that the higher education community has reached a consensus on what students should learn in general education courses and how that learning should be assessed. From 2007 to 2009, faculty teams from over 100 institutions participated in the VALUE rubric development process led by the Association of American Colleges and Universities (AAC&U). The resulting 16 rubrics define general education outcomes (e.g., inquiry and analysis, critical thinking, written communication, oral communication, reading) and provide a common way to measure student learning across institutions. Since the AAC&U made the VALUE rubrics available, over 32,000 people from 5,600 organizations and 3,300 colleges around the world have accessed these resources (AAC&U, 2018a). The availability of such validated rubrics
makes it possible for educators to measure student achievement in some common areas with confidence, compare student achievement across institutions, and set benchmarks. This information tells key stakeholders what students are learning and when curricular improvement may be needed. Despite the popularity of these rubrics, however, a review of several college catalogs quickly reveals that there is no clear consensus across all institutions about what constitutes general education. The New England regional accreditation requirements for reporting on general education indicate that it is firmly established in higher education. For example, the Data First forms from the New England Commission of Higher Education (NECHE) require that colleges provide evidence of student achievement (a) at the institutional level, (b) in general education at the undergraduate level, and (c) in relation to learning in the major. Reporting at these three levels reflects a common view of general education as a necessary aspect of an undergraduate program. Yet, even with common definitions, rubrics, and reporting requirements, some institutions define student outcomes differently. The three levels tend to be found at institutions where general education courses are separate and distinct from courses in the major. These distinctions become apparent when students want to get their general education courses out of the way so that they can pursue their major(s) or when stakeholders (e.g., parents, board members) view general education courses as unrelated to employability skills taught in the major (Shinn, 2014). In traditional institutions, general education outcomes are introduced early in the plan of study and, in some cases, reinforced in the major. However, this is not the only possible approach.
Certain institutions have redesigned their general education courses to diverge from traditional humanities frameworks and focus more on current, real, and complex problems to help prepare students for employability and citizenship in a global community. Some public institutions, such as New College in Florida and Evergreen State College in Washington, offer interdisciplinary and integrative curricula grounded in a liberal arts curriculum. Others define student success in ways unique to their missions. Consequently, the New England regional accreditors recently revised their data reporting forms to include a new section. Specifically, the Data First forms for Standard 8, Educational Effectiveness, afford institutions an opportunity to report data on "other measures of success important to institutional mission" (Commission on Higher Education, n.d.). Goodwin College provides a forward-thinking example of a regionally accredited, nontraditional, relatively new institution where the three-level framework is no longer effective. Over the past decade, Goodwin's offerings evolved from clearly separate major and general education requirements to a more integrated approach in which general education is offered in the context of the major, for the following reasons:
• Pressure to integrate general education (AAC&U, 2018b). Over the last decade, student enrollment patterns at the college showed a preference
for career-focused courses. For example, courses related to quantitative reasoning are replacing traditionally required courses in pure mathematics. Information literacy assignments across a series of courses are preferred over one class in computer or information literacy. In addition, the academic and student support staff want to collaborate with faculty to achieve the college mission.
• Student requests for a curriculum that is related to their professional and personal goals. Nontraditional adult learners ask with increasing frequency for meaningful and personally relevant assignments during the first class meeting and in the first course in their program. Credit-for-prior-learning options have contributed to this desire for relevancy, because one course outline does not meet all of the students' needs. After students earn credit for prior collegiate learning, they see the direct application of the remaining courses to their goals because degree planning is part of Goodwin's process. An increase in volunteer, service learning, internship, and clinical requirements has also contributed to students recognizing the value of learning that feels applicable, transferable, and practical. As a result, Goodwin continuously seeks new ways to show a clear connection between students' new learning and their personal and professional goals. Career planning workshops are offered to all students year-round and are embedded in the curriculum.
• Stakeholders understand general education when reframed as career/employability skills. Corporate education and American Council on Education (ACE) credit recommendations for non-collegiate learning initiatives (e.g., StraighterLine), massive open online courses (MOOCs), badges, and boot camps are being used to replace formal, college-based classroom learning and continuing education. Why? Because too many students are not prepared for college and college graduates are not prepared for work, so students seek out additional ways to develop work competencies (Twenge, 2018). Goodwin faculty continue to experiment with ways to design relevant and transformative learning experiences, such as service learning, credit for experiential learning, involving community-based advisory boards in programs, and increasing requirements for clinical and internship experiences (Green, 2018). Often, older students seem more motivated and committed to a course or an assignment when they see a clear alignment between their investment (e.g., time, money, effort) and the directly applicable knowledge and skills gained for their work or personal life. For example, traditional English composition and literature requirements have become writing requirements; the reading, analysis, and
comprehension skills are retained, but these are taught in the context of a career (e.g., grant writing, investigative report writing). The latter courses have more robust enrollments and completion rates. Another example is the rise of competency-based education at other institutions, which is offered at the student's own pace with learning coaches and mentors. In such models, the registrar's office replaces traditional transcripts with skills-based transcripts that help graduates document their prior learning and clearly indicate their new learning to a potential employer. These ideas are under consideration but have not yet been implemented at Goodwin. However, faculty strive to relate learning to students' personal and career goals, with the belief that these actions help sustain student effort, including in general education courses.
Goodwin College

Goodwin College opened in 1999 with the mission of "educating a diverse student population in a dynamic environment that aligns education, commerce and community. [Its] innovative programs of study prepare students for professional careers while promoting lifelong learning and civic responsibility. As a nurturing college community, [it] challenge[s] students, faculty, staff, and administration to fully realize their highest academic, professional, and personal potential" (Goodwin College, n.d.). Goodwin offers 26 career-focused certificates, aligned with 19 associate degrees, 11 baccalaureate degrees, one post-baccalaureate certificate, and two graduate degrees. Thirteen programs maintain specialized programmatic accreditation. Every degree program requires multiple internships or clinical experiences and a capstone experience in which both general education and major program outcomes are assessed (Goodwin College, n.d.). Goodwin continues to be a career-focused, nonprofit, private college known for empowering hardworking students to become sought-after employees. Most of the college's programs are open enrollment, with selective admission in use for only a small number. Enrollment has grown from 300 students in 1999 to over 3,500 in 2018 (Goodwin College, 2017). More than half are first-generation college students, with a large representation of female students and students of color. When Goodwin first opened its doors, the general education curriculum mirrored the State of Connecticut's baccalaureate standards requiring "a balanced distribution of required courses or restricted electives in the humanities, arts, natural and physical sciences, mathematics, and social sciences comprising at least 25 percent of the degree and 33 percent of the distribution degree."
General education at Goodwin evolved from very specific course requirements that were separate and distinct from learning in the major to a range of options that become more integrated with learning in the major every year.
Early: Core Model

One benefit of the earlier core model (Waltzer, 2000) was that all students completed the same set of courses managed by the General Education department (e.g., English, math, and humanities faculty). Goodwin's early degrees had few if any electives. As a result of this standard approach, the scheduling of classes, ordering of books, teaching, and assessment practices were also standardized. An unforeseen disadvantage was the creation of a campus culture in which general education outcomes were perceived as the responsibility of the general education faculty. The decision to centralize all responsibility for general education in one academic department inadvertently contributed to a silo mentality, even at a relatively small institution. For example, the medical faculty at this time did not see themselves as writing instructors, nor were they inclined to set aside critical skill development time to teach students about writing.
Later: Mixed Core and Fluid Model

Goodwin added new academic programs in response to local employer requests. It designed certificates that were aligned with associate degrees and eventually offered associate degrees that fit into baccalaureate degrees as well. Faculty with doctoral degrees were hired to teach the general education courses; to design new courses, including upper-level electives; and to contribute to Goodwin's development of a baccalaureate culture. The faculty members expressed a desire to offer more classes, especially in their areas of expertise. They persuaded the administration of the advantages of providing students with more choices. As a result, specific, narrow general education requirements were amended to permit more options. For example, the second-level ENG 102 (i.e., the second three credits in English after English 101) was amended to English 1XX or higher (see Table 10.1 for example options). The general education faculty added additional upper- and lower-level electives.
General Education Faculty Versus Faculty in the Major

The general education faculty formed a committee that further defined the general education outcomes (e.g., measurable outcomes, key assignments, and related rubrics). All faculty members from every program could submit a course proposal to the new committee for review. A course in the major could be approved to meet general education requirements. That committee, under the General Education department chair, drafted a document titled "The Educated Person Goodwin College Student" that further embedded the required general education "perspectives and competencies" to become a part of every student's plan of study.
Table 10.1. Examples of General Education Changes

First 3 English Credits
  Core Model (2010): English 101 Composition
  Mixed Core and Fluid (Distributive) Model (2018): English 101

Second 3 English Credits
  Core Model (2010): English 102
  Mixed Core and Fluid (Distributive) Model (2018): English 1XX (Writing Elective), chosen from: ENG 102; ENG 103 Writing Personal Biographies; ENG 115 Writing for the Human Services Professional; CJ 106/ENG 106 Investigative Report Writing; BUS/ENG 212 Grant Writing

Electives
  Core Model (2010): few if any electives
  Mixed Core and Fluid (Distributive) Model (2018): courses in the major approved to meet gen ed competencies, plus gen ed electives such as ENG 225 Creative Writing, ENG 230 American Literature I, and ENG 260 Stage, Screen, and Television Drama

Curriculum Drivers
  Core Model (2010): Connecticut State Regulations
  Mixed Core and Fluid (Distributive) Model (2018): Connecticut State Regulations; students transferring to state colleges; PhD gen ed faculty areas of expertise; gen ed faculty seeking student elective choice; faculty in the major obtaining gen ed competency approval

2019 and Beyond: Integrated
The current proposal under discussion is the integration of institutional outcomes and general education outcomes, taught across the curriculum:
• Communication: Students will be able to effectively express and exchange ideas through various modes of communication, including written, oral, and digital.
• Information Literacy: Students will be able to identify relevant information, evaluate alternatives, synthesize findings, and apply solutions.
• Career Readiness: Students will be able to apply their knowledge, skills, and abilities in their chosen field of study.
Plans include assessing these outcomes in the major (e.g., English 1XX assessed at the associate level and then a final, third Advanced Writing course at the baccalaureate level, where writing competency will also be assessed).
What resulted was a mix of required and distributed options (Table 10.1). While the general education faculty created courses in their fields of interest and worked to convince potential students of these courses' merit (e.g., Literature of the Caribbean), faculty in the major created courses that met the general education outcomes but taught these in the context of the relevant career. For example, the General Education department offered COM 101, Public Speaking, and COM 105, Interpersonal Communications, compared to BUS 210, Organizational Communications (offered by the Business department), and English 102, Writing, compared to CJ 106, Investigative Report Writing, or BUS 212, Grant Writing (for business and human services students). The general education faculty were disappointed to discover that the focused, specialty courses (e.g., Shakespeare, poetry) rarely attracted enough students to be offered frequently, because the career-focused students expressed frustration at taking an introductory psychology course, much less a literature course. Students wanted to begin their course of study with courses in their major.
The general education faculty became quite innovative in an effort to engage students. For example, Shakespeare became ENG 260, Stage, Screen, and Television Drama. The full-time humanities instructor developed a history course focused on the town and offered an Italian art history course that met at a local restaurant. Despite these efforts, enrollment was consistent in required courses but remained low in general education upper- and lower-division electives. Students preferred job-related electives offered by the faculty in their major. There were several successful cross-department collaborations between the English faculty and faculty in the major. For example, the medical faculty collaborated on the creation and teaching of writing assignments across the major curriculum. During this period, one valedictorian thanked his humanities teacher in his graduation speech. He recounted the time he asked his teacher, "Why do I have to take this humanities course when I just want to be a nurse?" The teacher replied, "After you give your patients their medicine, what are you going to talk about?" This conversation is representative of campus attitudes about the relevancy of general education. As more courses in the major were approved to meet general education requirements, the traditional model of general education continued to evolve. The math teachers who offered statistics, which was required for every major, insisted on teaching students how to calculate z scores. Faculty in the major wanted the math faculty to teach students how to read a journal article's results section, which presents statistics, often with little explanation. The inability to reach a meeting of the minds led to the removal of the statistics requirement and the addition of applied research courses in each major.
Another result of this campus discussion was the decision to use the American Psychological Association (APA) style rather than that of the Modern Language Association (MLA) in the first six credits of English, because MLA was not used outside of English courses. When Goodwin was approved to offer baccalaureate degrees, the plans of study had directed electives in the major but intentionally included more open electives (six to nine credits), which could be upper-level general education courses. However, since the major courses also included options to meet the general education elective, the general education faculty and the faculty in the major were competing for "competency course approval." Courses in the major that met general education competencies appeared frequently in the schedule, while general education course enrollment began to decline. For example, multiple sections of Medical Law and Ethics, which was approved to meet the general education ethics and philosophy requirement, run every semester and have waiting lists, while the traditional PHIL 101 does not run as often. Students were convinced of the applicability of Medical Law and Ethics and its usefulness to their career goals. Faculty became determined to figure out how to meet general education learning outcomes in a career context.
Moving Forward: Integrating General Education and Major Courses

A decade ago, Goodwin's general education program consisted of prescriptive, limited, and directed requirements in the humanities and other subjects associated with a liberal arts foundation. Since then, campus conversations involving students, faculty, and administrators shifted the curriculum from a mix of directed and distributed requirements to a more integrative approach to general education. For example, the required Humanities 101 was replaced with several humanities options. More importantly, Goodwin began hosting a fall humanities festival over several days, in which faculty and students celebrated how each program develops multicultural agility/intelligence and cultural competence that prepares students for career success in a global economy. This idea of integrating general education and career outcomes is not unique to Goodwin. In 2015, the Association of American Colleges and Universities published "General Education Transformed: How We Can, Why We Must," which addressed how general education can foster essential capacities for career, citizenship, and global engagement. Another publication, "Open and Integrative," discussed how technology can be used to link learning experiences that have typically been disconnected and to create new integrative contexts for transformative learning (Bass & Eynon, 2016). Over time, as Goodwin improved in data collection and analysis, the faculty saw the benefits of moving from individual efforts (e.g., rubrics, tests) to shared efforts (e.g., cross-program VALUE rubrics and professional licensing exams). The college recently added a new position, learning outcomes coordinator, which will support the use of technology in assessment efforts. The administration recognized the financial and educational value of combining cross-disciplinary experiences.
These interdisciplinary efforts are most successful in upper-level courses. For example, individual program-specific research courses were integrated into a single research course offered across all majors. Recently, a new vice president of academic affairs restarted the conversation of general education outcomes and assessment, with an emphasis on creating a shared vision for the faculty, and proposed a new plan:

• A shared vision. Goodwin's academic leadership team, which has representation from all academic departments and key academic and student support services, collaborated with the faculty senate to identify three critical knowledge and skill areas that will be taught across the curriculum and reinforced through cocurricular and experiential learning activities, thereby merging general education and institutional outcomes. As Goodwin builds its capacity to assess student achievement in these areas, it is anticipated that the number of institutional outcomes will increase, possibly to five. Goodwin is in Phase 1 of this multiple-phase project, and these student learning outcomes have become integral features of its new academic plan.
• Integrated institutional and general education outcomes. The leadership and staff that comprise Goodwin's academic and student support services areas will unite around a set of student learning outcomes that apply to all programs (e.g., courses should build on and be more aligned with each other).
• Additional professional development, assessment, and curriculum design resources. These will be offered for all full- and part-time faculty; outcomes will be reinforced across the curriculum, and a high priority will be to continue to fund professional development for faculty. For example, Goodwin created a grant-funded Universal Design in Learning (UDL) Fellows program and hosted a regional UDL conference this past fall. These efforts are consistent with NILOA's 2015 publication on the importance of using evidence-based teaching pedagogy to ensure quality teaching in general education (Kuh, Ikenberry, Jankowski, Cain, Ewell, Hutchings, & Kinzie, 2015).
• Courses in the major offered as early as the first semester that meet general education outcomes. Adult students or traditional-age students with clear career goals want these courses (e.g., learning communities).
• Upper-level cross-disciplinary courses. These may include a research course or one in an area of personal development, such as personal finance or death and dying.
• Flexible assignment design, with clear connections to students' career goals. These courses see higher enrollments and more completions. Student achievement in these courses will continue to be assessed.
• Continued staff and faculty collaboration on the design and implementation of high-impact practices. For example, the college has a first-year experience course designed and implemented by faculty and student services staff, and every academic program has internship/clinical experiences as well as the required community-based capstone course.
This case study described Goodwin College's shift from an overly prescriptive general education curriculum, based in traditional humanities areas, through a stage in which some courses were more program specific, to the current situation in which institutional and general education outcomes are combined, often with a career-focused alternative course. The institution anticipates future challenges related to the coherence of the curriculum, designing meaningful assessment across courses and programs, and rethinking faculty staffing and department organizational structures.
Conclusion

It is important to participate in the conversation about the future of general education. Keeping general education relevant to students while encouraging appropriate breadth and depth in education—including the ability to see the world, especially their community, from multiple perspectives—is increasingly difficult as that world becomes more complex. Answers to questions about the relevancy of general education will affect the quality of life in the communities where we work and live. Students will continue to be required to engage in lifelong learning, to work collaboratively (even internationally) on the resolution of complex problems, and potentially to reenvision themselves, given the rapidly changing economy, technology, and occupations. Goodwin believes that these values and skills can be developed by integrating general education and career-focused outcomes. The institution intends to develop graduates who can apply and transfer their skills across work and life environments.

Attending college is one of the most significant financial and personal decisions of one's life (Bestavros, 2018). This is a heavy responsibility for educators. This book offers the reader ideas to consider as educators determine what should be next for general education curriculum reform. Part of the answer is to recognize the extra-collegiate learning that students bring with them, while also ensuring that the effort, time, and financial resources they apply during college result in new, personally significant, and socially relevant learning outcomes in all courses that comprise their plans of study. At Goodwin, this means continuing to assess and improve how general education is offered within the context of the major and then reinforced in cross- or interdisciplinary and high-impact experiences.
Notes

1. Conn. Agencies Regs. § 10a-34-15.
References

Association of American Colleges and Universities. (2018a). VALUE rubric development project. Retrieved from http://www.aacu.org/value/rubrics

Association of American Colleges and Universities. (2018b). Integrative learning. Retrieved from https://www.aacu.org/resources/integrative-learning

Bass, R., & Eynon, B. (2016). Open and integrative: Designing liberal education for the new digital ecosystem. Washington, DC: Association of American Colleges and Universities.

Bestavros, A. (2018, August 21). It's time to tell students what they need to know. The Washington Post. Retrieved from https://www.washingtonpost.com/news/grade-point/wp/2018/08/21/its-time-to-tell-students-what-they-need-to-know/?utm_term=.ce566c7379af

Commission on Institutions of Higher Education. (n.d.). Institutional data forms. Retrieved from https://cihe.neasc.org/institutional-reports-resources/institutional-data-forms

Goodwin College. (2017). Institutional profile 2017–2018. Retrieved from https://www.goodwin.edu/files/pdfs/oie/institutional-profile-2017-18.pdf

Goodwin College. (n.d.). About Goodwin. Retrieved from https://www.goodwin.edu/about/mission

Green, E. (2018, April 5). With changing students and times, colleges are going back to school. The New York Times. Retrieved from https://www.nytimes.com/2018/04/05/education/learning/colleges-adapt-changing-students.html

Hart Research Associates. (2016). Recent trends in general education design, learning outcomes, and teaching approaches. Retrieved from https://www.aacu.org/publications-research/publications/recent-trends-general-education-design-learning-outcomes-and

Kuh, G. D. (2008). High-impact educational practices: A brief overview. Retrieved from https://www.aacu.org/leap/hips

Kuh, G. D., Ikenberry, S. O., Jankowski, N. A., Cain, T. R., Ewell, P. T., Hutchings, P., & Kinzie, J. (2015). Using evidence of student learning to improve higher education. San Francisco, CA: Jossey-Bass.

Shinn, L. (2014, January/February). Liberal education vs. professional education: The false choice. Trusteeship. Retrieved from https://agb.org/trusteeship-article/liberal-education-vs-professional-education-the-false-choice/

Twenge, J. (2018, June 5). What's the biggest challenge for colleges and universities? The New York Times. Retrieved from https://www.nytimes.com/2018/06/05/education/learning/biggest-challenge-for-colleges-and-universities.html

Waltzer, K. (2000, November). Liberal general education at Michigan State University—Integrative studies. Paper presented at the 2000 annual meeting of the Council of Colleges of Arts and Sciences, Toronto, ON.
Chapter 11
Guiding Generation Z’s Future: Transforming Student Learning Opportunities to Career Outcomes Jeremy Ashton Houska and Kris Gunawan
A vital part of advancing higher education in the 21st century is ensuring that students are well trained and prepared to face the competitive job market.1 Colleges and universities should play an active role in making sure that students develop the skills necessary to work successfully in their respective fields, and that employers are able to find them. The U.S. Bureau of Labor Statistics (2017) projected that employment will continue to grow between 2016 and 2026 at a compound rate of 0.6% per year (i.e., an increase from 159.2 million people employed in 2016 to 169.7 million by 2026). Many of the expanding career fields will be in the healthcare and technology sectors (Bureau of Labor Statistics, 2018). With job opportunities growing over the next decade, institutional stakeholders should be mindful of the current job market and adapt curricula to train today's students effectively.

Stepping away from the discussion of general education specifically, this chapter examines learning outcomes more broadly, in particular as they relate to career outcomes for Generation Z. The focus on college students and their ability to land jobs has mainly targeted the millennials (i.e., those born between 1980 and 1994, also known as Generation Y). Although this generation continues to be an important cohort to assess, many of its members are out of college, have completed their educational training, and have likely moved on from a degree to the working world. A new generation
of college students known as Generation Z (i.e., those born after 1994) is at the forefront of the changing job market. Institutional stakeholders and employers who work with this new generation must continue to reevaluate the types of training necessary to enhance their professional development and to avoid issues that could hinder them from getting hired.

Much of what is known about the status of the job market has mainly targeted millennials because of their strong influence on today's labor force. In fact, roughly one in three American workers is a millennial, according to the Pew Research Center (Fry, 2018). Millennials were largely raised by the baby boomer generation (i.e., those born between 1946 and 1964) and grew up with a strong support system from their parents. Many were raised with the philosophy that everyone is a winner: a "trophy" mentality in which they received prizes for trying even when they did not win. Consequently, this upbringing has fed an assumption that millennials are entitled, overconfident, and prone to expect everything to center on them.

Millennials are also well connected to the Internet and skillful with social media. In fact, they are considered the first adopters of social media (e.g., MySpace and Facebook) and continue to use various online platforms (e.g., YouTube, vlogging) to communicate with others. They have become the most educated cohort to date, delaying marriage and family for career prospects. Even so, millennials have been portrayed negatively in the media, due in part to the massive student loan debt they accrued in college and their high unemployment rate (Kasperkevic, 2017). One reason millennials struggled to find employment after college was the economic recession in 2008.
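As a quick sanity check on the BLS projection cited above, the implied compound annual growth rate can be recovered from the two employment figures. This is an illustrative calculation, not part of the original report:

```python
# Recover the compound annual growth rate (CAGR) implied by the
# BLS figures cited in the text: 159.2M employed (2016) -> 169.7M (2026).
start, end, years = 159.2, 169.7, 10

cagr = (end / start) ** (1 / years) - 1
print(f"Implied growth rate: {cagr:.2%} per year")  # about 0.64% per year
```

Rounded to one decimal place, this matches the 0.6% per year stated in the text.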
However, another reason was a disconnect between the skills millennials obtained with their degrees and employers' expectations for hiring (McNamara, 2009). This concern is gradually changing as colleges and universities tackle the issue by examining student learning outcomes. Much of the focus, however, remains on the millennials, even as the new cohort has shifted to Generation Z. These post-millennials are often lumped in with the millennial generation because of the dearth of information about them at this early stage of their careers. In fact, Generation Z college students who recently graduated are now 22 to 23 years of age (assuming they were born in 1995 and joined the workforce in 2018). However, as the post-millennial generation continues to grow and express itself collectively, institutional stakeholders should take note of how its members navigate learning and form their professional identities.

Before examining the characteristics of Generation Z and how student learning opportunities can help shape their ability to succeed in the workforce, it is important to address a caveat of examining generational identities. As in any other generation, it is easy to lump individuals into a one-size-fits-all category. Although the characteristics and qualities of Generation Z are described here collectively, institutional
stakeholders should never view these descriptions as definitive. It is easy to oversimplify or overgeneralize in ways that can harden into stereotypes. Individual behavior is idiosyncratic, so institutional stakeholders should be careful about making quick judgments, given the risk of confirmation bias (i.e., the tendency to favor ideas and interpretations that fit one's preexisting beliefs).

According to Howe and Strauss (2000), generations are defined by the experiences cohorts share, which can shape their behaviors. A new generation typically forms a personality of its own based on common attitudes, beliefs, and behaviors that may not be as prominent in other generations (Howe & Strauss, 2007). Sandeen (2008) suggested that a new generation typically forms about every 20 years. In this chapter, the discussion of Generation Z should be used as a guide to understanding this cohort based on its commonalities; institutional stakeholders should take every precaution not to assume that everyone in Generation Z is alike.
Understanding Generation Z

The majority of students currently attending college belong to Generation Z. They are often referred to by other names, such as iGen, post-millennials, Gen Z, the homeland generation, centennials, or the net generation (Seemiller & Grace, 2016; Selingo, 2018). These students were born from roughly 1995 onward; however, there are discrepancies about exactly when Generation Z begins, with the starting year sometimes marked as 1997 or 2000. The differences reflect people's perspectives on when the generation manifested a distinct identity. For the purposes of this chapter, 1995 will be used as the approximate year that started this new generation.

Generation Z differs from the millennials in that its members were born into circumstances that the previous generation did not confront until later in life. For instance, Generation Z grew up after the terrorist attacks of September 11, 2001, in the United States, when terrorism became more of a concern and homeland security became ingrained in the American lifestyle (Seemiller & Grace, 2016).

The majority of Generation Z are children of Generation X (i.e., those born between 1965 and 1979; the "X" refers to a generation that does not want to be defined or labeled with a specific identity; Raphelson, 2014). It is important to understand how the parents of Generation Z were raised and how their upbringing may contribute to the characteristics of Generation Z. Members of Generation X were regarded as "latchkey kids" because they often came home from school to an empty house, both parents being at work, and carried their own house keys. This, in turn, taught them independence while growing up.
Generation X is often described as being pragmatic and cynical about the world because of facing crises, such as the 2008 economic downturn,
and learning how to overcome them. This generation also has the highest rate of divorce. Generation Z has learned from its Generation X parents the value of independence and hard work. Similarly, Generation Z does not like to label itself on any parameter; however, whereas for Generation X such non-labeling was a form of rebellion, Generation Z is fluid in terms of identity and simply does not see labels as necessary.

Individuals in Generation Z are considered more technologically savvy than previous generations because they grew up with the advancements of the Internet and other forms of communication (e.g., texting, instant messaging, social media, cell phones, vlogs, and even interactivity via videogame systems). Because the Internet has become so prominent in society, information is instantly accessible, and Generation Z is bombarded with it daily. On a cultural level, Generation Z has become more sensitive to issues such as school violence, bullying, economic inequality, gay rights, immigration, legalization of marijuana, and climate change (Seemiller & Grace, 2016). With movements like #MeToo in the foreground, they are likely to show concern for the welfare of others and not just themselves.

After observing what the millennials have faced and being raised by pragmatic parents, Generation Z has a different perception of higher education. Aware of student loan debt and skyrocketing tuition, they are concerned about their ability to repay loans and to turn a degree into a job. They have seen college graduates (specifically, millennials) earn a degree yet struggle to find or keep a stable job. For institutional stakeholders, Generation Z's anxiety and skepticism about the value of higher education is a concern that should be targeted and prioritized in colleges and universities.
This issue involves how educators teach and how they help students successfully earn their degrees and get hired by employers.
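The cohort boundaries discussed in this section, with 1995 taken as the chapter's working start for Generation Z, can be sketched as a simple lookup table. The code below is an illustration of that classification under the chapter's assumed boundary years, not an official scheme:

```python
# Generational cohorts as described in the chapter; an end year of None
# means the cohort is open-ended. Boundary years are the chapter's
# working assumptions, not settled definitions.
GENERATIONS = [
    ("Baby boomers", 1946, 1964),
    ("Generation X", 1965, 1979),
    ("Millennials (Generation Y)", 1980, 1994),
    ("Generation Z", 1995, None),
]

def generation_for(birth_year):
    """Return the cohort label for a birth year, or None if out of range."""
    for label, start, end in GENERATIONS:
        if birth_year >= start and (end is None or birth_year <= end):
            return label
    return None

print(generation_for(1996))  # Generation Z
print(generation_for(1985))  # Millennials (Generation Y)
```

Writing the boundaries as data makes the chapter's caveat concrete: shifting Generation Z's start to 1997 or 2000 changes one number, not the logic.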
What Employers Seek in Their Hires

Employers are seeking more than just work experience. They are looking for college graduates who exhibit skills that institutions can teach through course activities and assignments. According to a report from the National Association of Colleges and Employers (2017), the skills and attributes employers seek out and value most highly in applicants include problem-solving, collaborating in team settings, communicating effectively through writing, demonstrating leadership, and exhibiting a strong work ethic. Although a strong grade point average and involvement in extracurricular activities (e.g., volunteering) can help, these qualities are less decisive in hiring. In examining learning outcomes, institutional stakeholders should continue to expand on characteristics that promote these desired attributes, which are often built through practice in and outside of the classroom. In addition, institutional stakeholders should
guiding gener ation z’s futur e
123
encourage students to find internships and work experience that can help build the skills and attributes that employers expect.

In an online survey conducted by Hart Research Associates (2018), business executives and hiring managers from various private and nonprofit companies were asked about their perspectives on higher education and employment success. These employers believed that a degree from a college or university is a worthwhile investment and equips students with a range of skills and knowledge. They believed that after earning a degree, individuals should show competency in areas such as oral and written communication, critical thinking, ethical judgment, effective teamwork, self-motivation, independence, and real-world application of their knowledge. However, the results indicated that college graduates do not fully meet employers' expectations; students are underprepared and need more training in these skills.

Most studies of what employers seek in their hires have primarily targeted millennials because those in Generation Z are still in college or have only recently graduated; further assessment is needed to investigate their career success. However, employers have observed some strengths and weaknesses among millennial job applicants. One of millennials' greatest strengths is their ability to immerse themselves in technology and the Internet and to network socially on a virtual level. Because they are so attuned to online connection, though, they may lack the ability to communicate in person. This relationship between technology and communication should be a target concern, not only for millennials but for Generation Z as well.
Educators must teach students to balance their use of technology with the ability to work independently of it so that they eventually become well-rounded employees.
Student Learning Opportunities for Career Success

Higher education is a constantly changing teaching enterprise, especially with the innovation of technologies (e.g., iClickers, presenter-track cameras) and learning spaces (e.g., online learning, blended classes, and virtual reality). These changes have prompted educators to think differently about ways to engage students creatively in their learning. Institutional stakeholders must realize that the training they experienced in the past may not translate the same way to a new generation of college students. Evaluating student learning requires educators to reassess their own pedagogical approach on an ongoing basis. Additionally, educators must find a balance between using technological resources and teaching skills beyond what those technologies offer. In preparing Generation Z's future, educators must
set learning outcomes that promote a well-grounded and meaningful experience that helps students obtain jobs successfully.

A common framework in the educational setting is Bloom's taxonomy, proposed by Benjamin Bloom and his colleagues (1956) to assess student learning. The model was revised in 2001 and continues to help educators frame learning objectives around important learning skills (Anderson & Krathwohl, 2001). It consists of two dimensions: (a) knowledge (i.e., the types of information students obtain) and (b) cognitive processes (i.e., the activities involved in obtaining that knowledge) (see Krathwohl, 2002).

The knowledge dimension can be divided into four categories: (a) factual knowledge (the terms and details necessary to know about a subject), (b) conceptual knowledge (the theories and principles involved), (c) procedural knowledge (the methods or step-by-step techniques associated with the subject), and (d) metacognitive knowledge (awareness of one's own thinking about a subject). These four categories are important because they provide the content on which cognitive skills operate. Although the knowledge dimension is closely tied to the cognitive process dimension, this chapter focuses on the latter.

In the revised Anderson and Krathwohl (2001) version of Bloom's taxonomy, the cognitive process dimension distinguishes action-oriented skills that examine the depth of students' knowledge. Specifically, there are six categories that promote higher-level learning.
They are expressed as verbs: (a) remembering (knowledge stored for recall and recognition), (b) understanding (meaning produced through explaining and interpreting), (c) applying (knowledge implemented in a situation), (d) analyzing (the ability to break down information and make connections), (e) evaluating (the process of critiquing and justifying a point of view), and (f) creating (the practice of designing and constructing new work). Berger (2018) has argued that educators often fail to see these six categories as a cumulative hierarchy, instead treating them as mutually exclusive, which is incorrect. Each category should be understood as a deeper level of cognitive learning that builds on and integrates the skills that precede it.

In examining the cognitive process dimension, educators should consider what employers expect from college students as they graduate with a degree. Student learning opportunities can be implemented in courses, regardless of subject, to help guide Generation Z students toward career training. The cognitive process dimension of Bloom's taxonomy can support this process by helping educators consider what types of learning objectives enable college graduates to succeed as potential employees.

Assigning projects, a practice commonly used by educators, is a useful and direct assessment of students' skills. However, these projects should go beyond content-centered learning; educators should have students demonstrate how their projects relate to real-world situations. These projects may involve prototyping
guiding gener ation z’s futur e
125
an idea, designing solutions to problems, or making connections to a topic of interest with an application-based activity. Using projects as part of the teaching curriculum pushes students outside their classroom comfort zone and has them investigate topics through outside sources. As a result, projects let students see how they can remember, understand, apply, analyze, evaluate, and create content (the cognitive processes in Bloom's taxonomy) in ways that can lead to career and lifelong learning. In addition, projects not only give students a sense of purpose, they also require students to plan and organize their work to meet deadlines and performance criteria (e.g., grading rubrics), a realistic and practical exposure to the working world.

Interestingly, employers have indicated that a portfolio of a student's applied experiences can be more helpful than other factors, such as college transcripts (Hart Research Associates, 2018). Assigned projects can be a good starting point for students to recognize their body of work and build it into a portfolio of their skill set. Such projects may be especially helpful for Generation Z students who are in college full time and have yet to obtain an internship or job experience. Note that employers look more favorably on applicants who have had work experience in the field; getting students to reflect on their work may nudge them toward seeking further experience in their discipline. Regardless of whether students are taking an introductory or a capstone course, projects that can be made into a portfolio may help them find greater purpose in their work. Portfolios provide employers with information about job candidates, specifically about what they are capable of doing.
Through this process, employers are able to recognize the accomplishments of the student and evaluate how well the student has learned to apply and organize his or her knowledge in the field. Ultimately, these types of projects enable Generation Z students to participate in experiential learning (Canon & Feinstein, 2005) that can lead to career opportunities in the future.
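The six cognitive process categories described above can be sketched as a small data structure, mapping each category to sample objective verbs so that course learning objectives can be tagged by level. The verb lists below are illustrative assumptions for the sketch, not an authoritative mapping:

```python
# Cognitive process dimension of the revised Bloom's taxonomy
# (Anderson & Krathwohl, 2001), ordered from shallowest to deepest.
# The sample verbs are illustrative, not an official list.
COGNITIVE_PROCESSES = {
    "remembering": ["define", "list", "recall", "recognize"],
    "understanding": ["explain", "interpret", "summarize"],
    "applying": ["apply", "implement", "use", "demonstrate"],
    "analyzing": ["analyze", "compare", "differentiate"],
    "evaluating": ["evaluate", "critique", "justify"],
    "creating": ["create", "design", "construct", "produce"],
}

def classify_objective(objective):
    """Guess the taxonomy category from a learning objective's leading verb."""
    verb = objective.lower().split()[0]
    for category, verbs in COGNITIVE_PROCESSES.items():
        if verb in verbs:
            return category
    return None  # leading verb not found in the illustrative lists

print(classify_objective("Design a prototype for a client"))  # creating
print(classify_objective("Explain the theory to a peer"))     # understanding
```

Because the categories form a cumulative hierarchy, their order in the mapping also encodes depth: an objective tagged "creating" presumes the skills of the categories that precede it.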
Career-Building Learning Outcomes Through Metacognition

When hiring job candidates, employers look beyond the work qualifications of the individual. They expect certain basic skills, such as strong oral and written communication, the ability to solve problems and think critically on the job, and the ability to adapt easily to collaborative efforts. Although employers expect these qualities when hiring, some college students lack the training. It is therefore imperative that educators expose students to these basic skills at the very start of their academic journey. However, such skills are often difficult to assess because they are habits that manifest through practice. They can also be challenging for employers to assess, since employers typically must judge them from an applicant's resumé and interview. However, as students gradually learn the necessary
skills, they will be able, as job applicants, to provide clear examples of how they have fulfilled those qualities in the past.

As noted earlier, assigning projects can help students exhibit their ability to produce quality work and show, in a potential job portfolio, what they are capable of doing. Yet the basic skills that employers desire are often only indirectly assessed through these projects. For instance, students' ability to present a project effectively or to work productively in a group is often less evident than the work itself. Given this concern, educators must also provide a way for students to see how they have trained themselves in the basic skills that employers seek. One solution is a metacognitive approach, in which students reflect on their work and others (e.g., classmates and professors) give them constructive feedback. When examining Generation Z college students, educators should carefully consider characteristics that might influence students' ability to achieve strong basic skills, including technology use, information literacy, and teamwork.

Technology can be a blessing and a curse. Generation Z students tend to adjust to new technologies more easily than previous generations because of their exposure to them from childhood. They live in an age in which smartphones, iPads, laptops, and other portable devices are commonplace, and entertainment (e.g., Netflix, Hulu, Amazon Prime) is easily accessible, literally in the palm of their hands. Communication can happen in an instant, without people meeting in person, through texting, instant messaging, Snapchat, and Twitter. Although technology is a convenience, it is also a major distraction for students. This should concern educators, especially when examining students' basic communication skills.
Not only can the use of technology affect learning in class (e.g., a decline in academic performance due to cell phone usage in class; see Lepp, Barkley, & Karpinski, 2015), but students may also become too accustomed to being alone with their devices instead of actively engaging in face-to-face conversations. This does not mean that educators should avoid using technology to communicate with students, but they should set boundaries for when technologies may appropriately be used in class. To assess this type of learning outcome metacognitively, educators might track students' use of technology in and outside of class and have them reflect on how much they communicate face to face versus virtually. This metacognitive assessment matters because employers expect these students to communicate with others on a professional level.

Information literacy is also critical, especially considering the speed at which Generation Z students retrieve information from the Internet. They are bombarded with details on social media and must judge the veracity of the information given to them. In an era when "fake news" is rampant and uncertainty has bred distrust, students of Generation Z must become critical thinkers and problem solvers. Educators must instill in them the need to independently verify information
through multiple sources and not accept a single perspective as the be-all and end-all explanation. In a metacognitive approach, educators should have students reflect on how they obtain their information and assess how they form their own perspectives on the issues they face. Across disciplines, educators must also take into consideration the frames of mind that others may have regarding a given issue and make sure that students evaluate how those differences can be resolved. Similarly, information literacy is important to employers, who want to see that their employees are able to synthesize different insights into a concern and identify possible solutions based on the available resources. Employers also value teamwork, and Generation Z should be able to work with others, not only online but in person as well. According to Seemiller and Grace (2016), students of this generation enjoy working on their own but are willing to collaborate with others. For educators, it is important to help students realize that teamwork is a group effort and requires accountability. This means having students delegate responsibilities within a group and show leadership for the tasks that they are given. Obstacles might arise, such as an individual not doing his or her part or not cooperating with the goal of the group. These issues are often difficult to experience but can teach students that collaboration requires creativity in finding solutions to group problems. In addition, group projects require students to meet not only in a classroom setting but also outside of class sessions. Educators can use group projects as a metacognitive opportunity for students to reflect on their strengths and weaknesses as individuals working within a group. By understanding their own behaviors in group settings, students can improve their collaborative abilities and offer examples to employers of how they have managed to be productive in a team.
Conclusion

A great concern when it comes to institutions of higher education and their ability to adapt to skills training is their slowness to evolve. Often, an educator's established way of teaching may not translate to the skills expected in the job sector today. Some may focus heavily on scholarship without integrating application-based activities that reflect real-world situations. Regardless of an educator's expertise in a given field, he or she must critically evaluate how information is taught to students and figure out how this information can be useful for advancing students' professions in the future. McNamara (2009) contended that although there is recognition of the need for skills training to transfer into workforce readiness, this issue has not been actively targeted. Currently, there remains a skills gap between what students learn in college and what employers expect from their hires. Educators should continue to innovate in their teaching styles and stay up to date with what employers are seeking. This means taking a metacognitive approach, not only for students but for themselves as well. Educators should reflect on and reassess how much their pedagogy is connected to learning
outcomes that go beyond the field and toward students' career success. Just as educators must figure out ways to accommodate students' styles of learning, students must also adapt to learning in relatively unfamiliar ways. Such adaptation may include learning to collaborate and communicate with others face-to-face, skills highly valued by employers (Hart Research Associates, 2018). As incoming Generation Z students enroll in colleges and universities, institutional stakeholders must stay ahead of the game by learning new ways to actively engage students while teaching them the skills necessary to succeed in the employment world. Although technology continues to advance, institutional stakeholders should not rely fully on one resource to develop the skills that employers seek. In fact, an overreliance on technology may cause a paradox of progress: students may become overly dependent on technology and struggle to find solutions without it. Twenty-first-century educators must focus on the development of critical thinking, problem-solving, and team-building skills through various means of learning, as well as help to foster students' abilities to apply knowledge in the classroom and beyond. Although general education was not the focus of this chapter, this curriculum can be an ideal vehicle for translating learning outcomes into career outcomes for Generation Z and many generations to come.
Notes

1. Correspondence concerning this book chapter should be addressed to Kris Gunawan, 400 Jefferson Street, Hackettstown, NJ 07840, or [email protected].
References

Anderson, L. W., & Krathwohl, D. R. (Eds.). (2001). A taxonomy for learning, teaching, and assessing: A revision of Bloom's taxonomy of educational objectives. New York, NY: Longman.
Berger, R. (2018, September 25). We learn by doing: What educators get wrong about Bloom's taxonomy. Education Week. Retrieved from https://www.edweek.org/ew/articles/2018/09/26/we-learn-by-doing-what-educators-get.html
Bloom, B. (Ed.). (1956). Taxonomy of educational objectives: The classification of the educational goals. New York, NY: David McKay.
Bureau of Labor Statistics. (2017, October). Projections overview and highlights, 2016–2026. Retrieved from https://www.bls.gov/opub/mlr/2017/article/projections-overview-and-highlights-2016-26.htm
Bureau of Labor Statistics. (2018, April). Fastest-growing occupations. Retrieved from https://www.bls.gov/ooh/fastest-growing.htm
Cannon, H. M., & Feinstein, A. H. (2005). Bloom beyond Bloom: Using the revised taxonomy to develop experiential learning strategies. Developments in Business Simulations and Experiential Learning, 32, 348–356.
Fry, R. (2018, April 11). Millennials are the largest generation in the U.S. labor force. Pew Research Center Fact Tank. Retrieved from http://www.pewresearch.org/fact-tank/2018/04/11/millennials-largest-generation-us-labor-force/
Hart Research Associates. (2018). Fulfilling the American dream: Liberal education and the future of work. Retrieved from https://www.aacu.org/sites/default/files/files/LEAP/2018EmployerResearchReport.pdf
Howe, N., & Strauss, W. (2000). Millennials rising: The next generation. New York, NY: Vintage Books.
Howe, N., & Strauss, W. (2007). Millennials go to college (2nd ed.). Great Falls, VA: LifeCourse Associates.
Kasperkevic, J. (2017, July 4). Study: Millennials still struggling with student debt and underemployment. Marketplace. Retrieved from https://www.marketplace.org/2017/07/04/economy/study-millennials-still-struggling-student-debt-and-underemployment
Krathwohl, D. R. (2002). A revision of Bloom's taxonomy: An overview. Theory into Practice, 41(4), 212–218.
Lepp, A., Barkley, J. E., & Karpinski, A. C. (2015). The relationship between cell phone use and academic performance in a sample of U.S. college students. Sage Open, 5(1), 1–9. doi:10.1177/2158244015573169
McNamara, B. R. (2009, May). The skill gap: Will the future workplace become an abyss? Techniques: Connecting Education and Careers, 84(5), 24–27.
National Association of Colleges and Employers. (2017). Job outlook 2016: The attributes employers want to see on new college graduates' resumes. Retrieved from http://www.naceweb.org/career-development/trends-and-predictions/job-outlook-2016-attributes-employers-want-to-see-on-new-college-graduates-resumes/
Raphelson, S. (2014, October 6). From GIs to Gen Z (or is it iGen?): How generations get nicknames. NPR. Retrieved from https://www.npr.org/2014/10/06/349316543/don-t-label-me-origins-of-generational-names-and-why-we-use-them
Sandeen, C. (2008). Boomers, Xers, and millennials: Who are they and what do they really want from continuing higher education? Continuing Higher Education Review, 72, 11–31.
Seemiller, C., & Grace, M. (2016). Generation Z goes to college. San Francisco, CA: Jossey-Bass.
Selingo, J. J. (2018). The new generation of students: How colleges can recruit, teach, and serve Gen Z. Washington, DC: Chronicle of Higher Education.
Chapter 12
The Future Relevance of the General Education Curriculum Kristen L. Tarantino and Madeline J. Smith
As evidenced throughout Generally Speaking, general education has had and continues to have an important presence in higher education curricula. The courses and experiences in which students take part while completing their general education requirements expose them to disciplines and ideas that have the potential to challenge their existing worldviews and contribute to their development as well-rounded members of society. In this text, institutional affiliates from around the United States provided evidence of the impact of general education on student learning as well as on culture and society at large, using a variety of assessment, survey, and other methods. However, such evidence does not remove the need for stakeholders to regularly review the effectiveness and, perhaps most importantly, the relevance of general education curricula across the higher education landscape.
Exploring Relevance and Innovation

In 2016, the Educational Testing Service (ETS) released a publication titled “The Critical Role of General Education.” This booklet largely echoed the findings conveyed in Generally Speaking, specifically with regard to the efficacy of general education curricula in exposing students to interdisciplinary topics that are critical to their development as students, citizens, and members of the workforce. Concurrently, ETS and
contributing authors also echoed the need for the routine review of the relevance of general education. As Flaherty (2016) noted, institutions across the nation share a major concern “that students don’t have much sense of what general education is supposed to be accomplishing” (para. 2). In other words, students may be aware of the general education curriculum yet view it more as a checklist to complete, or as a barrier to more interesting major coursework, than as an opportunity to evolve as contributing members of society. This concern brings us back to the issue of relevance, specifically as it pertains to students' personal and professional interests. So, how can faculty, administrators, and other stakeholders actively and collaboratively ensure the relevance of general education curricula in the 21st century and beyond? The Harvard University General Education Review Committee (2016) cited three guiding questions during a revamp of the institution's own general education curriculum: “What does my area of inquiry have to offer to the society or culture at large? What does a student, who might otherwise have no further education in my area of academic inquiry, need to know in order to appreciate this value? How, in particular, will knowing these things help a student think differently about his or her ethical decisions or approach differently his or her contributions to civil discourse and action?” (p. 6). As alluded to above, these questions serve as a guide for developing general education courses. Going further in our discussion of relevance, though, we must also consider potential innovations in general education curricula for the 21st century and beyond.
In The End of College, Carey (2015) references higher education’s “race to revolution,” otherwise known as the current efforts that institutions across the nation are making to innovate in their curricula in order to remain competitive and relevant in an era when college costs continue to soar while graduation and employment rates struggle. Carey (2015) alludes to the notion that many institutions are selecting their general education curriculum as an ideal area in which to innovate given its inherent flexibility. While major coursework tends to be more prescriptive, general education courses and experiences can draw from an array of disciplines and can largely be completed at any point during an undergraduate student’s academic career. As for ideas for innovation, examples abound across the higher education landscape. The EAB (formerly known as the Education Advisory Board) recently highlighted the strategies that three institutions are employing to innovate in their general education curricula. Among these institutions, Northern Illinois University is clustering general education courses around themes aligned with its institutional mission. Further, Virginia Tech is merging experiential learning and general education requirements, while the University of Colorado, Colorado Springs recently developed a Bachelor of Innovation program that provides “an alternative to the traditional BA or BS degree that students can take in a variety of disciplines ranging from the pre-professional to the liberal arts” (EAB, 2017, para. 5). The options for innovation in general education are seemingly endless. Perhaps the most important takeaway is to choose strategies that most closely align with the institutional mission and keep pace with our increasingly digital and globalized world.
Maintaining Relevance in General Education Assessment

This text has highlighted various practices in the assessment of student learning as they relate to the general education curriculum. These practices include large-scale assessments, which allow for some cross-institutional comparison, the use of rubrics, and institutionally created measurements of student learning. Areas of future research should be concerned with how to maximize the reporting of learning findings and how to make educated comparisons. With institutions focusing on their own general education programs, the ability to see trends emerging in general education learning gains is limited. In many cases, institutions may be looking for the same outcomes, such as information literacy or quantitative reasoning. The challenge in presenting any sort of comparison, however, lies in determining whether institutions are defining outcomes in similar ways and/or employing similar methods to measure progress in those outcomes. If general education is foundational for more specialized learning (i.e., majors), then learning gained from general education should be evident in observed changes at the next level of undergraduate education. Institutions that only measure learning gains in the first two years miss opportunities to assess learning that builds on those foundational skills and knowledge. In addition, the longitudinal nature of learning suggests that the impact of general education programming may produce learning gains long after the first two years of college, or even post-graduation. Institutions must begin to address, from an assessment viewpoint, the cumulative nature of learning (i.e., learning built on previous learning). Considerations for timing assessment include measuring student learning gains at the beginning and end of the college experience, measuring before and/or after general education only, and future measurement for alumni.
The higher education landscape has grown exponentially since the early colonial colleges. In addition to brick-and-mortar institutions of all types and sizes, a growing online platform for higher education introduces future concerns about general education programming and assessment. A common concern in the discussion about online education is quality. This text previously addressed the issues involved with providing quality general education at 2-year and 4-year institutions. Many of the same concerns exist for online education (i.e., financial resources, faculty support); however, the online environment introduces other variables into the learning equation, namely the format of education delivery. In addition to the growing number of students engaged in online education, the student demographics for colleges and universities have shifted over time. Today, the increase in part-time students as well as those students who no longer fit the traditional age range (i.e., 18 to 22 years old) can impact where higher education will see learning gains. With individual experience playing a significant role in learning, undergraduate students who begin higher education study later in life may have already achieved many of the learning outcomes used by general education programs. For part-time students, who may only take one course at a time compared to a full course load for full-time
students, the learning process may not exhibit the gains typically seen for undergraduates. For example, courses that build on learning gained in earlier courses may be stretched too far apart to appropriately scaffold the learning process for part-time students. Another rapidly growing demographic is the international student population. Applying American standards for a well-rounded education to international students does not take into consideration the prior knowledge and background with which these students enter American institutions. In addition to developing outcomes that reflect the changing student demographic, appropriate measurement of learning should also be employed. For example, whereas American students are urged to paraphrase and transfer authored work into their own words, Asian cultures may find such a practice to be disrespectful. Measuring gains in the ability to compose an essay that adheres to best practices for research and crediting sources, then, may not yield the same results for an international student population. In the current structure of general education, courses that build on one another should theoretically support learning for a particular outcome. However, if a curriculum is more fluid, allowing students to take general education and major courses simultaneously, learning gains may not be as evident. Those responsible for general education programming and assessment strategies must be cognizant of the flow of students' curricula so that when assessing student learning, gains can be attributed to a particular course or curriculum. Further, institutions must consider the scaffolding of learning, in which a first-level course introduces students to an outcome like critical thinking and subsequent courses in the curriculum build on that capability, increasing the likelihood that students will further develop their critical thinking abilities.
Scaffolding student learning opportunities within the general education curriculum ensures that students continue to utilize new skills and integrate them into their everyday lives. A major challenge to the assessment of general education is an institution's perspective on general education. Institutions must determine their goal in this arena (e.g., is it to develop critical learning skills or to expose students to a variety of topics for a well-rounded education?). Without clearly defined learning outcomes, institutions may turn to distribution requirements that do not appropriately support student learning. Deciding what to measure can also be complicated by departmental goals. For example, an institution may have critical thinking as a broad learning outcome for general education, but perhaps the school of education at that institution also lists critical thinking among its outcomes. The challenge then becomes one of ownership: which department or course can be credited with producing the learning? Instead, institutions should aim for the integration of outcomes across the curriculum. Returning to the critical thinking example, a student will likely not learn critical thinking skills from one solitary course. In addition, multiple outcomes can be addressed in a single course. Institutions must give attention to the importance of planning the curriculum to incorporate more than one learning outcome in each course.
Developing a strategy for assessing general education is imperative to collecting appropriate data and determining whether learning occurred. Adopting a piecemeal approach to assessment, where each course employs its own assessment strategy to determine learning, can have detrimental effects on the general education program as a whole. First, faculty may feel as though they are not supported in performing assessments. By creating and reporting their own assessments, faculty may not see the larger picture. Second, assessing a general education curriculum piecemeal does not reflect the integrative nature of learning. If general education builds on itself, a seamless assessment program would reflect this. A final aspect of relevant assessment that requires further examination is the opportunity for measurement error in learning assessment. Institutions must ensure that they identify the skills or qualities that they want students to learn and measure those specific skills. If outcomes are identified yet assessments do not align with a particular outcome, the inferences made from any data will be flawed. Further, students' skills and knowledge prior to course enrollment also play a role in whether data appropriately measure gains. Failure to account for students' prior knowledge can undermine the validity of outcome results and should be taken into consideration when developing a general education assessment plan.
Conclusion

Despite its demonstrated effectiveness, general education has often been stigmatized as a mere collection of boxes to check. As we move further into the 21st century, colleges and universities must consider the larger impact of general education curricula on student learning as well as on culture and society if these curricula are to remain relevant. This text has presented innovative practices and recommendations that all stakeholders, both present and future, can use to analyze the state of their own general education curricula. Positioned at the forefront of learning and knowledge creation, the field of higher education has an obligation to ensure that its students are not simply checking boxes. Rather, students should be developing skills and knowledge during their undergraduate years that will guide them to graduation, into careers, and throughout their lives—generally speaking.
References

Carey, K. (2015). The end of college. New York, NY: Riverhead Books.
EAB. (2017, November 8). How three institutions are rethinking general education. Retrieved from https://www.eab.com/research-and-insights/academic-affairs-forum/expert-insights/2017/how-three-colleges-are-rethinking-general-ed
Educational Testing Service. (2016). The critical role of general education. Princeton, NJ: Educational Testing Service.
Flaherty, C. (2016, March 10). Rethinking gen ed. Inside Higher Ed. Retrieved from https://www.insidehighered.com/news/2016/03/10/undergraduate-curricular-reform-efforts-harvard-and-duke-suggest-theres-no-one-way
Harvard University General Education Review Committee. (2016). General Education Review Committee final report. Cambridge, MA: Harvard University.
Index

A
Accreditation 9, 11, 23, 25, 31, 33, 44, 54, 56, 73, 74, 81, 85, 108–110, 141–145
  New England regional accreditation 108
  Southern Association of Colleges and Schools Commission on Colleges 44, 48, 84, 85, 88, 143
  WASC Senior College and University Commission 73, 81
Alignment 11, 15, 20, 26, 39, 73, 74, 93, 109
Assessment
  Approach 24
  Closing the loop 3, 4, 20, 23–25, 27–30, 32, 48, 76, 79
  Course-embedded 15, 39, 67, 86
  Direct 13, 15, 44, 45, 55, 75, 77, 78, 124
  Indirect 12, 45, 75, 77, 78
  Innovative approaches 15
  Standardized tests 11, 66, 67, 69, 86, 103
Assessment Results 8, 24–31, 37, 45, 46, 67, 77, 78, 80, 87, 93
Assessment Strategies 8–13, 15, 47, 134, 135
  Course-level 19, 26
  Program-level 19, 74, 143
  System-level 19
Association of American Colleges and Universities 15, 24, 47, 59, 60, 66, 67, 69, 88, 90, 94, 107, 108
  VALUE Institute 15, 21
  VALUE Rubric 15, 26, 32, 59, 69, 91, 94, 107, 114, 116

B
Bloom's Taxonomy 124, 125, 128, 129

C
CAAP 13, 18, 19
California Master Plan 72
Career Outcomes 3, 4, 114, 119, 128
Carnegie Foundation 71
  for the Advancement of Teaching 71
  Carnegie classification 83
Civic Dimension 4, 61–64, 68
  Civic experience 62, 64
  Civic information 61–63
  Civic search skills 60, 61, 63
CLA+ 13, 16, 21, 54
College of William and Mary 53, 60, 144, 145
Columbus State University 83, 85, 94, 142, 143
Commission on General Education 2, 4, 5, 61–64, 69
Community Colleges 2, 4, 33–41, 43–45, 47, 48, 57, 142
  G.I. Bill 34, 35
  Mission 35, 48
  Two-year institutions 33
Competency 19, 22, 24, 26, 28, 29, 55, 60, 67, 77, 109–113, 123
  Skills-based 28, 110
Comprehensive Program Review 84, 85
Critical Reading 4, 63, 95–100, 102–105, 143
Critical Thinking 10, 21, 28, 45, 54, 59, 65, 66, 77, 78, 85, 91, 98, 102, 103, 107, 123, 128, 134
Curriculum ix, 1–5, 7, 10–12, 14, 16, 17, 19, 20, 25, 26, 28, 31, 34–36, 38, 40, 41, 44, 47, 49–54, 56–69, 71, 72, 74–80, 84–87, 93, 97, 104, 108–110, 112–116, 119, 122, 125, 128, 131–136, 142, 144
  Curriculum mapping 65, 67, 74, 76

D
Design thinking 4, 5, 83, 89, 93, 94

E
Educational Testing Service 13, 131, 136
Evergreen State College, The 52, 53, 60, 68, 108

F
Faculty ix, 1, 3, 4, 7–21, 24–31, 34, 37, 39, 41, 44–47, 50, 52–57, 63–68, 71, 73–81, 83, 85–94, 97–104, 107, 109–115, 132, 133, 135, 142–145
  Adjunct faculty 28, 29, 39, 52, 74, 77, 79, 115, 143
  Buy-in 17, 52
  Development 11, 17, 18, 21, 31, 57, 66, 75, 80, 90, 92, 143, 145
  Engagement 9, 27, 29, 74, 75
  Faculty Learning Community 75–81
Four-Year Institutions 49, 106, 133
  Liberal arts institutions 3, 52, 53, 58, 64
  Universities 2, 8, 15, 21, 24, 25, 29, 31, 32, 34, 35, 37, 38, 47, 50–55, 58, 59, 61, 62, 65, 66, 69, 70, 72, 73, 79, 80, 83, 84, 88, 90, 91, 94, 97, 107, 116, 117, 119, 120, 122, 128, 133, 135, 143, 145

G
General Education ix, 1–5, 7, 23, 24, 26, 31–39, 41, 44–55, 57–69, 71–73, 80, 83–85, 88, 89, 93–95, 97, 98, 104, 105, 107–117, 119, 128, 131–136, 141–145
  Curricula ix, 1–4, 49, 51, 53, 54, 57–59, 61–64, 68, 71, 93, 110, 115, 116, 131–135, 144
  Definition 2
  Programming ix, 3, 4, 52, 57, 59, 60, 133, 134
Generation X 121, 122
Generation Z 4, 119–129
Goodwin College 108, 110, 111, 115–117, 144
Governance 27, 30, 52, 54, 56, 57

I
Information Literacy 10, 21, 55, 63, 97, 98, 101, 109, 112, 126, 127, 133
Institutional Mission 2, 52, 57, 58, 65, 108, 132
Integration 4, 15, 24, 29, 31, 67, 107, 112, 114, 116, 127, 134

J
Junior Colleges 33, 34

K
Kean University 95, 98–100, 102, 103, 105, 143

L
LEAP 26, 67, 90, 91, 94, 117, 129
Liberal Arts 3, 34, 35, 47, 49, 51–53, 58, 64, 65, 84, 90, 108, 114, 132, 141, 142

M
Majors 1, 14, 51, 65, 107, 114, 133
Maki, Peggy 8, 21, 22, 67, 69
Metacognition 124–127
Millennials 119–123, 129
Morrill Act 50, 58, 71, 81

N
NILOA 17, 25, 26, 32, 48, 59, 66–69, 80, 114, 115
North Carolina Community College 4, 34, 39–41, 43, 48

O
Online Education 133

P
Penn Commission 1
Private Institutions 56, 57
Public Institutions 33, 56, 57, 108

R
Research Institutions 50, 52, 53, 58, 62
Resources ix, 8, 25, 27, 29–31, 33, 45–47, 53, 54, 56, 58, 59, 88, 90, 101, 103, 107, 109, 115, 116, 123, 127, 133
  Financial 10, 33, 37, 38, 49, 53, 54, 56, 64, 65, 84, 116, 133
  Personnel 29, 53, 54, 56, 87
  Time 98, 133
Rubrics 13–15, 17, 19, 20, 24–27, 32, 46, 54, 59, 67, 69, 76–78, 88–92, 94, 98, 103, 107, 108, 111, 114, 116, 125, 133

S
Scaffolding 66, 134
Skills Gap 35, 127, 129
Stakeholders 2, 3, 7–9, 12, 21, 24, 27, 29, 30, 66, 68, 108, 109, 119–123, 128, 131, 132, 135
Student Artifacts 9, 10, 16, 19, 20, 24, 25, 38, 46, 59, 66, 86–93
Student Demographics 24, 36, 49, 58, 133
  First-year 98, 100, 101, 105
  International 134
  Part-time 133, 134
Student Learning Outcomes 3, 7, 11, 13, 16, 18, 20, 21, 23–26, 31, 32, 37, 44–46, 48, 53–56, 58, 59, 65–69, 72–74, 80, 84–87, 90–92, 98, 103, 113–117, 119, 120, 122, 124, 125, 127, 128, 133, 134, 141, 142, 144

T
Teaching 9, 12, 14, 17, 21, 23, 26, 28, 30–32, 38, 48, 52, 63, 66, 71, 74, 75, 77–81, 85, 87, 90–92, 95, 98, 100–102, 104, 111, 113, 115, 117, 123, 125, 127, 128, 141–145
Transparency 24, 29, 46, 58, 68, 69
Two-Year Institutions 33

U
University of California 2, 5, 54, 60, 61, 69, 72, 80, 145
University of Maryland Baltimore County 60
University of North Dakota 16–18, 141–143, 145
University of South Dakota 18, 20, 141
University System of Georgia 83–85, 88, 93, 94
Contributors
Angelia (Angie) G. Adams, Ed.D., has worked in education since 1998. She began
her career as a public school teacher but has taught in a community college since 2006. Adams, who at one time was a community college student herself, now serves as general education outcomes director, Humanities and Social Sciences department chair, and sociology professor at Richmond Community College in Hamlet, North Carolina. As a part of this role, Adams collects and assesses data from the college for the purpose of implementing improvement plans. Additionally, she earned an Ed.D. in higher education leadership from East Carolina University. She participated in the publishing of Sociology: A Practical and Realistic Methodology and is coauthor of Navigating Success at Richmond Community College and in Your Career (2016). Lisa K. Bonneau, Ph.D., is the director of assessment and the accreditation liaison
officer at the University of South Dakota. She earned a Ph.D. in zoology from Oklahoma State University. Over the course of her career, she has taught at multiple institution types (community college, 4-year state schools, and 4-year private liberal arts schools) and prefers teaching biology courses centered on general education and introductorylevel instruction. Her current interests focus on student learning and assessment in natural sciences. Kris Gunawan, Ph.D., is an assistant professor of psychology at Centenary Univer-
sity and the former chair of the Learning Outcomes Assessment Committee. He earned his Ph.D. in experimental psychology with a cognitive science emphasis at the University of Nevada, Las Vegas. His research focuses on pedagogical effectiveness, memory, and discourse processing. Devon G. Hall, Ed.D., is the dean of the Applied Sciences and Engineering de-
partment at Richmond Community College in Hamlet, North Carolina. Richmond
142
gener ally speaking
Community College serves a rural and economically challenged portion of the state. He has been employed at the college since 1993 in various faculty and administrative positions. Dr. Hall is a graduate of Miami-Dade College, one of the largest urban community colleges in the nation, giving him a unique perspective on the varying roles that both rural and urban community colleges play in general education. He is also the author of An Introduction to Business: From a North Carolina Perspective. Joan Hawthorne, Ph.D., recently retired from the University of North Dakota
(UND), where she served as director of assessment and accreditation. A graduate of M.S. and Ph.D. programs at the University of Colorado and UND, she was previously the coordinator for writing across the curriculum and the writing center at UND, where she also taught graduate courses in education and undergraduate courses in honors and English. Dr. Hawthorne serves as an assessment mentor for the HLC Assessment Academy and has frequently presented and published on assessment, particularly in the context of general education programs.

Jeremy Ashton Houska, Ph.D., is the director of institutional research and assessment at Centenary University. Formerly on the faculty at Centenary (associate professor of psychology), he has published in the areas of social, personality, sport, and cognitive psychology. As a faculty member at small liberal arts colleges, he spent most of his creative energies conducting scholarship in the areas of teaching and learning. He now oversees Centenary’s institution-wide assessment plan, contributes to strategic planning efforts and accreditation, and serves campus units and academic programs as they plan effectiveness studies.

Tim Howard, Ph.D., serves as associate dean in the College of Letters and Sciences and professor of mathematics at Columbus State University, where he joined the faculty in 1995. He served on the General Education Committee for 10 years and chaired the committee twice. He has authored papers on teacher recruitment and preparation, assessment of tutorial services, peer instruction, graph theory, and nonlinear analysis. He earned a bachelor’s degree in applied mathematics from Brescia University and a master’s and doctorate from the Georgia Institute of Technology.

Mary Kay Jordan-Fleming, Ph.D., is professor of developmental psychology at
Mount St. Joseph University in Cincinnati, Ohio. From 2002 to 2017, she served as academic assessment coordinator for her institution, earning the Excellence in Assessment designation in 2016 from the National Institute for Learning Outcomes Assessment. For the past two years, she has served as reviewer for the Excellence in Assessment award program.
Anne Kelsch, Ph.D., is director of faculty and staff development and professor of
history at the University of North Dakota. She earned her Ph.D. in history from Texas A&M University and has worked in faculty development for over a decade, focusing in particular on new faculty. Dr. Kelsch has served as a senior fellow with the Association of American Colleges and Universities and has published and presented frequently on faculty development and student learning.

Doug Koch, Ph.D., is currently the vice provost of academic programs and services at the University of Central Missouri. Prior to this position, he was briefly the associate dean for the College of Health, Science, and Technology and was previously the chair of the School of Technology. His degrees are in education, specifically technology education, and his interests range from program-level assessment and authentic assessment to problem solving and operational efficiency. In his current role, he is responsible for university assessment, Higher Learning Commission accreditation, program review, and several other academically related functions.

Bridget Lepore, M.A., has many years of experience in higher education in the areas of learning and student learning assessment. After working in professional development and technology in both the corporate and academic fields, she joined Kean University as a full-time lecturer, working with general education students. She currently teaches critical reading, research and writing, and first-year and transfer seminar courses. In addition, she is responsible for coordinating the assessment process for Kean’s General Education program, including data collection, faculty support, and analysis. Lepore’s current research includes college reading skills, teaching practices focused on community building, and professional learning and development.

Kimberly McElveen, Ed.D., is the assistant vice president of institutional assessment at Columbus State University. She has been a part-time faculty member for 25 years at three different institutions in the areas of political science, communicating in a business environment, leadership, and higher education issues and special topics. McElveen has authored publications and presented at conferences at the national level on topics such as student retention, leadership, assessment, using data to make decisions, and higher education issues. She serves as a peer reviewer for the Journal of Student Affairs Research and Practice and is the author of the book Higher Education Resiliency for At-Risk Students. She also serves as an on-site reviewer for the Southern Association of Colleges and Schools Commission on Colleges. McElveen received a bachelor of arts in political science from the College of Charleston, a master of science in management from Troy University, and a doctorate in higher education administration from Georgia Southern University. Her areas of special interest include higher education administration, leadership, strategic planning, institutional assessment, general education, and institutional accreditation.
Nhung Pham, Ph.D., is currently completing a postdoc in assessment and accreditation. She supports the University of Central Missouri’s institutional and program assessment and is a university steering member of the Higher Learning Commission (HLC) accreditation committee. Pham is also a peer reviewer for the HLC. Her degrees include a Ph.D. in curriculum and instruction and a master’s in higher education administration. Her research interests are the assessment of student learning outcomes, institutional effectiveness, and faculty qualifications.
Henriette M. Pranger, Ph.D., is the assistant vice president for institutional effectiveness, former dean of faculty, and General Education department chair at Goodwin College. She recently published “Ensuring Student Success: A Systematic Approach to Specialized Programmatic Accreditation” with Drs. Paula Dowd and Kelli Goodkowsky in the Association for the Assessment of Learning in Higher Education’s journal, The Intersection of Assessment and Learning. Prior to working at Goodwin College, Dr. Pranger taught humanities and technology courses at a local community college, where she also received an award for teaching excellence. She holds a bachelor of arts in philosophy from Trinity College in Hartford, Connecticut. She earned her master’s and doctorate degrees in adult and vocational education from the University of Connecticut.

Yue Adam Shen, Ph.D., has broad experience in higher education assessment, from
career development and student success to general education curricula. She has a Ph.D. in educational methodology, policy, and leadership from the University of Oregon. In addition to assessment, she is also an experienced quantitative methodologist and data analyst. Shen is currently taking a gap year from higher education to work as a product analyst in the San Francisco Bay Area.

Madeline J. Smith, Ph.D., has nearly a decade of experience in the field of higher education, specifically in the areas of academic program development and student learning outcomes assessment. After starting her career in the academic affairs division of the Ohio Department of Higher Education, Smith completed a Ph.D. in educational policy, planning, and leadership with an emphasis in higher education administration from the College of William and Mary. She subsequently served as the assistant director of assessment at Christopher Newport University and the University of Georgia. Smith also served as the manager of assessment, data, and research at Johns Hopkins University prior to returning to the University of Georgia to become the director of assessment. She has been published in the Journal of College Student Development and was a contributing author to The Dynamic Student Development Meta-Theory: A New Model for Student Success.

Su Swarat, Ph.D., currently serves as the assistant vice president for institutional
effectiveness at California State University, Fullerton (CSUF), where she provides leadership and oversees operations of institutional and programmatic assessment, quality assurance, institutional research, and analytical and educational research. Prior to this position, Swarat served as the director of assessment and educational effectiveness at CSUF, working collaboratively with campus partners to establish an effective and sustainable campus-wide assessment process and to facilitate a culture of assessment that supports teaching and learning. She also supports several scholarship of teaching and learning (SoTL) initiatives, as well as university and discipline accreditation. Before CSUF, Swarat served in similar capacities leading assessment, evaluation, and SoTL research at the Southwest College of Naturopathic Medicine and Health Science and at Northwestern University. She received her Ph.D. in learning sciences from Northwestern University. She also holds a master’s degree from Purdue University and a bachelor’s degree from Peking University, both in biology.

Kristen L. Tarantino, Ph.D., is an academic coach and editor at Heartful Editor,
as well as an independent researcher in the field of higher education. Having worked with college students for over 10 years, she has conducted and published research on how students make meaning from their college experiences, including personal traumatic events as well as participation in institutionally supported programming. Her research interests center on the factors that influence student learning and how to appropriately measure learning gains. She has taught at the College of William and Mary and Old Dominion University, specializing in assessment for college student learning. She holds a Ph.D. in educational policy, planning, and leadership with a higher education emphasis and a certificate in college teaching from the College of William and Mary.

Alison M. Wrynn, Ph.D., served as the director of undergraduate studies and general education at California State University (CSU), Fullerton from 2014 to 2016, where she was involved in the initial formation of sustainable general education program assessment. She is currently the interim assistant vice chancellor for academic programs and faculty development and interim state university dean for academic programs at the CSU Office of the Chancellor, where she previously was the state university associate dean for academic programs. Prior to this, Wrynn was a professor of kinesiology at CSU Long Beach for 14 years, where she taught and assessed numerous general education courses. Wrynn holds a B.S. in physical education from Springfield (Massachusetts) College, an M.A. in physical education from CSU Long Beach, and a Ph.D. in human biodynamics from the University of California, Berkeley.

Ryan Zerr, Ph.D., is a professor of mathematics and the director of general education at the University of North Dakota, where he has been for the past 16 years. He earned his Ph.D. from Iowa State University and has held various administrative positions at the department, college, and university levels. His nonmathematical work has focused largely on general/liberal education, and he has worked in various ways with both the Association of American Colleges & Universities and state-level organizations to advance the cause of liberal education.